Advanced Features. Jana Kosecka. Slides from: S. Thrun, D. Lowe, Forsyth and Ponce.

Advanced Features: Topics. Advanced features and feature matching: template matching, SIFT features, Haar features.

Features for Object Detection/Recognition. Want to find a given object template somewhere in the image.

Template Convolution. Pick a template - a rectangular/square region of an image. Goal - find it in the same image, or in images of the same scene taken from a different viewpoint.

Convolution with Templates (Matlab)

% read image
im = imread('bridge.jpg');
bw = double(im(:,:,1)) ./ 255;
imshow(bw)

% apply the FFT and invert it to check the round trip
FFTim = fft2(bw);
bw2 = real(ifft2(FFTim));
imshow(bw2)

% define a simple derivative kernel
kernel = zeros(size(bw));
kernel(1, 1) = 1;
kernel(1, 2) = -1;
FFTkernel = fft2(kernel);

% apply the kernel in the frequency domain and check out the result
FFTresult = FFTim .* FFTkernel;
result = real(ifft2(FFTresult));
imshow(result)

% select an image patch and make it zero-mean
patch = bw(221:240, 351:370);
imshow(patch)
patch = patch - mean(patch(:));

% embed the patch in a full-size kernel
kernel = zeros(size(bw));
kernel(1:size(patch,1), 1:size(patch,2)) = patch;
FFTkernel = fft2(kernel);

% apply the patch kernel, normalize, and threshold the response
FFTresult = FFTim .* FFTkernel;
result = max(0, real(ifft2(FFTresult)));
result = result ./ max(max(result));
result = (result > 0.5);
imshow(result)

% alternative: direct spatial convolution
imshow(conv2(bw, patch, 'same'))

Template Convolution (figure: convolution responses on the image).

Feature Matching with templates. Given a template - find the region in the image with the highest matching score. Matching score - the result of the convolution is maximal (or use SSD, SAD, or normalized cross-correlation similarity measures). Given a rotated, scaled, perspectively distorted version of the image, can we find the same patch (we want invariance!)? Scaling - NO. Rotation - NO. Illumination - depends. Perspective Projection - NO.

Efficient Template Matching. Convolution theorem: $\mathcal{F}(I * g) = \mathcal{F}(I)\,\mathcal{F}(g)$. Fourier transform of g: $\mathcal{F}(g)(u,v) = \int_{\mathbb{R}^2} g(x,y)\,\exp\{-i 2\pi (ux + vy)\}\,dx\,dy$, and $\mathcal{F}$ is invertible. Convolution in the spatial domain is multiplication in the frequency domain - often more efficient when a fast FFT is available.
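
A minimal Matlab sketch of the convolution theorem in practice. The image and patch coordinates are the ones used in the code slide above; the zero-padding to the full output size is my addition, so that the frequency-domain product and the spatial conv2 result agree up to floating-point error rather than differing at the boundaries.

% Convolution theorem demo: linear convolution via FFT equals conv2, once both
% arrays are zero-padded to the full output size.
im    = imread('bridge.jpg');
bw    = double(im(:,:,1)) ./ 255;
patch = bw(221:240, 351:370);
patch = patch - mean(patch(:));        % zero-mean template

M = size(bw) + size(patch) - 1;        % full linear-convolution size

% frequency-domain product (fft2 zero-pads when given an output size)
F = fft2(bw, M(1), M(2)) .* fft2(patch, M(1), M(2));
freqResult = real(ifft2(F));

% spatial-domain convolution
spatResult = conv2(bw, patch, 'full');

% the two agree up to floating-point error
maxDiff = max(abs(freqResult(:) - spatResult(:)))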

Convolution with Templates (Matlab). Same code as on the earlier slide, with the image scaled by 1/256 instead of 1/255.

Feature Matching with templates. Given a template - find the region in the image with the highest matching score. Matching score - the result of the convolution is maximal (or use SSD, SAD, or normalized cross-correlation similarity measures). Given a rotated, scaled, perspectively distorted version of the image, can we find the same patch (we want invariance!)? Scaling? Rotation? Illumination? Perspective Projection?
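
To illustrate the matching-score idea with a normalized similarity measure, here is a minimal Matlab sketch using normxcorr2 from the Image Processing Toolbox; the image and patch coordinates are the same hypothetical ones as above. The peak of the correlation surface locates the template in the original image, but the score collapses once the image is rotated, which is the lack of invariance the questions above point at.

% Template matching with normalized cross-correlation (Image Processing Toolbox).
im    = imread('bridge.jpg');
bw    = double(im(:,:,1)) ./ 255;
patch = bw(221:240, 351:370);           % template cut from the image itself

c = normxcorr2(patch, bw);              % correlation surface, size(bw)+size(patch)-1
[~, idx] = max(c(:));
[peakRow, peakCol] = ind2sub(size(c), idx);

% convert from 'full' correlation coordinates to the template's top-left corner
topLeftRow = peakRow - size(patch,1) + 1;
topLeftCol = peakCol - size(patch,2) + 1;
fprintf('best match at (%d, %d), score %.3f\n', topLeftRow, topLeftCol, c(idx));

% the same template matched against a rotated copy of the image scores much lower
bwRot = imrotate(bw, 30, 'bilinear', 'crop');
cRot  = normxcorr2(patch, bwRot);
fprintf('best score on rotated image: %.3f\n', max(cRot(:)));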

Improved Invariance Handling. Want to find the template in the image (figure).

SIFT Features. Invariances: Scaling - Yes. Rotation - Yes. Illumination - Yes. Deformation - Not really. Provides: Good localization - Yes.

SIFT Reference: Distinctive image features from scale-invariant keypoints. David G. Lowe, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. SIFT = Scale Invariant Feature Transform.

Invariant Local Features. Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters (figure: SIFT features).

Advantages of invariant local features. Locality: features are local, so robust to occlusion and clutter (no prior segmentation). Distinctiveness: individual features can be matched to a large database of objects. Quantity: many features can be generated for even small objects. Efficiency: close to real-time performance. Extensibility: can easily be extended to a wide range of differing feature types, with each adding robustness.

SIFT On-A-Slide.
1. Enforce invariance to scale: compute difference-of-Gaussian maxima over many different scales; non-maximum suppression, find local maxima: keypoint candidates.
2. Localizable corner: for each maximum fit a quadratic function. Compute the center with sub-pixel accuracy by setting the first derivative to zero.
3. Eliminate edges: compute the ratio of eigenvalues, drop keypoints for which this ratio is larger than a threshold.
4. Enforce invariance to orientation: compute orientation, to achieve rotation invariance, by finding the dominant gradient direction in the smoothed image (possibly multiple orientations). Rotate the patch so that the orientation points up.
5. Compute feature signature: compute a "gradient histogram" of the local image region in a 4x4 pixel region. Do this for a 4x4 array of such regions. Orient so that the largest gradient points up (possibly multiple solutions). Result: feature vector with 128 values (16 histograms x 8 orientations).
6. Enforce invariance to illumination change and camera saturation: normalize to unit length to increase invariance to illumination. Then threshold all gradients, to become invariant to camera saturation.

Finding Keypoints (Corners). Idea: find corners, but with scale invariance. Approach: run a linear filter (difference of Gaussians); do this at different resolutions of an image pyramid.

Difference of Gaussians (figure: a broad Gaussian minus a narrow Gaussian equals the DoG kernel). Approximates the Laplacian (see the filtering lecture).

Difference of Gaussians (Matlab)

% visualize a narrow Gaussian, a broad Gaussian, and their difference
surf(fspecial('gaussian', 40, 4))
surf(fspecial('gaussian', 40, 8))
surf(fspecial('gaussian', 40, 8) - fspecial('gaussian', 40, 4))

% filter the image with DoG kernels of increasing size
im = imread('bridge.jpg');
bw = double(im(:,:,1)) / 256;
for i = 1 : 10
    gaussd = fspecial('gaussian', 40, 2*i) - fspecial('gaussian', 40, i);
    res = abs(conv2(bw, gaussd, 'same'));
    res = res / max(max(res));
    imshow(res);
    title(['\bf i = ' num2str(i)]);
    drawnow
end

Gaussian Kernel Size i=1 (figure: DoG response).

Gaussian Kernel Size i=2 through i=9 (figures: DoG responses as the kernel size increases).

Gaussian Kernel Size i=10 (figure: DoG response; pyramid figure labelled Resample, Blur, Subtract).

Key point localization. In D. Lowe's paper the image is decomposed into octaves (consecutively sub-sampled versions of the same image). Instead of convolving with ever larger kernels, the kernels within an octave are kept the same size. Detect maxima and minima of the difference-of-Gaussian in scale space: look at the 3x3 neighbourhood in both space and scale (26 neighbours).
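
A minimal Matlab sketch of the scale-space extrema search described above, for a single octave only. The sigma schedule, the kernel-size rule, and the 26-neighbour comparison are simplifying assumptions, not Lowe's exact implementation (no sub-pixel refinement, no contrast test).

% Detect local extrema of a difference-of-Gaussian stack across space and scale.
im = imread('bridge.jpg');
bw = double(im(:,:,1)) / 255;

sigmas = 1.6 * 2.^((0:4)/2);                 % assumed sigma schedule
ksize  = @(sig) 2*ceil(3*sig) + 1;           % kernel large enough for the sigma
dog = zeros([size(bw), numel(sigmas)-1]);
for s = 1:numel(sigmas)-1
    g1 = conv2(bw, fspecial('gaussian', ksize(sigmas(s)),   sigmas(s)),   'same');
    g2 = conv2(bw, fspecial('gaussian', ksize(sigmas(s+1)), sigmas(s+1)), 'same');
    dog(:,:,s) = g2 - g1;                    % DoG at scale s
end

% a point is a keypoint candidate if it is the max or min of its 26 neighbours
% (ties count as extrema in this simplified check)
[rows, cols, ns] = size(dog);
keypoints = [];
for s = 2:ns-1
  for r = 2:rows-1
    for c = 2:cols-1
      nbhd = dog(r-1:r+1, c-1:c+1, s-1:s+1);
      v = dog(r, c, s);
      if v == max(nbhd(:)) || v == min(nbhd(:))
        keypoints(end+1, :) = [r, c, s];     %#ok<AGROW>
      end
    end
  end
end
fprintf('%d scale-space extrema found\n', size(keypoints, 1));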

Example of keypoint detection (figure: (a) 233x189 image, (b) 832 DoG extrema, (c) 729 remaining above threshold).

SIFT On-A-Slide (recap of steps 1-6 above).

Example of keypoint detection. Threshold on the value at the DoG peak and on the ratio of principal curvatures (Harris-style test). (c) 729 keypoints left after the peak-value threshold (from 832), (d) 536 left after testing the ratio of principal curvatures.

SIFT On-A-Slide (recap of steps 1-6 above).
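
The edge-elimination test can be sketched as follows, continuing from the hypothetical dog stack and keypoints list of the earlier sketch. This follows the Hessian-based criterion in Lowe's paper, which bounds the ratio of principal curvatures without computing eigenvalues explicitly; the threshold r = 10 and the finite-difference Hessian are assumptions.

% Reject edge-like keypoints: the ratio of the principal curvatures of the DoG
% surface must stay below r.  Tr(H)^2/Det(H) grows with the eigenvalue ratio of
% the 2x2 Hessian H, so a single scalar test suffices.
r = 10;
keep = false(size(keypoints, 1), 1);
for k = 1:size(keypoints, 1)
    i = keypoints(k, 1); j = keypoints(k, 2); s = keypoints(k, 3);
    D = dog(:,:,s);                          % DoG image at this keypoint's scale

    % finite-difference Hessian at (i, j)
    Dxx = D(i, j+1) - 2*D(i, j) + D(i, j-1);
    Dyy = D(i+1, j) - 2*D(i, j) + D(i-1, j);
    Dxy = (D(i+1, j+1) - D(i+1, j-1) - D(i-1, j+1) + D(i-1, j-1)) / 4;

    trH  = Dxx + Dyy;
    detH = Dxx * Dyy - Dxy^2;

    % keep only well-localized (corner-like) keypoints
    keep(k) = detH > 0 && trH^2 / detH < (r + 1)^2 / r;
end
keypoints = keypoints(keep, :);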

Select canonical orientation. Create a histogram of local gradient directions computed at the selected scale (figure: orientation histogram over 0 to 2π). Assign the canonical orientation at the peak of the smoothed histogram. Each key then specifies stable 2D coordinates plus scale and orientation: (x, y, scale, orientation).

SIFT On-A-Slide (recap of steps 1-6 above).
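
A minimal Matlab sketch of the canonical-orientation step for a single keypoint. The 36-bin histogram, the Gaussian weighting, and the patch radius are assumptions in the spirit of Lowe's paper, not his exact code; the keypoint is assumed to lie at least radius pixels from the image border.

% Canonical orientation for one keypoint: histogram of gradient directions in a
% patch around the keypoint, weighted by gradient magnitude and a Gaussian
% window; the peak bin gives the keypoint's orientation.
function theta = canonical_orientation(L, row, col, sigma)
    % L: image smoothed at the keypoint's scale; (row, col): keypoint location
    radius = round(3 * sigma);
    [x, y] = meshgrid(-radius:radius, -radius:radius);
    P = L(row-radius:row+radius, col-radius:col+radius);

    % image gradients by central differences
    [gx, gy] = gradient(P);
    mag = sqrt(gx.^2 + gy.^2);
    ang = mod(atan2(gy, gx), 2*pi);          % directions in [0, 2*pi)

    w = exp(-(x.^2 + y.^2) / (2 * (1.5*sigma)^2));   % Gaussian weighting

    % 36-bin orientation histogram weighted by magnitude and window
    nbins = 36;
    bins  = min(floor(ang / (2*pi) * nbins) + 1, nbins);
    hist36 = accumarray(bins(:), w(:) .* mag(:), [nbins, 1]);

    % smooth the histogram circularly and take its peak as the orientation
    hist36 = conv([hist36(end); hist36; hist36(1)], [1 2 1]'/4, 'valid');
    [~, peak] = max(hist36);
    theta = (peak - 0.5) * 2*pi / nbins;
end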

SIFT vector formation. Thresholded image gradients are sampled over a 16x16 array of locations in scale space. Create an array of orientation histograms: 8 orientations x 4x4 histogram array = 128 dimensions.

Nearest-neighbor matching to a feature database. Hypotheses are generated by approximate nearest-neighbor matching of each feature to vectors in the database. SIFT uses the best-bin-first modification of the k-d tree algorithm (Beis & Lowe, 1997): a heap data structure identifies bins in order of their distance from the query point. Result: can give a speedup by a factor of 1000 while finding the nearest neighbor (of interest) 95% of the time.
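
A brute-force stand-in for the nearest-neighbor matching step (the kd-tree/best-bin-first machinery is omitted). D1 and D2 are hypothetical 128xN descriptor matrices, one column per keypoint; the 0.8 distance-ratio acceptance test is the one proposed in Lowe's paper.

% Match SIFT descriptors from two images by nearest neighbour in 128-D,
% accepting a match only if the closest descriptor is clearly better than
% the second closest (distance-ratio test).  Assumes D2 has at least 2 columns.
function matches = match_descriptors(D1, D2)
    D1 = double(D1);  D2 = double(D2);
    matches = zeros(2, 0);
    for i = 1:size(D1, 2)
        diffs = bsxfun(@minus, D2, D1(:, i));
        d = sqrt(sum(diffs.^2, 1));            % distances to all of D2
        [dsorted, order] = sort(d);
        if dsorted(1) < 0.8 * dsorted(2)       % ratio test
            matches(:, end+1) = [i; order(1)]; %#ok<AGROW>
        end
    end
end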

3D Object Recognition (figure). Extract outlines with background subtraction.

3D Object Recognition. Only 3 keys are needed for recognition, so extra keys provide robustness. The affine model is no longer as accurate.

Recognition under occlusion (figure).

Test of illumination invariance. Same image under differing illumination; 273 keys verified in the final match.

Examples of view interpolation (figure).

Location recognition (figure).

SIFT. Invariances: Scaling - Yes. Rotation - Yes. Illumination - Yes. Perspective Projection - Maybe. Provides: Good localization - Yes.

SOFTWARE for Matlab (at UCLA, Oxford): www.vlfeat.org
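
For reference, a minimal detection-and-matching sketch using the VLFeat Matlab toolbox linked above; this assumes VLFeat is installed and vl_setup has been run, and the two image filenames are placeholders.

% Detect SIFT keypoints and descriptors with VLFeat and match two images.
im1 = single(rgb2gray(imread('image1.jpg')));   % placeholder filenames
im2 = single(rgb2gray(imread('image2.jpg')));

[f1, d1] = vl_sift(im1);    % f: 4xK frames (x, y, scale, orientation); d: 128xK
[f2, d2] = vl_sift(im2);

% match descriptors; vl_ubcmatch applies Lowe's distance-ratio test
[matches, scores] = vl_ubcmatch(d1, d2);
fprintf('%d keypoints / %d keypoints, %d matches\n', ...
        size(f1, 2), size(f2, 2), size(matches, 2));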

SIFT demos. Run sift_compile and sift_demo2.