Lecture 8: Interest Point Detection. Saad J Bedros
1 Lecture 8: Interest Point Detection Saad J Bedros sbedros@umn.edu
2 Last Lecture: Edge Detection. Preprocessing of the image is desired to eliminate, or at least minimize, noise effects. There is always a tradeoff between sensitivity and accuracy in edge detection. The parameters we can set for edge detection and labeling include the size of the edge-detection mask and the value of the threshold: a larger mask or a higher gray-level threshold tends to reduce noise effects, but may result in a loss of valid edge points.
3 Example original image (Lena)
4 Compute Gradients X-Derivative of Gaussian Y-Derivative of Gaussian Gradient Magnitude
5 Get Orientation at Each Pixel: threshold the gradient magnitude at a minimum level, then get the orientation theta = atan2(gy, gx)
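The step above can be sketched in numpy. This is a minimal illustration, not the lecture's code: it uses central differences instead of derivative-of-Gaussian filters, and the function name and `min_mag` threshold are assumptions.

```python
import numpy as np

# Sketch: per-pixel gradient magnitude and orientation, with weak gradients
# suppressed by a minimum-magnitude threshold (marked NaN).
def gradient_orientation(img, min_mag=1e-3):
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # x-derivative (central difference)
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # y-derivative
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)                        # orientation at each pixel
    theta[mag < min_mag] = np.nan                     # threshold at minimum level
    return mag, theta
```

On a vertical step edge, the orientation comes out as 0 (gradient pointing along x), as expected.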
6 #6
7 Edge Thinning: Non-maximum suppression for each orientation At q, we have a maximum if the value is larger than those at both p and at r. Interpolate to get these values.
8 Today's Lecture Goal: find features across multiple images, taken from different positions or times, that can be robustly matched. Approaches: Patch/Template Matching; Interest Point Detection.
9 Applications Comparison Between 2 or more images Image alignment Image Stitching 3D reconstruction Object recognition ( matching ) Indexing and database retrieval Object tracking Robot navigation
10 What points would you choose? Known as the Template Matching method, with the option to use correlation as a metric.
11 Correlation
12 Correlation: Use cross-correlation to find the template in an image; the maximum indicates high similarity.
13 Template/Patch Matching Options. What do we do for different scales of the patch? Compute multiple templates at different sizes and match at each scale. What can we do to identify the shape of the pattern at different mean brightness levels? Remove the mean of the template. Example in 1D.
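The 1D example on this slide can be sketched as follows. This is an illustrative implementation, not the lecture's code; the function name is an assumption. Subtracting the mean from both the template and each window makes the score insensitive to a constant brightness offset.

```python
import numpy as np

# Sketch: zero-mean cross-correlation of a 1D template against a signal.
def zero_mean_xcorr(signal, template):
    t = template - template.mean()            # remove the template mean
    scores = []
    for i in range(len(signal) - len(template) + 1):
        w = signal[i:i + len(template)]
        scores.append(float(np.dot(w - w.mean(), t)))  # zero-mean window too
    return np.array(scores)

# Pattern embedded in a signal with an added brightness offset.
signal = np.array([0., 0., 1., 3., 1., 0., 0.]) + 5.0
template = np.array([1., 3., 1.])
best = int(np.argmax(zero_mean_xcorr(signal, template)))  # peak marks the match
```

Because the means are removed, shifting the whole signal by a constant leaves the scores unchanged.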
14
15 We need more robust feature descriptors for matching!!!!!
16 Interest Points Local features associated with a significant change of an image property, or of several properties simultaneously (e.g., intensity, color, texture).
17 Why Extract Interest Points? Corresponding points (or features) between images enable the estimation of parameters describing geometric transforms between the images.
18 What if we don't know the correspondences? We need to compare feature descriptors of local patches surrounding interest points.
19 Properties of good features. Local: features are local, robust to occlusion and clutter (no prior segmentation). Accurate: precise localization. Invariant/Covariant. Robust: noise, blur, compression, etc. do not have a big impact on the feature. Repeatable. Distinctive: individual features can be matched to a large database of objects. Efficient: close to real-time performance.
20 Invariant / Covariant A function f() is invariant under some transformation T() if its value does not change when the transformation is applied to its argument: if f(x) = y then f(T(x)) = y. A function f() is covariant when it commutes with the transformation T(): if f(x) = y then f(T(x)) = T(f(x)) = T(y).
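A toy numeric illustration of the two definitions (assumed, not from the slides): under a 90-degree rotation T, the image mean is invariant, while the location of the brightest pixel is covariant, i.e., it moves with the same rotation.

```python
import numpy as np

img = np.zeros((4, 4))
img[1, 3] = 9.0                                   # one bright pixel
rot = np.rot90(img)                               # T(x): rotate 90 deg CCW

# Invariant: f(T(x)) = f(x) for f = mean intensity
assert img.mean() == rot.mean()

# Covariant: the brightest-pixel location transforms with T
loc = np.unravel_index(np.argmax(img), img.shape)       # (1, 3)
loc_rot = np.unravel_index(np.argmax(rot), rot.shape)   # (0, 1) = T applied to (1, 3)
```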
21 Invariance Features should be detected despite geometric or photometric changes in the image. Given two transformed versions of the same image, features should be detected in corresponding locations.
22 Example: Panorama Stitching How do we combine these two images?
23 Panorama stitching (cont d) Step 1: extract features Step 2: match features
24 Panorama stitching (cont d) Step 1: extract features Step 2: match features Step 3: align images
25 What features should we use? Use features with gradients in at least two (significantly) different orientations. Patches? Corners?
26 What features should we use? (cont d) (auto-correlation)
27 Corners vs Edges
28 Corners in Images
29 The Aperture Problem
30 Corner Detection
31 Corner Detection - Basic Idea
32 Corner Detector: Basic Idea flat region: no change in all directions edge : no change along the edge direction corner : significant change in all directions
33 Corner Detection Using Intensity: Basic Idea Image gradient has two or more dominant directions near a corner. Shifting a window in any direction should give a large change in intensity. flat region: no change in all directions edge : no change along the edge direction corner : significant change in all directions
34 Corner Detection Using Edge Detection? Edge detectors are not stable at corners. Gradient is ambiguous at corner tip. Discontinuity of gradient direction near corner.
35 Corner Types Example of L-junction, Y-junction, T-junction, Arrow-junction, and X-junction corner types
36 Main Steps in Corner Detection 1. For each pixel in the input image, apply the corner operator to obtain a cornerness measure for that pixel. 2. Threshold the cornerness map to eliminate weak corners. 3. Apply non-maximal suppression to eliminate points whose cornerness measure is not larger than the cornerness values of all points within a certain distance.
37 Main Steps in Corner Detection (cont'd)
38 Moravec Detector (1977) Measure intensity variation at (x,y) by shifting a small window (3x3 or 5x5) by one pixel in each of the eight principal directions (horizontally, vertically, and the four diagonals).
39 Moravec Detector (1977) Calculate the intensity variation by taking the sum of squared intensity differences of corresponding pixels in the two windows, giving eight values S_W(u,v) for shifts (u,v) with u, v in {-1,0,1}, (u,v) != (0,0): S_W(-1,-1), S_W(-1,0), ..., S_W(1,1).
40 Moravec Detector (cont'd) The cornerness of a pixel is the minimum intensity variation found over the eight shift directions: Cornerness(x,y) = min{ S_W(-1,-1), S_W(-1,0), ..., S_W(1,1) }. Cornerness map (normalized). Note the response to isolated points!
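The Moravec operator of the last three slides can be sketched directly. This is a hedged illustration (function name and window size are assumptions): for each pixel, sum squared differences between a window and copies shifted in the eight principal directions; cornerness is the minimum of the eight variations.

```python
import numpy as np

# Sketch of the Moravec detector: minimum SSD over the eight one-pixel shifts.
def moravec(img, win=1):
    img = img.astype(float)
    h, w = img.shape
    shifts = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1) if (u, v) != (0, 0)]
    corner = np.zeros((h, w))
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            variations = [
                np.sum((img[y + v - win:y + v + win + 1,
                            x + u - win:x + u + win + 1] - patch) ** 2)
                for u, v in shifts
            ]
            corner[y, x] = min(variations)  # minimum over the eight directions
    return corner
```

On a bright square, the corner pixel responds while flat regions score zero; an edge pixel also scores zero, because shifting along the edge direction gives no variation, which is exactly why the minimum is taken.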
41 Moravec Detector (cont'd) Non-maximal suppression yields the final corners: zero out all pixels that are not the maximum along the direction of the gradient (look at one pixel on each side).
42 Moravec Detector (cont'd) Does a reasonable job of finding the majority of true corners, but edge points not in one of the eight principal directions will be assigned a relatively large cornerness value.
43 Moravec Detector (cont d) The response is anisotropic (directionally sensitive) as the intensity variation is only calculated at a discrete set of shifts (i.e., not rotationally invariant)
44 Corner Detection: Mathematics Change in appearance of window w(x,y) for the shift [u,v]: E(u,v) = Σ_{x,y} w(x,y) [I(x+u, y+v) − I(x,y)]². Illustrated for E(3,2).
45 Corner Detection: Mathematics Change in appearance of window w(x,y) for the shift [u,v]: E(u,v) = Σ_{x,y} w(x,y) [I(x+u, y+v) − I(x,y)]². Illustrated for E(0,0).
46 Harris Detector: Mathematics Change of intensity for the shift [u,v]: E(u,v) = Σ_{x,y} w(x,y) [I(x+u, y+v) − I(x,y)]², where w(x,y) is the window function (1 inside the window and 0 outside, or a Gaussian), I(x+u, y+v) is the shifted intensity, and I(x,y) is the intensity.
47 Harris Detector Improves the Moravec operator by avoiding the use of discrete directions and discrete shifts. Uses a Gaussian window instead of a square window.
48 Harris Detector (cont'd) Using the first-order Taylor approximation. Reminder, Taylor expansion: f(x) = f(a) + (f'(a)/1!)(x−a) + (f''(a)/2!)(x−a)² + ... + (f^(n)(a)/n!)(x−a)^n + O((x−a)^(n+1)).
49 Harris Detector (cont'd) Since I(x+u, y+v) ≈ I(x,y) + u I_x + v I_y, we get E(u,v) ≈ Σ_{x,y} w(x,y) (u I_x + v I_y)².
50 Harris Detector (cont'd) A_W(x,y) = Σ [ f_x², f_x f_y ; f_x f_y, f_y² ], a 2x2 matrix (the auto-correlation or second-order moment matrix).
51 Harris Detector General case, use a window function: S_W(Δx,Δy) = Σ_i w(x_i,y_i) [f(x_i,y_i) − f(x_i+Δx, y_i+Δy)]². Default window function w(x,y): 1 in the window, 0 outside. A_W = Σ_{x,y} w(x,y) [ f_x², f_x f_y ; f_x f_y, f_y² ].
52 Harris Detector (cont'd) Harris uses a Gaussian window: w(x,y) = G(x,y,σ_I), where σ_I is called the integration scale. A_W = Σ_{x,y} w(x,y) [ f_x², f_x f_y ; f_x f_y, f_y² ].
53 Harris Detector A_W = Σ_{x,y} [ f_x², f_x f_y ; f_x f_y, f_y² ] describes the gradient distribution (i.e., the local structure) inside the window. It does not depend on the shift [u,v].
54 Harris Detector (cont'd) Since A_W is symmetric, we can diagonalize it: A_W = R⁻¹ diag(λ₁, λ₂) R. We can visualize A_W as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R. Ellipse equation: [x y] A_W [x y]ᵀ = const; the axis of length (λ_max)^(−1/2) lies along the direction of fastest change and the axis of length (λ_min)^(−1/2) along the direction of slowest change.
55 Harris Corner Detector (cont'd) The eigenvectors v₁, v₂ of A_W encode the directions of intensity change; the eigenvalues of A_W encode the strength of intensity change.
56 Harris Detector (cont'd) Eigenvectors encode edge direction; eigenvalues encode edge strength. The direction of fastest change corresponds to axis length (λ_max)^(−1/2), the direction of slowest change to (λ_min)^(−1/2).
57 Distribution of fx and fy
58 Distribution of f x and f y (cont d)
59 Harris Detector (cont'd) (assuming that λ₁ > λ₂)
60 Harris Detector Measure of corner response: R = det(A_W) − k trace²(A_W) (k is an empirical constant, k = 0.04-0.06). (Shi-Tomasi variation: use min(λ₁, λ₂) instead of R.) Darya Frolova, Denis Simakov, The Weizmann Institute of Science.
61 Harris Detector: Mathematics R depends only on the eigenvalues of A_W. R is large for a corner (R > 0); R is negative with large magnitude for an edge (R < 0); |R| is small for a flat region.
62 Harris Corner Detector (cont'd) Classification of pixels using the eigenvalues of A_W. Corner: λ₁ and λ₂ are both large, λ₁ ~ λ₂; intensity changes in all directions. Edge: λ₁ >> λ₂ (or λ₂ >> λ₁). Flat region: λ₁ and λ₂ are small; intensity is almost constant in all directions.
63 Harris Detector (cont'd) f_x(x_i,y_i) = ∂/∂x G(x,y,σ_D) * f(x_i,y_i), f_y(x_i,y_i) = ∂/∂y G(x,y,σ_D) * f(x_i,y_i). σ_D is called the differentiation scale.
64 Harris Detector The Algorithm: Find points with large corner response function R (R > threshold) Take the points of local maxima of R
65 Harris corner detector algorithm: 1. Compute image gradients I_x, I_y for all pixels. 2. For each pixel, compute A_W by looping over its neighbors (x,y), then compute R. 3. Find points with a large corner response function R (R > threshold). 4. Take the points of locally maximum R as the detected feature points (i.e., pixels where R is larger than at all 4 or 8 neighbors). Darya Frolova, Denis Simakov, The Weizmann Institute of Science.
66 Harris Corner Detector (cont'd) To avoid computing the eigenvalues explicitly, the Harris detector uses the following function: R(A_W) = det(A_W) − α trace²(A_W), where α is a constant. In terms of the eigenvalues: R(A_W) = λ₁ λ₂ − α (λ₁ + λ₂)².
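The identity above works because, for the 2x2 matrix A_W, det(A_W) = λ₁ λ₂ and trace(A_W) = λ₁ + λ₂. A minimal sketch (the function name is an assumption), using diagonal matrices so the eigenvalues are explicit:

```python
import numpy as np

# Sketch: Harris response without explicit eigen-decomposition.
def harris_response(A, alpha=0.05):
    # det(A) = l1*l2, trace(A) = l1 + l2 for a 2x2 matrix
    return np.linalg.det(A) - alpha * np.trace(A) ** 2

corner = np.diag([10.0, 8.0])   # l1 ~ l2, both large  -> R > 0
edge = np.diag([10.0, 0.1])     # l1 >> l2             -> R < 0
flat = np.diag([0.1, 0.1])      # both small           -> |R| small
```

Evaluating the three cases reproduces the classification of the next slide: positive response at corners, negative at edges, near zero in flat regions.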
67 Harris Corner Detector (cont'd) Classification of image points using R(A_W). Corner: R > 0. Edge: R < 0. Flat region: |R| small. R(A_W) = det(A_W) − α trace²(A_W); α is usually between 0.04 and 0.06.
68 Harris Corner Detector (cont'd) Other cornerness functions: the smallest eigenvalue λ₂, and the harmonic mean det(A_W)/trace(A_W).
69 Harris Corner Detector - Steps 1. Compute the horizontal and vertical (Gaussian) derivatives: f_x = ∂/∂x G(x,y,σ_D) * f(x_i,y_i), f_y = ∂/∂y G(x,y,σ_D) * f(x_i,y_i); σ_D is called the differentiation scale. 2. Compute the three images involved in A_W: A_W = Σ_{(x,y)∈W} w(x,y) [ f_x², f_x f_y ; f_x f_y, f_y² ].
70 Harris Detector - Steps 3. Convolve each of the three images with a larger Gaussian w(x,y); σ_I is called the integration scale. 4. Determine cornerness: R(A_W) = det(A_W) − α trace²(A_W). 5. Find local maxima.
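Steps 1-4 can be sketched with numpy only. This is a hedged, self-contained illustration, not the lecture's code: the helper names are assumptions, smoothing then central differences stand in for exact derivative-of-Gaussian filters, and step 5 (thresholding plus local maxima) is left out.

```python
import numpy as np

def gaussian_kernel(sigma):
    # Separable Gaussian built by outer product, truncated at 3 sigma.
    r = int(3 * sigma)
    g = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def conv2_same(img, k):
    # 'Same'-size 2D convolution via edge padding and shifted sums.
    rh, rw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((rh, rh), (rw, rw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def harris(img, sigma_d=1.0, sigma_i=2.0, alpha=0.05):
    # Step 1: derivatives at the differentiation scale sigma_d
    img = conv2_same(img.astype(float), gaussian_kernel(sigma_d))
    fy, fx = np.gradient(img)
    # Steps 2-3: the three product images, integrated at scale sigma_i
    g = gaussian_kernel(sigma_i)
    a = conv2_same(fx * fx, g)
    b = conv2_same(fx * fy, g)
    c = conv2_same(fy * fy, g)
    # Step 4: cornerness R = det(A_W) - alpha * trace(A_W)^2
    return (a * c - b * b) - alpha * (a + c) ** 2
```

On a bright square, R is positive at the corner and negative along the edges, matching the classification slides.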
71 Harris Detector - Example
72 Harris Detector - Example
73 Harris Detector - Example Compute corner response R
74 Harris Detector - Example Find points with large corner response: R>threshold
75 Harris Detector - Example Take only the points of local maxima of R
76 Harris Detector - Example Map corners on the original image
77 Harris Detector Scale Parameters The Harris detector requires two scale parameters: (i) a differentiation scale σ_D for smoothing prior to the computation of image derivatives, and (ii) an integration scale σ_I for defining the size of the Gaussian window (i.e., integrating derivative responses). A_W(x,y) becomes A_W(x,y,σ_I,σ_D). Typically, σ_I = γ σ_D.
78 Invariance to Geometric/Photometric Changes Is the Harris detector invariant to geometric and photometric changes? Geometric Rotation Scale Affine Photometric Affine intensity change: I(x,y) a I(x,y) + b
79 Harris Detector: Rotation Invariance Rotation Ellipse rotates but its shape (i.e. eigenvalues) remains the same Corner response R is invariant to image rotation
80 Harris Detector: Rotation Invariance (cont d)
81 Harris Detector: Photometric Changes Affine intensity change: I(x,y) → a I(x,y) + b. Only derivatives are used, so the detector is invariant to the intensity shift I(x,y) → I(x,y) + b. Under intensity scaling I(x,y) → a I(x,y), the response R changes relative to a fixed threshold. Hence the detector is only partially invariant to affine intensity changes.
82 Harris Detector: Scale Invariance Under scaling, all points of a corner may be classified as edges. The detector is not invariant to scaling (nor to affine transforms).
83 Harris Detector: Repeatability In a comparative study of different interest point detectors, Harris was shown to be the most repeatable and most informative. Repeatability = # correspondences / min(m₁, m₂). Best results: σ_I = 1, σ_D = 2. C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), 2000.
84 Harris Detector: Disadvantages Sensitive to: Scale change Significant viewpoint change Significant contrast change
85 How to handle scale changes? A W must be adapted to scale changes. If the scale change is known, we can adapt the Harris detector to the scale change (i.e., set properly σ I, σ D ). What if the scale change is unknown?
86 Multi-scale Harris Detector Detects interest points at varying scales: R(A_W) = det(A_W(x,y,σ_I,σ_D)) − α trace²(A_W(x,y,σ_I,σ_D)), with scales σ_n = kⁿ σ, σ_D = σ_n, σ_I = γ σ_D.
87 How to cope with transformations? Exhaustive search Invariance Robustness
88 Multi-scale approach Exhaustive search Slide from T. Tuytelaars ECCV 2006 tutorial
89 Multi-scale approach Exhaustive search
90 Multi-scale approach Exhaustive search
91 Multi-scale approach Exhaustive search
92 How to handle scale changes? Exhaustive multi-scale detection is not a good idea! There will be many points representing the same structure, complicating matching. Note that point locations shift as scale increases. The size of the circle corresponds to the scale at which the point was detected.
93 How to handle scale changes? (cont d) How do we choose corresponding circles independently in each image?
94 How to handle scale changes? (cont'd) Alternatively, use scale selection to find the characteristic scale of each feature. The characteristic scale depends on the feature's spatial extent (i.e., the local neighborhood of pixels).
95 How to handle scale changes? Only a subset of the points computed in scale space are selected! The size of the circles corresponds to the scale at which the point was selected.
96 Automatic Scale Selection Design a function F(x,σ_n) which provides some local measure. Select points at which F(x,σ_n) is maximal over σ_n; the maximum of F(x,σ_n) corresponds to the characteristic scale. T. Lindeberg, "Feature Detection with Automatic Scale Selection", International Journal of Computer Vision, 30(2), 1998.
97 Invariance Extract patch from each image individually
98 Automatic scale selection Solution: design a function on the region which is scale invariant (the same for corresponding regions, even if they are at different scales). Example: average intensity; for corresponding regions (even of different sizes) it will be the same. For a point in one image, we can consider the function value as a function of region size (patch width).
99 Automatic scale selection Common approach: take a local maximum of this function. Observation: the region size at which the maximum is achieved should be invariant to image scale. Important: this scale-invariant region size is found in each image independently!
100 Automatic Scale Selection f(I_{i1...im}(x, σ)) = f(I'_{i1...im}(x', σ')): same operator responses if the patch contains the same image up to a scale factor.
101 Automatic Scale Selection Function responses for increasing scale (scale signature): f(I_{i1...im}(x, σ)) compared with f(I'_{i1...im}(x', σ')).
107 Scale selection Use the scale determined by detector to compute descriptor in a normalized frame
108 Automatic Scale Selection (cont'd) The characteristic scale is relatively independent of the image scale (example scale factor: 2.5). The ratio of the scale values corresponding to the maxima equals the scale factor between the images. Scale selection allows finding a spatial extent that is covariant with the image transformation.
109 Automatic Scale Selection What local measures should we use? Should be rotation invariant Should have one stable sharp peak
110 How should we choose F(x,σ_n)? Typically, F(x,σ_n) is defined using derivatives, e.g.: Square gradient: σ² (L_x²(x,σ) + L_y²(x,σ)). LoG: σ² |L_xx(x,σ) + L_yy(x,σ)|. DoG: |I(x)*G(σ_n) − I(x)*G(σ_{n−1})|. Harris function: det(A_W) − α trace²(A_W). C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), 2000.
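Scale selection with the scale-normalized LoG can be sketched as follows. This is an assumed illustration, not the lecture's code: it evaluates σ²∇²G at one point over a range of scales and picks the maximum. For a binary disc of radius r, the characteristic scale lands near r/√2.

```python
import numpy as np

# Sketch: scale signature F(x, sigma_n) using the scale-normalized LoG.
def log_response(img, x, y, sigmas):
    vals = []
    for s in sigmas:
        r = int(3 * s)
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        r2 = xx ** 2 + yy ** 2
        # scale-normalized LoG kernel: sigma^2 * Laplacian of Gaussian
        log_k = ((r2 - 2 * s * s) / (2 * np.pi * s ** 4)) * np.exp(-r2 / (2 * s * s))
        patch = img[y - r:y + r + 1, x - r:x + r + 1]
        vals.append(abs(float(np.sum(patch * log_k))))
    return np.array(vals)

# A disc of radius 6 centered at (20, 20); expected peak near 6/sqrt(2) ~ 4.2.
img = np.zeros((41, 41))
yy, xx = np.mgrid[:41, :41]
img[(yy - 20) ** 2 + (xx - 20) ** 2 <= 36] = 1.0
sigmas = np.array([2., 3., 4., 5., 6.])
resp = log_response(img, 20, 20, sigmas)
peak = float(sigmas[int(np.argmax(resp))])
```

The maximum of the signature over σ_n gives the characteristic scale of the blob, found independently of any second image.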
111 Review Patch/Template Matching Interest Points Corner Detection: Harris Detector Properties of Harris Detector Scaling approach
More informationFeature extraction: Corners and blobs
Featre etraction: Corners and blobs Wh etract featres? Motiation: panorama stitching We hae two images how do we combine them? Wh etract featres? Motiation: panorama stitching We hae two images how do
More informationFeature Tracking. 2/27/12 ECEn 631
Corner Extraction Feature Tracking Mostly for multi-frame applications Object Tracking Motion detection Image matching Image mosaicing 3D modeling Object recognition Homography estimation... Global Features
More informationTRACKING and DETECTION in COMPUTER VISION Filtering and edge detection
Technischen Universität München Winter Semester 0/0 TRACKING and DETECTION in COMPUTER VISION Filtering and edge detection Slobodan Ilić Overview Image formation Convolution Non-liner filtering: Median
More informationWavelet-based Salient Points with Scale Information for Classification
Wavelet-based Salient Points with Scale Information for Classification Alexandra Teynor and Hans Burkhardt Department of Computer Science, Albert-Ludwigs-Universität Freiburg, Germany {teynor, Hans.Burkhardt}@informatik.uni-freiburg.de
More informationInterest Operators. All lectures are from posted research papers. Harris Corner Detector: the first and most basic interest operator
Interest Operators All lectures are from posted research papers. Harris Corner Detector: the first and most basic interest operator SIFT interest point detector and region descriptor Kadir Entrop Detector
More informationarxiv: v1 [cs.cv] 10 Feb 2016
GABOR WAVELETS IN IMAGE PROCESSING David Bařina Doctoral Degree Programme (2), FIT BUT E-mail: xbarin2@stud.fit.vutbr.cz Supervised by: Pavel Zemčík E-mail: zemcik@fit.vutbr.cz arxiv:162.338v1 [cs.cv]
More informationOverview. Introduction to local features. Harris interest points + SSD, ZNCC, SIFT. Evaluation and comparison of different detectors
Overview Introduction to local features Harris interest points + SSD, ZNCC, SIFT Scale & affine invariant interest point detectors Evaluation and comparison of different detectors Region descriptors and
More informationEdge Detection. Introduction to Computer Vision. Useful Mathematics Funcs. The bad news
Edge Detection Introduction to Computer Vision CS / ECE 8B Thursday, April, 004 Edge detection (HO #5) Edge detection is a local area operator that seeks to find significant, meaningful changes in image
More informationSlide a window along the input arc sequence S. Least-squares estimate. σ 2. σ Estimate 1. Statistically test the difference between θ 1 and θ 2
Corner Detection 2D Image Features Corners are important two dimensional features. Two dimensional image features are interesting local structures. They include junctions of dierent types Slide 3 They
More informationAdvanced Features. Advanced Features: Topics. Jana Kosecka. Slides from: S. Thurn, D. Lowe, Forsyth and Ponce. Advanced features and feature matching
Advanced Features Jana Kosecka Slides from: S. Thurn, D. Lowe, Forsyth and Ponce Advanced Features: Topics Advanced features and feature matching Template matching SIFT features Haar features 2 1 Features
More informationFeature detection.
Feature detection Kim Steenstrup Pedersen kimstp@itu.dk The IT University of Copenhagen Feature detection, The IT University of Copenhagen p.1/20 What is a feature? Features can be thought of as symbolic
More informationMultimedia Databases. Previous Lecture. 4.1 Multiresolution Analysis. 4 Shape-based Features. 4.1 Multiresolution Analysis
Previous Lecture Multimedia Databases Texture-Based Image Retrieval Low Level Features Tamura Measure, Random Field Model High-Level Features Fourier-Transform, Wavelets Wolf-Tilo Balke Silviu Homoceanu
More informationObject Recognition Using Local Characterisation and Zernike Moments
Object Recognition Using Local Characterisation and Zernike Moments A. Choksuriwong, H. Laurent, C. Rosenberger, and C. Maaoui Laboratoire Vision et Robotique - UPRES EA 2078, ENSI de Bourges - Université
More informationMultimedia Databases. Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig
Multimedia Databases Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 4 Previous Lecture Texture-Based Image Retrieval Low
More informationLaplacian Filters. Sobel Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters
Sobel Filters Note that smoothing the image before applying a Sobel filter typically gives better results. Even thresholding the Sobel filtered image cannot usually create precise, i.e., -pixel wide, edges.
More informationEdge Detection. CS 650: Computer Vision
CS 650: Computer Vision Edges and Gradients Edge: local indication of an object transition Edge detection: local operators that find edges (usually involves convolution) Local intensity transitions are
More informationShape of Gaussians as Feature Descriptors
Shape of Gaussians as Feature Descriptors Liyu Gong, Tianjiang Wang and Fang Liu Intelligent and Distributed Computing Lab, School of Computer Science and Technology Huazhong University of Science and
More informationTemplates, Image Pyramids, and Filter Banks
Templates, Image Pyramids, and Filter Banks 09/9/ Computer Vision James Hays, Brown Slides: Hoiem and others Review. Match the spatial domain image to the Fourier magnitude image 2 3 4 5 B A C D E Slide:
More informationMultimedia Databases. 4 Shape-based Features. 4.1 Multiresolution Analysis. 4.1 Multiresolution Analysis. 4.1 Multiresolution Analysis
4 Shape-based Features Multimedia Databases Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 4 Multiresolution Analysis
More informationOptical flow. Subhransu Maji. CMPSCI 670: Computer Vision. October 20, 2016
Optical flow Subhransu Maji CMPSC 670: Computer Vision October 20, 2016 Visual motion Man slides adapted from S. Seitz, R. Szeliski, M. Pollefes CMPSC 670 2 Motion and perceptual organization Sometimes,
More informationCorner. Corners are the intersections of two edges of sufficiently different orientations.
2D Image Features Two dimensional image features are interesting local structures. They include junctions of different types like Y, T, X, and L. Much of the work on 2D features focuses on junction L,
More informationImage Processing 1 (IP1) Bildverarbeitung 1
MIN-Fakultät Fachbereich Informatik Arbeitsbereich SAV/BV KOGS Image Processing 1 IP1 Bildverarbeitung 1 Lecture : Object Recognition Winter Semester 015/16 Slides: Prof. Bernd Neumann Slightly revised
More information* h + = Lec 05: Interesting Points Detection. Image Analysis & Retrieval. Outline. Image Filtering. Recap of Lec 04 Image Filtering Edge Features
age Analsis & Retrieval Outline CS/EE 5590 Special Topics (Class ds: 44873, 44874) Fall 06, M/W 4-5:5p@Bloch 00 Lec 05: nteresting Points Detection Recap of Lec 04 age Filtering Edge Features Hoework Harris
More informationInstance-level l recognition. Cordelia Schmid & Josef Sivic INRIA
nstance-level l recognition Cordelia Schmid & Josef Sivic NRA nstance-level recognition Particular objects and scenes large databases Application Search photos on the web for particular places Find these
More informationSURVEY OF APPEARANCE-BASED METHODS FOR OBJECT RECOGNITION
SURVEY OF APPEARANCE-BASED METHODS FOR OBJECT RECOGNITION Peter M. Roth and Martin Winter Inst. for Computer Graphics and Vision Graz University of Technology, Austria Technical Report ICG TR 01/08 Graz,
More informationRotational Invariants for Wide-baseline Stereo
Rotational Invariants for Wide-baseline Stereo Jiří Matas, Petr Bílek, Ondřej Chum Centre for Machine Perception Czech Technical University, Department of Cybernetics Karlovo namesti 13, Prague, Czech
More informationCITS 4402 Computer Vision
CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh Lecture 06 Object Recognition Objectives To understand the concept of image based object recognition To learn how to match images
More informationRapid Object Recognition from Discriminative Regions of Interest
Rapid Object Recognition from Discriminative Regions of Interest Gerald Fritz, Christin Seifert, Lucas Paletta JOANNEUM RESEARCH Institute of Digital Image Processing Wastiangasse 6, A-81 Graz, Austria
More informationImage Characteristics
1 Image Characteristics Image Mean I I av = i i j I( i, j 1 j) I I NEW (x,y)=i(x,y)-b x x Changing the image mean Image Contrast The contrast definition of the entire image is ambiguous In general it is
More informationMotion estimation. Digital Visual Effects Yung-Yu Chuang. with slides by Michael Black and P. Anandan
Motion estimation Digital Visual Effects Yung-Yu Chuang with slides b Michael Black and P. Anandan Motion estimation Parametric motion image alignment Tracking Optical flow Parametric motion direct method
More informationLecture 04 Image Filtering
Institute of Informatics Institute of Neuroinformatics Lecture 04 Image Filtering Davide Scaramuzza 1 Lab Exercise 2 - Today afternoon Room ETH HG E 1.1 from 13:15 to 15:00 Work description: your first
More informationTaking derivative by convolution
Taking derivative by convolution Partial derivatives with convolution For 2D function f(x,y), the partial derivative is: For discrete data, we can approximate using finite differences: To implement above
More informationMotion Estimation (I) Ce Liu Microsoft Research New England
Motion Estimation (I) Ce Liu celiu@microsoft.com Microsoft Research New England We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion
More informationMaximally Stable Local Description for Scale Selection
Maximally Stable Local Description for Scale Selection Gyuri Dorkó and Cordelia Schmid INRIA Rhône-Alpes, 655 Avenue de l Europe, 38334 Montbonnot, France {gyuri.dorko,cordelia.schmid}@inrialpes.fr Abstract.
More informationRoadmap. Introduction to image analysis (computer vision) Theory of edge detection. Applications
Edge Detection Roadmap Introduction to image analysis (computer vision) Its connection with psychology and neuroscience Why is image analysis difficult? Theory of edge detection Gradient operator Advanced
More informationThe state of the art and beyond
Feature Detectors and Descriptors The state of the art and beyond Local covariant detectors and descriptors have been successful in many applications Registration Stereo vision Motion estimation Matching
More informationGaussian derivatives
Gaussian derivatives UCU Winter School 2017 James Pritts Czech Tecnical University January 16, 2017 1 Images taken from Noah Snavely s and Robert Collins s course notes Definition An image (grayscale)
More informationMotion Estimation (I)
Motion Estimation (I) Ce Liu celiu@microsoft.com Microsoft Research New England We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion
More informationVIDEO SYNCHRONIZATION VIA SPACE-TIME INTEREST POINT DISTRIBUTION. Jingyu Yan and Marc Pollefeys
VIDEO SYNCHRONIZATION VIA SPACE-TIME INTEREST POINT DISTRIBUTION Jingyu Yan and Marc Pollefeys {yan,marc}@cs.unc.edu The University of North Carolina at Chapel Hill Department of Computer Science Chapel
More informationLow-level Image Processing
Low-level Image Processing In-Place Covariance Operators for Computer Vision Terry Caelli and Mark Ollila School of Computing, Curtin University of Technology, Perth, Western Australia, Box U 1987, Emaihtmc@cs.mu.oz.au
More informationMultiscale Autoconvolution Histograms for Affine Invariant Pattern Recognition
Multiscale Autoconvolution Histograms for Affine Invariant Pattern Recognition Esa Rahtu Mikko Salo Janne Heikkilä Department of Electrical and Information Engineering P.O. Box 4500, 90014 University of
More informationLinear Diffusion. E9 242 STIP- R. Venkatesh Babu IISc
Linear Diffusion Derivation of Heat equation Consider a 2D hot plate with Initial temperature profile I 0 (x, y) Uniform (isotropic) conduction coefficient c Unit thickness (along z) Problem: What is temperature
More informationSIFT, GLOH, SURF descriptors. Dipartimento di Sistemi e Informatica
SIFT, GLOH, SURF descriptors Dipartimento di Sistemi e Informatica Invariant local descriptor: Useful for Object RecogniAon and Tracking. Robot LocalizaAon and Mapping. Image RegistraAon and SAtching.
More information