Lecture 8: Interest Point Detection. Saad J Bedros


1 #1 Lecture 8: Interest Point Detection Saad J Bedros sbedros@umn.edu

2 Review of Edge Detectors #2

3 Today's Lecture: Interest Point Detection. What do we mean by interest point detection in an image? Goal: find the same features between multiple images taken from different positions or at different times.

4 Applications Image alignment Image Stitching 3D reconstruction Object recognition Indexing and database retrieval Object tracking Robot navigation

5 Image Alignment #5 Homography, RANSAC

6 What points would you choose? Known as the template matching method, with the option to use correlation as a metric.

7 Correlation #7

8 Correlation #8 Use cross-correlation to find the template in an image; the maximum indicates high similarity.
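As a concrete illustration of the idea above, here is a minimal NumPy sketch of template matching by normalized cross-correlation; the peak of the score map marks the best match. The function name match_template_ncc, the grayscale float inputs, and the brute-force loop are my assumptions for clarity, not part of the lecture.

```python
# Sketch: locate a template in an image via normalized cross-correlation (NCC).
# Assumes grayscale float arrays; brute-force for clarity, not speed.
import numpy as np

def match_template_ncc(image, template):
    """Return the (row, col) of the best match and the full NCC score map."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    H, W = image.shape
    scores = np.full((H - th + 1, W - tw + 1), -1.0)
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 1e-12:
                scores[r, c] = (p * t).sum() / denom   # NCC score in [-1, 1]
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return best, scores
```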

9

10 We need more robust feature descriptors for matching!

11 Interest Points Local features associated with a significant change of an image property, or several properties simultaneously (e.g., intensity, color, texture).

12 Why Extract Interest Points? Corresponding points (or features) between images enable the estimation of parameters describing geometric transforms between the images.

13 What if we don't know the correspondences? We need to compare the feature descriptors of the local patches surrounding the interest points.

14 Properties of good features Local: features are local, robust to occlusion and clutter (no prior segmentation!). Accurate: precise localization. Invariant (or covariant). Robust: noise, blur, compression, etc. do not have a big impact on the feature. Repeatable. Distinctive: individual features can be matched to a large database of objects. Efficient: close to real-time performance.

15 Invariant / Covariant A function f(·) is invariant under some transformation T(·) if its value does not change when the transformation is applied to its argument: if f(x) = y then f(T(x)) = y. A function f(·) is covariant when it commutes with the transformation T(·): if f(x) = y then f(T(x)) = T(f(x)) = T(y).

16 Invariance Features should be detected despite geometric or photometric changes in the image. Given two transformed versions of the same image, features should be detected in corresponding locations.

17 Example: Panorama Stitching How do we combine these two images?

18 Panorama stitching (cont d) Step 1: extract features Step 2: match features

19 Panorama stitching (cont d) Step 1: extract features Step 2: match features Step 3: align images

20 What features should we use? Use features with gradients in at least two (significantly) different orientations. Patches? Corners?

21 What features should we use? (cont d) (auto-correlation)

22 The Aperture Problem #23

23 Corners vs Edges #24

24 Corner Detection #25

25 Corner Detection-Basic Idea #26

26 Corner Detector: Basic Idea #27

27 Main Steps in Corner Detection 1. For each pixel in the input image, the corner operator is applied to obtain a cornerness measure for this pixel. 2. Threshold the cornerness map to eliminate weak corners. 3. Apply non-maximal suppression to eliminate points whose cornerness measure is not larger than the cornerness values of all points within a certain distance (steps 2 and 3 are sketched below).
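A minimal sketch of steps 2 and 3 (thresholding plus non-maximal suppression), assuming NumPy/SciPy; the function name pick_corners, the threshold, and the suppression radius are illustrative choices.

```python
# Sketch: threshold a cornerness map, then keep only pixels that are the
# local maximum within a (2*radius+1) x (2*radius+1) neighborhood.
import numpy as np
from scipy.ndimage import maximum_filter

def pick_corners(cornerness, threshold, radius=5):
    """Return (row, col) coordinates of corners after thresholding + NMS."""
    local_max = cornerness == maximum_filter(cornerness, size=2 * radius + 1)
    strong = cornerness > threshold
    rows, cols = np.nonzero(local_max & strong)
    return list(zip(rows, cols))
```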

28 Main Steps in Corner Detection (cont'd)

29 Corner Types Example of L-junction, Y-junction, T-junction, Arrow-junction, and X-junction corner types

30 Corner Detection Using Edge Detection? Edge detectors are not stable at corners: the gradient is ambiguous at the corner tip, and the gradient direction is discontinuous near the corner.

31 Corner Detection Using Intensity: Basic Idea The image gradient has two or more dominant directions near a corner. Shifting a window in any direction should give a large change in intensity. Flat region: no change in any direction. Edge: no change along the edge direction. Corner: significant change in all directions.

32 Moravec Detector (1977) Measure the intensity variation at (x,y) by shifting a small window (3x3 or 5x5) by one pixel in each of the eight principal directions (horizontally, vertically, and along the four diagonals).

33 Moravec Detector (1977) Calculate the intensity variation by taking the sum of squares of the intensity differences of corresponding pixels in the two windows. This gives one value $S_W(\Delta x, \Delta y)$ per shift, for the eight shifts $(\Delta x, \Delta y) \in \{-1,0,1\}^2$, $(\Delta x, \Delta y) \neq (0,0)$: $S_W(-1,-1), S_W(-1,0), \dots, S_W(1,1)$.

34 Moravec Detector (cont'd) The cornerness of a pixel is the minimum intensity variation found over the eight shift directions: Cornerness(x,y) = min{S_W(-1,-1), S_W(-1,0), ..., S_W(1,1)}. (Cornerness map, normalized.) Note the response to isolated points!
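A sketch of the Moravec cornerness measure described above (minimum SSD over the eight one-pixel shifts), assuming NumPy; the window half-size and the brute-force loops are illustrative.

```python
# Sketch: Moravec cornerness = minimum, over the eight one-pixel shifts,
# of the sum of squared differences between the shifted and centered windows.
import numpy as np

def moravec_cornerness(image, win=2):
    """Cornerness(x, y) = min over the 8 shifts of S_W(dx, dy). win=2 -> 5x5 window."""
    H, W = image.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    C = np.zeros((H, W), dtype=float)
    for r in range(win + 1, H - win - 1):
        for c in range(win + 1, W - win - 1):
            w0 = image[r - win:r + win + 1, c - win:c + win + 1].astype(float)
            ssds = []
            for dr, dc in shifts:
                w1 = image[r + dr - win:r + dr + win + 1,
                           c + dc - win:c + dc + win + 1].astype(float)
                ssds.append(((w1 - w0) ** 2).sum())
            C[r, c] = min(ssds)
    return C
```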

35 Moravec Detector (cont'd) Non-maximal suppression yields the final corners.

36 Moravec Detector (cont'd) Does a reasonable job of finding the majority of true corners. However, edge points not aligned with one of the eight principal directions will be assigned a relatively large cornerness value.

37 Moravec Detector (cont'd) The response is anisotropic because the intensity variation is only calculated at a discrete set of shifts (i.e., it is not rotationally invariant).

38 Harris Detector #39 Improves the Moravec operator by avoiding the use of discrete directions and discrete shifts. Uses a Gaussian window instead of a square window.

39 Harris Detector #40

40 Harris Detector (cont'd) Using a first-order Taylor approximation. Reminder (Taylor expansion): $f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + O((x-a)^{n+1})$.

41 Harris Detector (cont'd)

42 Harris Detector (cont'd) $A_W(x,y) = \sum_{(x_i,y_i) \in W} \begin{bmatrix} f_x(x_i,y_i)^2 & f_x(x_i,y_i)\,f_y(x_i,y_i) \\ f_x(x_i,y_i)\,f_y(x_i,y_i) & f_y(x_i,y_i)^2 \end{bmatrix}$, a 2 x 2 matrix (the auto-correlation or 2nd-order moment matrix).

43 Harris Detector General case, using a window function: $S_W(\Delta x, \Delta y) = \sum_{(x_i,y_i) \in W} w(x_i,y_i)\,[\,f(x_i,y_i) - f(x_i - \Delta x,\, y_i - \Delta y)\,]^2$. Default window function w(x,y): 1 inside the window, 0 outside. Then $A_W = \begin{bmatrix} \sum_{x,y} w(x,y)\,f_x^2 & \sum_{x,y} w(x,y)\,f_x f_y \\ \sum_{x,y} w(x,y)\,f_x f_y & \sum_{x,y} w(x,y)\,f_y^2 \end{bmatrix}$.

44 Harris Detector (cont'd) Harris uses a Gaussian window: w(x,y) = G(x,y,σ_I), where σ_I is called the integration scale. $A_W = \begin{bmatrix} \sum_{x,y} w(x,y)\,f_x^2 & \sum_{x,y} w(x,y)\,f_x f_y \\ \sum_{x,y} w(x,y)\,f_x f_y & \sum_{x,y} w(x,y)\,f_y^2 \end{bmatrix}$.
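A sketch of how this Gaussian-weighted matrix A_W can be computed at every pixel with SciPy; the helper name second_moment_matrix and the default σ_D, σ_I values are my choices, not from the slides.

```python
# Sketch: Gaussian-weighted second-moment (auto-correlation) matrix A_W.
# sigma_d smooths before differentiation; sigma_i is the integration scale
# of the Gaussian window w(x, y).
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_matrix(image, sigma_d=1.0, sigma_i=2.0):
    """Return the three distinct entries of A_W as images: w*fx^2, w*fx*fy, w*fy^2."""
    fx = gaussian_filter(image, sigma_d, order=(0, 1))   # derivative along x (columns)
    fy = gaussian_filter(image, sigma_d, order=(1, 0))   # derivative along y (rows)
    A11 = gaussian_filter(fx * fx, sigma_i)              # Gaussian window applied
    A12 = gaussian_filter(fx * fy, sigma_i)              # to the gradient products
    A22 = gaussian_filter(fy * fy, sigma_i)
    return A11, A12, A22
```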

45 Harris Detector $A_W = \sum_{x,y} \begin{bmatrix} f_x^2 & f_x f_y \\ f_x f_y & f_y^2 \end{bmatrix}$ describes the gradient distribution (i.e., the local structure) inside the window. It does not depend on the shift (Δx, Δy).

46 Harris Detector (cont'd) Since $A_W$ is symmetric, we have $A_W = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R$. We can visualize $A_W$ as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R. Ellipse equation: $[x \; y]\, A_W\, [x \; y]^T = \text{const}$. The axis along the direction of the fastest change has length $(\lambda_{\max})^{-1/2}$; the axis along the direction of the slowest change has length $(\lambda_{\min})^{-1/2}$.

47 Harris Detector (cont'd) Eigenvectors encode the edge direction; eigenvalues encode the edge strength. (Same ellipse as before: axis $(\lambda_{\max})^{-1/2}$ along the fastest change, $(\lambda_{\min})^{-1/2}$ along the slowest change.)

48 Distribution of fx and fy

49 Distribution of f_x and f_y (cont'd)

50 Harris Detector (cont'd) Classification of image points using the eigenvalues of A_W. Corner: λ1 and λ2 are both large, λ1 ~ λ2; S_W increases in all directions. Edge: λ1 >> λ2 (or λ2 >> λ1). Flat region: λ1 and λ2 are small; S_W is almost constant in all directions.
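A small sketch of this classification for a single pixel, assuming NumPy; the thresholds small and ratio are illustrative, not values given in the lecture.

```python
# Sketch: classify one pixel as flat / edge / corner from the eigenvalues of
# its 2x2 second-moment matrix [[a11, a12], [a12, a22]].
import numpy as np

def classify_from_eigenvalues(a11, a12, a22, small=1e-2, ratio=5.0):
    lam = np.linalg.eigvalsh(np.array([[a11, a12], [a12, a22]]))
    l1, l2 = lam[1], lam[0]                 # l1 >= l2 >= 0
    if l1 < small:
        return "flat"                       # both eigenvalues small
    if l1 / max(l2, 1e-12) > ratio:
        return "edge"                       # one eigenvalue dominates
    return "corner"                         # both large and comparable
```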

51 Harris Detector (cont'd) (assuming that λ1 > λ2)

52 Harris Detector Measure of corner response: $R = \det(A_W) - k\,\mathrm{trace}^2(A_W) = \lambda_1\lambda_2 - k(\lambda_1 + \lambda_2)^2$ (k is an empirical constant, typically k = 0.04-0.06). (Shi-Tomasi variation: use min(λ1, λ2) instead of R.) Slide credit: Darya Frolova, Denis Simakov, The Weizmann Institute of Science.

53 Harris Detector: Mathematics R depends only on the eigenvalues of A_W. R is large and positive for a corner (R > 0). R is negative with large magnitude for an edge (R < 0). |R| is small for a flat region.

54 Harris Detector The algorithm: find points with a large corner response R (R > threshold), then take the points that are local maxima of R.

55 Harris corner detector algorithm: compute the image gradients Ix, Iy for all pixels; for each pixel, compute the second-moment matrix by looping over its neighbors (x,y) and compute R; find points with a large corner response (R > threshold); take the points of locally maximum R as the detected feature points (i.e., pixels where R is bigger than at all 4 or 8 neighbors). Slide credit: Darya Frolova, Denis Simakov, The Weizmann Institute of Science.
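Putting the algorithm together: a minimal end-to-end Harris detector, reusing the hypothetical second_moment_matrix and pick_corners helpers from the earlier sketches. The value k = 0.05, the relative threshold, and the NMS radius are illustrative defaults, not prescribed by the slides.

```python
# Sketch: full Harris pipeline (gradients -> A_W -> R -> threshold -> NMS).
import numpy as np

def harris_corners(image, sigma_d=1.0, sigma_i=2.0, k=0.05,
                   threshold=None, nms_radius=5):
    A11, A12, A22 = second_moment_matrix(image, sigma_d, sigma_i)
    det = A11 * A22 - A12 ** 2
    trace = A11 + A22
    R = det - k * trace ** 2                     # Harris response
    # Shi-Tomasi alternative: the smaller eigenvalue of A_W
    # R_st = 0.5 * (trace - np.sqrt(np.maximum(trace ** 2 - 4 * det, 0.0)))
    if threshold is None:
        threshold = 0.01 * R.max()               # simple relative threshold
    return pick_corners(R, threshold, nms_radius)
```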

56 Harris Detector - Example

57 Harris Detector - Example

58 Harris Detector - Example Compute corner response R

59 Harris Detector - Example Find points with large corner response: R>threshold

60 Harris Detector - Example Take only the points of local maxima of R

61 Harris Detector - Example Map corners on the original image

62 Harris Detector Scale Parameters The Harris detector requires two scale parameters: (i) a differentiation scale σ D for smoothing prior to the computation of image derivatives, & (ii) an integration scale σ I for defining the size of the Gaussian window (i.e., integrating derivative responses). A W (x,y) A W (x,y,σ I,σ D ) Typically, σ I =γσ D

63 Invariance to Geometric/Photometric Changes Is the Harris detector invariant to geometric and photometric changes? Geometric Rotation Scale Affine Photometric Affine intensity change: I(x,y) a I(x,y) + b

64 Harris Detector: Rotation Invariance Under rotation the ellipse rotates, but its shape (i.e., its eigenvalues) remains the same, so the corner response R is invariant to image rotation.

65 Harris Detector: Rotation Invariance (cont'd)

66 Harris Detector: Photometric Changes Affine intensity change. Only derivatives are used, so the detector is invariant to an intensity shift I(x,y) → I(x,y) + b. Intensity scaling I(x,y) → a·I(x,y) rescales the response R, so points near a fixed threshold can appear or disappear. (Figure: R vs. image coordinate x, with the detection threshold.) Overall: partially invariant to affine intensity changes.
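A quick numeric check of this claim, reusing the second_moment_matrix sketch from earlier (NumPy assumed): a shift +b leaves R unchanged because only derivatives enter A_W, while a gain a rescales R by a^4, so a fixed threshold is only partially invariant.

```python
# Shift (+b) leaves the Harris response unchanged; gain (a*I) scales it by a**4.
import numpy as np

def harris_response(image, k=0.05):
    A11, A12, A22 = second_moment_matrix(image)
    return A11 * A22 - A12 ** 2 - k * (A11 + A22) ** 2

rng = np.random.default_rng(0)
I = rng.random((64, 64))
R = harris_response(I)
print(np.allclose(R, harris_response(I + 10.0)))        # True: invariant to +b
print(np.allclose(16.0 * R, harris_response(2.0 * I)))  # True: R scales by a**4 = 16
```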

67 Harris Detector: Scale Invariance Under scaling, a corner is spread over several windows, so all of its points may be classified as edges. The Harris detector is not invariant to scaling (or to affine transformations).

68 Harris Detector: Disadvantages Sensitive to: Scale change Significant viewpoint change Significant contrast change

69 How to handle scale changes? A W must be adapted to scale changes. If the scale change is known, we can adapt the Harris detector to the scale change (i.e., set properly σ I, σ D ). What if the scale change is unknown?

70 Multi-scale Harris Detector Detects interest points at varying scales: $R(A_W) = \det(A_W(x,y,\sigma_I,\sigma_D)) - \alpha\,\mathrm{trace}^2(A_W(x,y,\sigma_I,\sigma_D))$, evaluated at scales $\sigma_n = k^n \sigma_0$, with $\sigma_D = \sigma_n$ and $\sigma_I = \gamma\sigma_D$.
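A sketch of the multi-scale Harris loop above, reusing the earlier second_moment_matrix and pick_corners sketches; the scale step k, the number of levels, γ, α, and the threshold are illustrative parameters.

```python
# Sketch: evaluate the Harris response at scales sigma_n = k**n * sigma_0,
# with sigma_D = sigma_n and sigma_I = gamma * sigma_D, and collect corners
# per level together with the scale at which they were detected.
import numpy as np

def multiscale_harris(image, sigma0=1.0, k=1.4, levels=5,
                      gamma=2.0, alpha=0.05, nms_radius=5):
    points = []
    for n in range(levels):
        sigma_n = sigma0 * k ** n
        A11, A12, A22 = second_moment_matrix(image, sigma_d=sigma_n,
                                             sigma_i=gamma * sigma_n)
        R = (A11 * A22 - A12 ** 2) - alpha * (A11 + A22) ** 2
        for (r, c) in pick_corners(R, 0.01 * R.max(), nms_radius):
            points.append((r, c, sigma_n))
    return points
```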

71 How to cope with transformations? Exhaustive search Invariance Robustness

72 Multi-scale approach Exhaustive search Slide from T. Tuytelaars ECCV 2006 tutorial

73 Multi-scale approach Exhaustive search

74 Multi-scale approach Exhaustive search

75 Multi-scale approach Exhaustive search

76 How to handle scale changes? Not a good idea! There will be many points representing the same structure, complicating matching! Note that point locations shift as scale increases. The size of the circle corresponds to the scale at which the point was detected

77 How to handle scale changes? (cont'd) How do we choose corresponding circles independently in each image?

78 How to handle scale changes? (cont'd) Alternatively, use scale selection to find the characteristic scale of each feature. The characteristic scale depends on the feature's spatial extent (i.e., its local neighborhood of pixels).

79 How to handle scale changes? Only a subset of the points computed in scale space are selected! The size of the circles corresponds to the scale at which the point was selected.

80 Automatic Scale Selection Design a function F(x,σ_n) which provides some local measure. Select points at which F(x,σ_n) is maximal over σ_n: the maximum of F(x,σ_n) corresponds to the characteristic scale. T. Lindeberg, "Feature detection with automatic scale selection", International Journal of Computer Vision, vol. 30, no. 2, 1998.

81 Invariance Extract patch from each image individually

82 Automatic scale selection Solution: design a function on the region which is scale invariant (the same for corresponding regions, even if they are at different scales). Example: average intensity; for corresponding regions (even of different sizes) it will be the same. For a point in one image, we can consider this function as a function of region size (patch width). (Figure: f vs. region size for Image 1 and Image 2, related by scale = 1/2.)

83 Automatic scale selection Common approach: take a local maximum of this function. Observation: the region size for which the maximum is achieved should be invariant to image scale. Important: this scale-invariant region size is found in each image independently! (Figure: f vs. region size for Image 1 and Image 2, scale = 1/2; maxima at region sizes s1 and s2.)

84 Automatic Scale Selection Function responses for increasing scale (scale signature): compare $f(I_{i_1 \dots i_m}(x, \sigma))$ in one image with $f(I_{i_1 \dots i_m}(x', \sigma'))$ in the other.

85 Automatic Scale Selection Function responses for increasing scale (scale signature), cont'd.

86 Automatic Scale Selection Function responses for increasing scale (scale signature), cont'd.

87 Automatic Scale Selection Function responses for increasing scale (scale signature), cont'd.

88 Automatic Scale Selection Function responses for increasing scale (scale signature), cont'd.

89 Automatic Scale Selection Function responses for increasing scale (scale signature), cont'd.

90 Scale selection Use the scale determined by the detector to compute the descriptor in a normalized frame.

91 Automatic Scale Selection (cont'd) The characteristic scale is relatively independent of the image scale (here the scale factor between the images is 2.5). The ratio of the scale values corresponding to the maxima equals the scale factor between the images. Scale selection thus finds a spatial extent that is covariant with the image transformation.

92 Automatic Scale Selection What local measures should we use? Should be rotation invariant Should have one stable sharp peak

93 How should we choose F(x,σ_n)? Typically, F(x,σ_n) is defined using derivatives, e.g.: square gradient $\sigma^2\,(L_x^2(x,\sigma) + L_y^2(x,\sigma))$; LoG $\sigma^2\,|L_{xx}(x,\sigma) + L_{yy}(x,\sigma)|$; DoG $|I(x) * G(\sigma_n) - I(x) * G(\sigma_{n-1})|$; Harris function $\det(A_W) - \alpha\,\mathrm{trace}^2(A_W)$. LoG yielded the best results in an evaluation study; DoG was second best. C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), 2000.

94 How should we choose F(x,σ_n)? Let's see how LoG responds to blobs.

95 Recall: Edge detection using the 1st derivative. Convolve the signal f with the derivative of a Gaussian, $\frac{d}{dx}g$; an edge corresponds to a maximum of the response $f * \frac{d}{dx}g$.

96 Recall: Edge detection using the 2nd derivative. Convolve f with the second derivative of a Gaussian (the Laplacian), $\frac{d^2}{dx^2}g$; an edge corresponds to a zero crossing of the response $f * \frac{d^2}{dx^2}g$.

97 From edges to blobs (i.e., small regions) A blob is a superposition of two edges (blobs can have different spatial extents). Spatial selection: the magnitude of the Laplacian response achieves a maximum (in absolute value) at the center of the blob, provided the scale of the Laplacian is matched to the scale (i.e., spatial extent) of the blob.

98 How could we find the spatial extent of a blob using LoG? Idea: find the characteristic scale of the blob by convolving it with Laplacian filters at several scales and looking for the maximum response. However, the Laplacian response decays as the scale increases (original signal of radius 8, responses for increasing σ). Why does this happen?

99 Scale normalization The response of a derivative-of-Gaussian filter to a perfect step edge decreases as σ increases; the peak response is $\frac{1}{\sigma\sqrt{2\pi}}$.

100 Scale normalization (cont'd) To keep the response the same (scale-invariant), we must multiply the Gaussian derivative by σ. The Laplacian is the second Gaussian derivative, so it must be multiplied by σ².

101 Effect of scale normalization (Figure: original signal; unnormalized Laplacian response; scale-normalized Laplacian response, multiplied by σ², with a clear maximum.)
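A 1D illustration of this effect, assuming NumPy/SciPy: the raw second-derivative-of-Gaussian response to a box-shaped blob shrinks as σ grows, while the σ²-normalized response peaks when σ matches the blob's half-width (here 20). The signal and the scale values are made up for the demo.

```python
# Compare unnormalized vs. sigma^2-normalized Laplacian responses on a 1D blob.
import numpy as np
from scipy.ndimage import gaussian_filter1d

signal = np.zeros(400)
signal[180:220] = 1.0                      # box-shaped blob, half-width 20

for sigma in (4.0, 8.0, 14.0, 20.0, 28.0):
    lap = gaussian_filter1d(signal, sigma, order=2)    # unnormalized 2nd derivative
    norm_lap = sigma ** 2 * lap                        # scale-normalized
    print(sigma, np.abs(lap).max(), np.abs(norm_lap).max())
# The unnormalized peak keeps decaying; the normalized peak is largest near sigma = 20.
```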

102 What Is A Useful Signature Function? Laplacian-of-Gaussian = blob detector.

103 Scale selection: case of a circle At what scale does the Laplacian achieve a maximum response for a binary circle of radius r? (Figure: the circle image and its Laplacian response.)

104 Scale selection: case of a circle At what scale does the LoG achieve a maximum response for a binary circle of radius r? The scale-normalized LoG is maximized at $\sigma = r/\sqrt{2}$ (the characteristic scale). (Figure: LoG response vs. scale σ, peaking at $r/\sqrt{2}$.)

105 Characteristic scale We define the characteristic scale as the scale that produces the peak of the (scale-normalized) Laplacian response: $\sigma = r/\sqrt{2}$ for a binary circle of radius r. T. Lindeberg (1998), "Feature detection with automatic scale selection", International Journal of Computer Vision, 30(2). Source: Lana Lazebnik.
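A small numeric check of this result, assuming NumPy/SciPy: for a binary disc of radius r, the scale-normalized LoG response at the center peaks near σ = r/√2. The grid size and the scale sweep are arbitrary demo values.

```python
# Verify the characteristic scale of a binary disc of radius r.
import numpy as np
from scipy.ndimage import gaussian_laplace

r = 20
yy, xx = np.mgrid[-100:101, -100:101]
disc = (xx ** 2 + yy ** 2 <= r ** 2).astype(float)

sigmas = np.linspace(5, 30, 26)
responses = [abs((s ** 2) * gaussian_laplace(disc, s)[100, 100]) for s in sigmas]
print(sigmas[int(np.argmax(responses))], r / np.sqrt(2))   # both close to 14
```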

106 Scale-space blob detector: Example Source: Lana Lazebnik

107 Scale-space blob detector: Example Source: Lana Lazebnik

108 Scale-space blob detector: Example Source: Lana Lazebnik

109 Harris-Laplace Detector Multi-scale Harris with scale selection: uses LoG maxima over scale to find the characteristic scale. (Scale-space figure: Harris in x-y, LoG over the scale axis σ_n.)

110 Harris-Laplace Detector (cont'd) (1) Compute interest points at multiple scales using the Harris detector. Scales are chosen as σ_n = k^n σ_0 (with σ_I = σ_n, σ_D = c·σ_n). At each scale, choose local maxima of the Harris response within a 3 x 3 window. K. Mikolajczyk and C. Schmid (2001), "Indexing based on scale invariant interest points", International Conference on Computer Vision.

111 Harris-Laplace Detector (cont'd) (2) Keep only those points at which a local measure, the scale-normalized Laplacian $\sigma_n^2\,|L_{xx}(x,\sigma_n) + L_{yy}(x,\sigma_n)|$, is maximal over scales, i.e., larger than its values at the neighboring scales σ_{n-1} and σ_{n+1}. K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points", International Conference on Computer Vision, 2001.
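A rough sketch of this scale-selection step, reusing the hypothetical multiscale_harris helper from the earlier sketch (NumPy/SciPy assumed); the scale grid, the Laplacian threshold, and the way the Harris step is invoked per level are illustrative simplifications of the Harris-Laplace procedure.

```python
# Sketch: keep a Harris point at scale sigma_n only if the scale-normalized
# Laplacian at its location is above a threshold and is not exceeded at the
# neighbouring scales sigma_{n-1} and sigma_{n+1}.
import numpy as np
from scipy.ndimage import gaussian_laplace

def harris_laplace(image, sigma0=1.0, k=1.4, levels=8, laplace_thresh=1e-3):
    sigmas = [sigma0 * k ** n for n in range(levels)]
    logs = np.stack([(s ** 2) * np.abs(gaussian_laplace(image, s)) for s in sigmas])
    keypoints = []
    for n in range(1, levels - 1):
        # Harris points at this single scale (multiscale_harris with levels=1)
        for (r, c, _s) in multiscale_harris(image, sigma0=sigmas[n], levels=1):
            v = logs[n, r, c]
            if v > laplace_thresh and v >= logs[n - 1, r, c] and v >= logs[n + 1, r, c]:
                keypoints.append((r, c, sigmas[n]))
    return keypoints
```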

112 Example Points detected at each scale using Harris-Laplace (the images differ by a scale factor of 1.92). Few points are detected at the same location but on different scales. There are many correspondences between levels for which the scale ratio corresponds to the real scale change between the images.

113 Example (same viewpoint; change in focal length and orientation) 190 and 213 points are detected in the left and right images, respectively (more than 2000 points would have been detected without scale selection).

114 Example (cont'd) 58 points are initially matched (some are not correct).

115 Example (cont'd) Rejecting inconsistent matches by computing the homography between the images with RANSAC leaves 32 matches, all of which are correct. The estimated scale factor is 4.9 and the estimated rotation angle is 19 degrees.

116 Harris-Laplace Detector (cont'd) Invariant to: scale, rotation, translation. Robust to: illumination changes, limited viewpoint changes. (Figure: repeatability.)

117 Efficient implementation using DoG LoG can be approximated by DoG: $G(x,y,k\sigma) - G(x,y,\sigma) \approx (k-1)\,\sigma^2\,\nabla^2 G$. Note that the DoG already incorporates the σ² scale normalization.

118 Efficient implementation using DoG (cont'd) Let $L(x,y,\sigma) = G(x,y,\sigma) * I(x,y)$ be the Gaussian-blurred image. The result of convolving an image with a difference of Gaussians is $D(x,y,\sigma) = (G(x,y,k\sigma) - G(x,y,\sigma)) * I(x,y) = L(x,y,k\sigma) - L(x,y,\sigma)$.

119 Example $D(x,y,\sigma) = L(x,y,k\sigma) - L(x,y,\sigma)$.

120 Efficient implementation using DoG (cont'd) $D(x,y,\sigma) = (G(x,y,k\sigma) - G(x,y,\sigma)) * I$, computed at scales $\sigma_n = k^n \sigma_0$. David G. Lowe, "Distinctive image features from scale-invariant keypoints", IJCV 60(2), pp. 91-110, 2004.
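A minimal sketch of this DoG construction, assuming NumPy/SciPy; σ_0 = 1.6 and k = √2 are common choices in SIFT-style implementations but are stated here only as illustrative defaults.

```python
# Sketch: Gaussian-blurred stack L(x, y, sigma_n) with sigma_n = k**n * sigma0,
# and DoG images D(x, y, sigma_n) = L(x, y, sigma_{n+1}) - L(x, y, sigma_n).
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigma0=1.6, k=2 ** 0.5, levels=6):
    sigmas = [sigma0 * k ** n for n in range(levels)]
    L = [gaussian_filter(image, s) for s in sigmas]
    D = [L[n + 1] - L[n] for n in range(levels - 1)]
    return np.stack(D), sigmas[:-1]
```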

121 Efficient implementation using DoG (cont'd) DoG pyramid: look for local maxima in the DoG pyramid.

122 Keypoint localization with DoG (resample, blur, subtract to build the pyramid). Detect maxima of the difference-of-Gaussian (DoG) in scale space, then reject points with low contrast (threshold) and eliminate edge responses. Candidate keypoints: a list of (x, y, σ). (A sketch follows below.)
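A sketch of these selection steps on the DoG stack from the previous example, assuming NumPy/SciPy and an image scaled to [0, 1]; the contrast threshold and the curvature-ratio bound are illustrative values in the spirit of Lowe's tests, not taken from the lecture. For example, dog_keypoints(*dog_stack(image)) would return the candidate (x, y, σ) list.

```python
# Sketch: 3x3x3 extrema of the DoG stack in (scale, y, x), a contrast threshold,
# and an edge test on the ratio of principal curvatures of the 2x2 Hessian of D.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def dog_keypoints(D, sigmas, contrast_thresh=0.03, edge_ratio=10.0):
    is_max = D == maximum_filter(D, size=3)
    is_min = D == minimum_filter(D, size=3)
    keypoints = []
    for n, r, c in zip(*np.nonzero((is_max | is_min) & (np.abs(D) > contrast_thresh))):
        if not (1 <= n < D.shape[0] - 1 and
                1 <= r < D.shape[1] - 1 and 1 <= c < D.shape[2] - 1):
            continue
        d = D[n]
        dxx = d[r, c + 1] + d[r, c - 1] - 2 * d[r, c]
        dyy = d[r + 1, c] + d[r - 1, c] - 2 * d[r, c]
        dxy = (d[r + 1, c + 1] - d[r + 1, c - 1] - d[r - 1, c + 1] + d[r - 1, c - 1]) / 4
        tr, det = dxx + dyy, dxx * dyy - dxy ** 2
        if det > 0 and tr ** 2 / det < (edge_ratio + 1) ** 2 / edge_ratio:
            keypoints.append((r, c, sigmas[n]))   # keep non-edge-like candidates
    return keypoints
```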

123 Example of keypoint detection (a) 233x189 image, (b) 832 DoG extrema, (c) 729 left after the peak-value threshold, (d) 536 left after testing the ratio of principal curvatures (removing edge responses).

124 Review Homography and RANSAC: reprojection from one image to another, robust fitting of the mapping. Interest points: corner detection (Harris detector), scaling issues, Harris-Laplace detector.
