Image Analysis. Feature extraction: corners and blobs


1 Image Analysis: Feature extraction: corners and blobs
Christophoros Nikou
University of Ioannina - Department of Computer Science
Images taken from:
the Computer Vision course by Svetlana Lazebnik, University of North Carolina at Chapel Hill;
D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003;
M. Nixon and A. Aguado, Feature Extraction and Image Processing, Academic Press, 2010.

3 Motivation: Panorama Stitching

4 Motivation: Panorama Stitching (cont.)
Step 1: feature extraction. Step 2: feature matching.

5 Motivation: Panorama Stitching (cont.)
Step 1: feature extraction. Step 2: feature matching. Step 3: image alignment.

6 Characteristics of good features
Repeatability: the same feature can be found in several images despite geometric and photometric transformations.
Saliency: each feature has a distinctive description.
Compactness and efficiency: many fewer features than image pixels.
Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion.

7 Applications
Feature points are used for: motion tracking, image alignment, 3D reconstruction, object recognition, image indexing and retrieval, and robot navigation.

8 Image Curvature
Curvature extends the notion of edges: it is the rate of change of edge direction. Points where the edge direction changes rapidly are characterized as corners. We need some elements from differential geometry. Parametric form of a planar curve:
C(t) = [x(t), y(t)]
It describes the points of a continuous curve as the endpoints of a position vector.

9 Image Curvature (cont.)
The tangent vector describes changes in the position vector:
T(t) = C'(t) = [x'(t), y'(t)] = [dx/dt, dy/dt]
For an intuitive meaning, think of the trace of the curve as the motion of a point at time t. The tangent vector describes the instantaneous motion.

10 Image Curvature (cont.)
At any time instant, the point moves with velocity magnitude
|C'(t)| = sqrt( x'(t)^2 + y'(t)^2 )
in the direction
φ(t) = tan^(-1)( y'(t) / x'(t) )

11 Image Curvature (cont.)
The curvature at a point C(t) describes the change in the direction of the tangent with respect to changes in arc length (constant displacements along the curve):
κ(t) = dφ(t)/ds
The curvature is given with respect to the arc length because a curve parameterized by the arc length maintains a constant (unit) velocity.

12 Image Curvature (cont.)
[Figure: an edge curve C(t) with its tangent T(t), the gradient direction N(t) and the tangent angle φ(t); κ(t) = dφ(t)/ds.]

13 Image Curvature (cont.)
The arc length parameterization of a planar curve uses the length of the curve from 0 to t:
s(t) = ∫_0^t |dC(τ)/dτ| dτ
This implies:
ds(t)/dt = |dC(t)/dt| = sqrt( x'(t)^2 + y'(t)^2 )

14 Image Curvature (cont.)
Parameterization by the arc length is not unique, but it has the property
|dC/ds| = |dC/dt| |dt/ds| = |dC/dt| / (ds/dt) = |dC/dt| / |dC/dt| = 1

15 Image Curvature (cont.)
Back to the definition of curvature:
κ(t) = dφ(t)/ds = (dφ(t)/dt)(dt/ds) = d/dt [ tan^(-1)( y'(t)/x'(t) ) ] · 1/sqrt( x'(t)^2 + y'(t)^2 )
which gives
κ(t) = ( x'(t) y''(t) − y'(t) x''(t) ) / ( x'(t)^2 + y'(t)^2 )^(3/2)

16 Image Curvature (cont.)
Some useful relations from the differential geometry of planar curves (easily deduced) are:
dC(t)/ds = T(t),   d^2C(t)/ds^2 = κ(t) N(t)
The curvature is the magnitude of the second derivative of the curve with respect to the arc length:
κ(t) = | d^2C(t)/ds^2 |
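For a sampled curve, the closed-form expression above can be evaluated with finite differences. The following is a minimal sketch of ours (not from the slides), using NumPy on a parametric circle, where the curvature should be 1/r everywhere:

```python
import numpy as np

def curvature(x, y):
    """Curvature of a planar curve sampled as x(t), y(t), via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)        # x'(t), y'(t)
    ddx, ddy = np.gradient(dx), np.gradient(dy)    # x''(t), y''(t)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check on a circle of radius r: the curvature should be close to 1/r.
r = 5.0
t = np.linspace(0.0, 2.0 * np.pi, 1000)
kappa = curvature(r * np.cos(t), r * np.sin(t))
print(kappa[100], 1.0 / r)    # both approximately 0.2
```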

17 Image Curvature (cont.)
It is also useful to express the tangent and the normal vector at a point of the curve in Cartesian coordinates:
T(t) = [ x'(t), y'(t) ],   N(t) = [ −y'(t), x'(t) ] / sqrt( x'(t)^2 + y'(t)^2 )

18 Image Curvature (cont.)
There are three main approaches to computing the curvature:
Direct computation on an edge image.
Deriving the measure of curvature from the image intensities.
Correlation.
κ(t) = ( x'(t) y''(t) − y'(t) x''(t) ) / ( x'(t)^2 + y'(t)^2 )^(3/2)

19 Image Curvature (cont.)
Direct computation from edges: difference in edge direction,
κ(t) = φ(t+1) − φ(t−1)
with
φ(t) = tan^(-1)( ( y(t+1) − y(t−1) ) / ( x(t+1) − x(t−1) ) )
Connected edges are needed (hysteresis).

20 Image Curvature (cont.)
Direct computation from edges: smoothing is generally required by considering more than two pixels,
κ(t) = (1/n) Σ_{i=1..n} [ φ(t+1+i) − φ(t−1+i) ]
The results are not very reliable: this is a reformulation of a first-order edge detection scheme, it suffers from quantization errors in the angle measurement, and a threshold is needed to detect corners.
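A small sketch of this direct measure (our own illustration; the edge points are assumed to be already ordered into a connected chain, e.g. by contour following):

```python
import numpy as np

def edge_chain_curvature(pts, n=3):
    """Curvature along an ordered chain of edge pixels pts = [(x0, y0), (x1, y1), ...].

    The edge direction phi(t) is estimated from the two neighbours of each point, and the
    successive angle differences are averaged over n samples to reduce quantization noise."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    phi = np.arctan2(y[2:] - y[:-2], x[2:] - x[:-2])        # phi(t) from neighbouring points
    dphi = np.diff(np.unwrap(phi))                          # angle differences along the chain
    return np.convolve(dphi, np.ones(n) / n, mode="same")   # moving average over n samples
```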

21 Image Curvature (cont.)
Direct computation from edges. [Figure: object silhouette and thresholded curvature.] The result depends strongly on the threshold.

22 Image Curvature (cont.)
Computation from image intensity: the curvature should be computed along the curve (normal to the image gradient) at each pixel, using Cartesian coordinates for the angle of the tangent, φ(x,y). It is a measure of angular change in the image with respect to location. The curve through an image point may be locally approximated by
x(t) = x(t−1) + t cos( φ(x,y) ),   y(t) = y(t−1) + t sin( φ(x,y) )

23 Image Curvature (cont.)
Computation from image intensity: the curvature is given by
κ_φ(x,y) = ∂φ(x,y)/∂t = (∂φ(x,y)/∂x)(∂x(t)/∂t) + (∂φ(x,y)/∂y)(∂y(t)/∂t)
with
∂x(t)/∂t = cos( φ(x,y) ),   ∂y(t)/∂t = sin( φ(x,y) ),   φ(x,y) = tan^(-1)( M_x / M_y )
where [M_x, M_y] is the image gradient. The direction φ is along the curve, normal to the gradient; recall that this is associated with C_ss(t).

24 Image Curvature (cont.)
Substituting all the terms:
κ_φ(x,y) = ( M_y^2 ∂M_x/∂x − M_x M_y ∂M_y/∂x + M_x^2 ∂M_y/∂y − M_x M_y ∂M_x/∂y ) / ( M_x^2 + M_y^2 )^(3/2)
Alternatively, by differentiating backwards:
κ_{−φ}(x,y) = ( M_y^2 ∂M_x/∂x − M_x M_y ∂M_y/∂x − M_x^2 ∂M_y/∂y + M_x M_y ∂M_x/∂y ) / ( M_x^2 + M_y^2 )^(3/2)
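For intuition, a closely related closed form is the classical isophote (level-line) curvature, ( I_y^2 I_xx − 2 I_x I_y I_xy + I_x^2 I_yy ) / ( I_x^2 + I_y^2 )^(3/2), which combines the same gradient and second-derivative terms into one symmetric expression. The sketch below (ours, not from the slides) evaluates it from Gaussian derivatives of the image; the smoothing scale sigma is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_curvature(img, sigma=2.0):
    """Curvature of the image level lines: (Iy^2*Ixx - 2*Ix*Iy*Ixy + Ix^2*Iyy) / (Ix^2 + Iy^2)^(3/2)."""
    img = img.astype(float)
    Ix = gaussian_filter(img, sigma, order=(0, 1))     # first derivatives
    Iy = gaussian_filter(img, sigma, order=(1, 0))
    Ixx = gaussian_filter(img, sigma, order=(0, 2))    # second derivatives
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    num = Iy**2 * Ixx - 2.0 * Ix * Iy * Ixy + Ix**2 * Iyy
    den = (Ix**2 + Iy**2) ** 1.5 + 1e-12               # avoid division by zero in flat regions
    return num / den
```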

25 Image Curvature (cont.)
Computation from image intensity: other measures differentiate along the normal to the curve. The idea is that curves may be thicker than one pixel. Differentiating along the normal measures the difference between the internal and external gradient angles. Theoretically these are equal, but in practice they differ due to image discretization: the more the edge is bent, the larger the difference (Kass et al., IJCV 1988).

26 Image Curvature (cont.)
Computation from image intensity: the corresponding measures along the normal are
κ_{⊥φ}(x,y) = ( M_x^2 ∂M_y/∂x − M_x M_y ∂M_x/∂x − M_y^2 ∂M_x/∂y + M_x M_y ∂M_y/∂y ) / ( M_x^2 + M_y^2 )^(3/2)
κ_{−⊥φ}(x,y) = ( −M_x^2 ∂M_y/∂x + M_x M_y ∂M_x/∂x − M_x M_y ∂M_y/∂y + M_y^2 ∂M_x/∂y ) / ( M_x^2 + M_y^2 )^(3/2)

27 Image Curvature (cont.)
Computation from image intensity. [Figure: example responses of κ_φ, κ_{−φ}, κ_{⊥φ} and κ_{−⊥φ}.] Better than the direct computation, but the results are still not consistent.

28 Finding Corners: Image Curvature by Correlation
Key property: in the region around a corner, the image gradient has two or more dominant directions. Corners are repeatable and distinctive.
C. Harris and M. Stephens. "A Combined Corner and Edge Detector." Proceedings of the 4th Alvey Vision Conference, pp. 147-151, 1988.

29 The Basic Idea
We should easily recognize the point by looking through a small window: shifting the window in any direction should give a large change in intensity.
Flat region: no change in any direction.
Edge: no change along the edge direction.
Corner: significant change in all directions.
Source: A. Efros

30 Harris Corner Detector
Change in appearance for the shift [u,v]:
E(u,v) = Σ_{x,y} w(x,y) [ I(x+u, y+v) − I(x,y) ]^2
where w(x,y) is the window function (1 inside the window and 0 outside, or a Gaussian), I(x+u, y+v) is the shifted intensity and I(x,y) the original intensity.
Source: R. Szeliski
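As an illustration of this definition, E(u,v) can be evaluated directly for a small patch; around a corner it grows for every displacement, around an edge only for displacements across the edge. The following brute-force sketch is our own (not from the slides), with a box window and an assumed window half-size:

```python
import numpy as np

def appearance_change(img, x0, y0, u, v, half=7):
    """E(u, v) = sum over a (2*half+1)^2 box window of [I(x+u, y+v) - I(x, y)]^2."""
    ys, xs = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    diff = img[ys + v, xs + u].astype(float) - img[ys, xs].astype(float)
    return float(np.sum(diff**2))

# Toy image containing a bright square; its corner at (20, 20) responds to every shift direction.
img = np.zeros((40, 40))
img[20:, 20:] = 1.0
print(appearance_change(img, 20, 20, 2, 0), appearance_change(img, 20, 20, 0, 2))
```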

31 Harris Detector (cont.)
Change in appearance for the displacement [u,v]:
E(u,v) = Σ_{x,y} w(x,y) [ I(x+u, y+v) − I(x,y) ]^2
Second-order Taylor expansion of E(u,v) around (0,0):
E(u,v) ≈ E(0,0) + [u v] [E_u(0,0); E_v(0,0)] + (1/2) [u v] [ E_uu(0,0)  E_uv(0,0) ; E_uv(0,0)  E_vv(0,0) ] [u; v]

32 Harris Detector (cont.)
Since E(0,0) = 0, the expansion needs the derivatives of E. The first-order term is
E_u = ∂E/∂u = Σ_{x,y} 2 w(x,y) [ I(x+u, y+v) − I(x,y) ] ∂I(x+u, y+v)/∂u
    = Σ_{x,y} 2 w(x,y) [ I(x+u, y+v) − I(x,y) ] I_x(x+u, y+v) ∂(x+u)/∂u
    = Σ_{x,y} 2 w(x,y) [ I(x+u, y+v) − I(x,y) ] I_x(x+u, y+v)
At the origin: E_u(0,0) = 0.

33 Harris Detector (cont.)
E_u = ∂E/∂u = Σ_{x,y} 2 w(x,y) [ I(x+u, y+v) − I(x,y) ] I_x(x+u, y+v)
The second-order term is
E_uu = ∂^2E/∂u^2 = Σ_{x,y} 2 w(x,y) I_x(x+u, y+v) I_x(x+u, y+v) + Σ_{x,y} 2 w(x,y) [ I(x+u, y+v) − I(x,y) ] I_xx(x+u, y+v)
At the origin: E_uu(0,0) = Σ_{x,y} 2 w(x,y) I_x^2(x,y)

34 Harris Detector (cont.)
By the same reasoning we obtain the remaining derivatives:
E_uu(0,0) = Σ_{x,y} 2 w(x,y) I_x^2(x,y)
E_vv(0,0) = Σ_{x,y} 2 w(x,y) I_y^2(x,y)
E_uv(0,0) = E_vu(0,0) = Σ_{x,y} 2 w(x,y) I_x(x,y) I_y(x,y)
and the final expression becomes
E(u,v) ≈ (1/2) [u v] [ E_uu(0,0)  E_uv(0,0) ; E_uv(0,0)  E_vv(0,0) ] [u; v]
       = [u v] ( Σ_{x,y} w(x,y) [ I_x^2  I_x I_y ; I_x I_y  I_y^2 ] ) [u; v]

35 Harris Detector (cont.)
The bilinear approximation simplifies to
E(u,v) ≈ [u v] M [u; v]
where M is a 2x2 matrix computed from the image derivatives:
M = Σ_{x,y} w(x,y) [ I_x^2  I_x I_y ; I_x I_y  I_y^2 ]

36 Harris Detector (cont.)
The surface E(u,v) is locally approximated by a quadratic form. Let's try to understand its shape.
E(u,v) ≈ [u v] M [u; v],   M = Σ_{x,y} w(x,y) [ I_x^2  I_x I_y ; I_x I_y  I_y^2 ]

37 Harris Detector (cont.)
First, consider the axis-aligned case, where the gradients are either horizontal or vertical:
M = Σ [ I_x^2  I_x I_y ; I_x I_y  I_y^2 ] = [ λ_1  0 ; 0  λ_2 ]
If either λ is close to 0, then this is not a corner, so we look for locations where both eigenvalues are large.

38 Harris Detector (cont.)
Since M is symmetric, it can be written as
M = R^(-1) [ λ_1  0 ; 0  λ_2 ] R
We can visualize M as an ellipse with axis lengths determined by the eigenvalues and orientation determined by the rotation matrix R.
Ellipse equation: [u v] M [u; v] = const; the half-axis along the direction of fastest change is (λ_max)^(-1/2) and the half-axis along the direction of slowest change is (λ_min)^(-1/2).

39 Visualization of second moment matrices
[Figure]

40 Visualization of second moment matrices (cont.)
[Figure]

41 Window size matters!
[Figure]

42 Interpreting the eigenvalues
Classification of image points using the eigenvalues of M:
Corner: λ_1 and λ_2 are both large and λ_1 ~ λ_2; E increases in all directions.
Edge: λ_1 >> λ_2 or λ_2 >> λ_1.
Flat region: λ_1 and λ_2 are small; E is almost constant in all directions.
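A minimal sketch of this classification for a single 2x2 second moment matrix (our own illustration; the thresholds are assumed values, not from the slides):

```python
import numpy as np

def classify_point(M, t_small=1e-2, t_ratio=5.0):
    """Classify a 2x2 second moment matrix as 'flat', 'edge' or 'corner' from its eigenvalues."""
    l1, l2 = np.linalg.eigvalsh(M)            # ascending order; both real since M is symmetric
    if l2 < t_small:                           # both eigenvalues small -> flat region
        return "flat"
    if l2 / max(l1, 1e-12) > t_ratio:          # one eigenvalue dominates -> edge
        return "edge"
    return "corner"                            # both large and comparable -> corner

print(classify_point(np.array([[10.0, 0.0], [0.0, 9.0]])))     # corner
print(classify_point(np.array([[10.0, 0.0], [0.0, 0.001]])))   # edge
```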

43 Corner response function
R = det(M) − α trace(M)^2 = λ_1 λ_2 − α (λ_1 + λ_2)^2
α: constant (0.04 to 0.06).
Corner: R > 0.
Edge: R < 0.
Flat region: |R| is small.

44 Harris detector: Steps
1. Compute Gaussian derivatives at each pixel.
2. Compute the second moment matrix M in a Gaussian window around each pixel.
3. Compute the corner response function R.
4. Threshold R.
5. Find local maxima of the response function (non-maximum suppression).
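The five steps translate almost directly into code. Below is a compact sketch of the pipeline (our own implementation with assumed values for the derivative scale, integration scale, α and the threshold, not the original authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris(img, sigma_d=1.0, sigma_i=2.0, alpha=0.05, rel_thresh=0.01):
    """Harris corner detection; returns an array of (row, col) keypoint coordinates."""
    img = img.astype(float)
    # 1. Gaussian derivatives at each pixel
    Ix = gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # 2. Second moment matrix entries, averaged in a Gaussian window
    Sxx = gaussian_filter(Ix * Ix, sigma_i)
    Syy = gaussian_filter(Iy * Iy, sigma_i)
    Sxy = gaussian_filter(Ix * Iy, sigma_i)
    # 3. Corner response R = det(M) - alpha * trace(M)^2
    R = Sxx * Syy - Sxy**2 - alpha * (Sxx + Syy) ** 2
    # 4.-5. Threshold and non-maximum suppression in a 3x3 neighbourhood
    corners = (R > rel_thresh * R.max()) & (R == maximum_filter(R, size=3))
    return np.argwhere(corners)
```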

45 Harris Detector: Steps (cont.)
[Figure]

46 Harris Detector: Steps (cont.)
Compute the corner response R. [Figure]

47 Harris Detector: Steps (cont.)
Find points with large corner response: R > threshold. [Figure]

48 Harris Detector: Steps (cont.)
Take only the points that are local maxima of R. [Figure]

49 Harris Detector: Steps (cont.)
[Figure]

50 Invariance
Features should be detected despite geometric or photometric changes in the image: if we have two transformed versions of the same image, features should be detected in corresponding locations.

51 Models of Image Transformation
Geometric: rotation, scale, and affine (valid for an orthographic camera and a locally planar object).
Photometric: affine intensity change (I → a I + b).

52 Harris Detector: Invariance Properties
Rotation: the ellipse rotates, but its shape (i.e. the eigenvalues) remains the same. The corner response R is invariant to image rotation.

53 Harris Detector: Invariance Properties (cont.)
Affine intensity change: only derivatives are used, so the detector is invariant to an intensity shift I → I + b. An intensity scaling I → a I, however, changes the response with respect to a fixed threshold. [Figure: R versus image coordinate for the two cases.] The detector is therefore only partially invariant to affine intensity changes.

54 Harris Detector: Invariance Properties (cont.)
Scaling: when the image is enlarged, each detection window around a corner sees only a locally straight contour, and all points are classified as edges. The detector is not invariant to scaling.

55 Harris Detector: Invariance Properties (cont.)
Harris corners are not invariant to scaling. This is because the Gaussian derivatives are computed at a specific scale; if the images differ in scale, the detected corners will be different. For scale invariance, it is necessary to detect features that can be reliably extracted under scale changes.

56 Scale-invariant feature detection
Goal: independently detect corresponding regions in scaled versions of the same image. We need a scale selection mechanism for finding a characteristic region size that is covariant with the image transformation. Idea: given a key point in two images, determine whether the surrounding neighborhoods contain the same structure up to scale. We could do this by sampling each image at a range of scales and performing comparisons at each pixel to find a match, but this is impractical.

57 Scale-invariant feature detection (cont.)
Evaluate a signature function of the local neighborhood and plot the result as a function of scale; its shape should be similar at different scales.

58 Scale-invariant feature detection (cont.)
The only operator fulfilling these requirements is a scale-normalized Gaussian.
T. Lindeberg. "Scale-space theory: a basic tool for analysing structures at different scales." Journal of Applied Statistics, 21(2), pp. 225-270, 1994.

59 Scale-invariant feature detection (cont.)
Based on the above idea, Lindeberg (1998) proposed a detector for blob-like features that searches for scale-space extrema of a scale-normalized LoG.
T. Lindeberg. "Feature detection with automatic scale selection." International Journal of Computer Vision, 30(2), pp. 79-116, 1998.

60 Scale-invariant feature detection (cont.)
[Figure]

61 Recall: Edge Detection
Convolving a signal f with the derivative of a Gaussian, d/dx g, gives the response f * (d/dx g); an edge corresponds to a maximum of this derivative response.
Source: S. Seitz

62 Recall: Edge Detection (cont.)
Convolving f with the second derivative of the Gaussian (the Laplacian), d^2/dx^2 g, gives the response f * (d^2/dx^2 g); an edge corresponds to a zero crossing of the second derivative.
Source: S. Seitz

63 From edges to blobs
Edge = ripple. Blob = superposition of two edges (two ripples). Spatial selection: the magnitude of the Laplacian response achieves a maximum at the center of the blob, provided the scale of the Laplacian is matched to the scale of the blob.

64 Scale selection
We want to find the characteristic scale of the blob by convolving it with Laplacians at several scales and looking for the maximum response. However, the Laplacian response decays as the scale increases. [Figure: original signal (radius 8) and responses for increasing σ.] Why does this happen?

65 Scale normalization
The response of a derivative-of-Gaussian filter to a perfect step edge decreases as σ increases; its peak value is 1 / ( σ sqrt(2π) ).

66 Scale normalization (cont.)
The response of a derivative-of-Gaussian filter to a perfect step edge decreases as σ increases. To keep the response the same (scale-invariant), we must multiply the Gaussian derivative by σ. The Laplacian is the second derivative of the Gaussian, so it must be multiplied by σ^2.
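A quick numerical check of this statement (our own sketch, not from the slides): the unnormalized derivative-of-Gaussian response to a step shrinks like 1/σ, while multiplying it by σ keeps it constant at 1/sqrt(2π):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

step = np.zeros(1001)
step[500:] = 1.0                                             # perfect step edge
for sigma in (2.0, 4.0, 8.0, 16.0):
    resp = np.abs(gaussian_filter1d(step, sigma, order=1))   # derivative-of-Gaussian response
    # the raw peak decays as 1/(sigma*sqrt(2*pi)); sigma * peak stays at about 0.399
    print(sigma, resp.max(), sigma * resp.max())
```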

67 Effect of scale normalization
[Figure: original signal, unnormalized Laplacian response, and scale-normalized Laplacian response with its maximum.]

68 Blob detection in 2D
Laplacian: circularly symmetric operator for blob detection in 2D:
∇^2 g = ∂^2 g/∂x^2 + ∂^2 g/∂y^2

69 Blob detection in 2D (cont.)
Laplacian: circularly symmetric operator for blob detection in 2D. Scale-normalized:
∇^2_norm g = σ^2 ( ∂^2 g/∂x^2 + ∂^2 g/∂y^2 )

70 Scale selection
At what scale does the Laplacian achieve a maximum response for a binary circle of radius r? [Figure: the image and its Laplacian.]

71 Scale selection (cont.)
The 2D LoG is given (up to scale) by
( x^2 + y^2 − 2σ^2 ) e^( −(x^2 + y^2) / (2σ^2) )
For a binary circle of radius r, the LoG achieves a maximum at σ = r / sqrt(2). [Figure: Laplacian response versus scale σ, peaking at r / sqrt(2).]

72 Characteristic scale
We define the characteristic scale as the scale that produces the peak of the LoG response.
T. Lindeberg (1998). "Feature detection with automatic scale selection." International Journal of Computer Vision, 30(2), pp. 79-116.
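The r / sqrt(2) relation can be verified numerically (our own sketch, with an assumed image size and scale range): the scale-normalized LoG response at the centre of a binary disc of radius r peaks near σ = r / sqrt(2):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

r, size = 20, 201
yy, xx = np.mgrid[:size, :size] - size // 2
disc = (xx**2 + yy**2 <= r**2).astype(float)             # binary circle of radius r

sigmas = np.linspace(5.0, 30.0, 60)
# scale-normalized LoG response at the disc centre: sigma^2 * |LoG(image)|
resp = [s**2 * abs(gaussian_laplace(disc, s)[size // 2, size // 2]) for s in sigmas]
print(sigmas[int(np.argmax(resp))], r / np.sqrt(2.0))    # both close to 14.1
```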

73 Scale-space blob detector
1. Convolve the image with a scale-normalized LoG at several scales.
2. Find the maxima of the squared LoG response in scale-space.

74 Scale-space blob detector: Example
[Figure]
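A compact sketch of these two steps (our own implementation; the scale range and the response threshold are assumed parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(img, sigmas=np.geomspace(2.0, 30.0, 12), rel_thresh=0.3):
    """Blob detection: maxima of the squared scale-normalized LoG over (row, col, sigma).

    Returns an array of (row, col, sigma) triples."""
    img = img.astype(float)
    sigmas = np.asarray(sigmas)
    # Step 1: squared scale-normalized LoG response at each scale
    cube = np.stack([(s**2 * gaussian_laplace(img, s)) ** 2 for s in sigmas])
    # Step 2: local maxima of the 3D scale-space volume, above a threshold
    peaks = (cube == maximum_filter(cube, size=3)) & (cube > rel_thresh * cube.max())
    k, r, c = np.nonzero(peaks)
    return np.column_stack([r, c, sigmas[k]])
```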

75 Scale-space blob detector: Example
[Figure]

76 Scale-space blob detector: Example
[Figure]

77 Efficient implementation
Approximating the LoG with a difference of Gaussians:
L = σ^2 ( G_xx(x,y,σ) + G_yy(x,y,σ) )   (Laplacian)
DoG = G(x,y,kσ) − G(x,y,σ)   (Difference of Gaussians)
We have studied this topic in edge detection.

78 Efficient implementation (cont.)
Divide each octave into an equal number K of intervals such that
k = 2^(1/K),   σ_n = k^n σ_0,   n = 1, ..., K
Implementation by a Gaussian pyramid.
D. G. Lowe. "Distinctive image features from scale-invariant keypoints." International Journal of Computer Vision, 60(2), pp. 91-110, 2004.
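The scale schedule and the DoG levels of one octave can be written down directly (our own sketch; σ_0 and K are assumed parameter values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(img, sigma0=1.6, K=3):
    """One octave of a DoG pyramid: DoG_n = G(k^(n+1)*sigma0) - G(k^n*sigma0), with k = 2^(1/K)."""
    k = 2.0 ** (1.0 / K)
    sigmas = [sigma0 * k**n for n in range(K + 1)]               # sigma0, k*sigma0, ..., 2*sigma0
    blurred = [gaussian_filter(img.astype(float), s) for s in sigmas]
    dogs = [b2 - b1 for b1, b2 in zip(blurred[:-1], blurred[1:])]
    return dogs, sigmas
```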

79 The Harris-Laplace detector
It combines the Harris operator for corner-like structures with the scale-space selection mechanism of the DoG. Two scale spaces are built: one for the Harris corners and one for the blob detector (DoG). A key point is a Harris corner that simultaneously attains a maximum of the DoG at the same scale. Due to this constraint, it provides fewer key points than the DoG alone.
K. Mikolajczyk and C. Schmid. "Scale and affine invariant interest point detectors." International Journal of Computer Vision, 60(1), pp. 63-86, 2004.

80 From scale invariance to affine invariance
For many problems it is important to find features that are invariant under large viewpoint changes. The projective distortion may not be correctable locally due to the small number of pixels, but a local affine approximation is usually sufficient.

81 From scale invariance to affine invariance (cont.)
Affine adaptation of scale-invariant detectors: find local regions where an ellipse can be reliably and repeatedly extracted purely from local image properties.

82 Affine Adaptation
Recall:
M = Σ_{x,y} w(x,y) [ I_x^2  I_x I_y ; I_x I_y  I_y^2 ] = R^(-1) [ λ_1  0 ; 0  λ_2 ] R
We can visualize M as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R.
Ellipse equation: [u v] M [u; v] = const, with half-axes (λ_max)^(-1/2) along the direction of fastest change and (λ_min)^(-1/2) along the direction of slowest change.

83 Affine adaptation example
Scale-invariant regions (blobs). [Figure]

84 Affine adaptation example
Affine-adapted blobs. [Figure]

85 Affine adaptation
The covariant ellipse of the Harris corner detector may be viewed as the characteristic shape of a region. We can normalize the region by transforming the ellipse into a unit circle; the normalized regions can then be matched under any affine transformation.

86 Affine adaptation (cont.)
Problem: the second moment window determined by the weights w(x,y) must match the characteristic shape of the region.
Solution: an iterative approach (see the sketch below).
Use a circular window to compute the second moment matrix.
Based on the eigenvalues, perform affine adaptation to find an ellipse-shaped window.
Recompute the second moment matrix in the ellipse and iterate.
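A rough, heavily simplified sketch of the iteration (our own illustration, not the Mikolajczyk-Schmid implementation: it only whitens a fixed patch by M^(-1/2); the patch size, smoothing scales, iteration count and stopping ratio are assumed, and a full detector would also re-estimate location and scale):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, affine_transform

def second_moment(patch, sigma_d=1.0):
    """Gaussian-weighted 2x2 second moment matrix of a square patch."""
    Ix = gaussian_filter(patch, sigma_d, order=(0, 1))
    Iy = gaussian_filter(patch, sigma_d, order=(1, 0))
    n = patch.shape[0]
    y, x = np.mgrid[:n, :n] - n // 2
    w = np.exp(-(x**2 + y**2) / (2.0 * (n / 4.0) ** 2))   # circular Gaussian window
    M = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
                  [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
    return M / np.sum(w)

def affine_adapt(patch, n_iter=10):
    """Iteratively warp the patch with M^(-1/2) until its second moment matrix is nearly isotropic."""
    patch = np.asarray(patch, dtype=float)
    A = np.eye(2)                                  # maps normalized-patch coordinates back to the original patch
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh(second_moment(patch))
        if vals.min() <= 0 or vals.max() / vals.min() < 1.05:
            break                                  # degenerate, or the ellipse is already (almost) a circle
        W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # whitening transform M^(-1/2)
        W /= np.sqrt(np.linalg.det(W))             # unit determinant: keep the patch area fixed
        c = (np.array(patch.shape, dtype=float) - 1.0) / 2.0
        patch = affine_transform(patch, W, offset=c - W @ c, order=1)
        A = A @ W
    return patch, A
```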

87 Iterative affine adaptation
[Figure: regions converging over the iterations.]
K. Mikolajczyk and C. Schmid. "Scale and affine invariant interest point detectors." International Journal of Computer Vision, 60(1), pp. 63-86, 2004.

88 Orientation ambiguity
There is no unique transformation from an ellipse to a unit circle: we can rotate or flip a unit circle, and it still stays a unit circle.

89 Orientation ambiguity (cont.)
We have to assign a unique orientation to the keypoints in the circle:
Create the histogram of local gradient directions in the patch (over [0, 2π)).
Assign to the patch the orientation of the peak of the smoothed histogram.

90 Summary: Feature extraction
Extract affine regions, normalize the regions, eliminate the rotational ambiguity, and compute appearance descriptors, e.g. SIFT (Lowe 2004).
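A minimal sketch of this orientation assignment (our own illustration; the number of bins and the smoothing window are assumed choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def dominant_orientation(patch, n_bins=36):
    """Peak of the smoothed histogram of gradient directions over [0, 2*pi)."""
    gy, gx = np.gradient(patch.astype(float))
    angles = np.arctan2(gy, gx) % (2.0 * np.pi)        # gradient direction of each pixel
    mags = np.hypot(gx, gy)                            # gradient magnitude used as the vote weight
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, 2.0 * np.pi), weights=mags)
    hist = uniform_filter1d(hist.astype(float), size=3, mode="wrap")   # circular smoothing
    return edges[np.argmax(hist)] + np.pi / n_bins     # centre of the peak bin
```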

91 Invariance vs. covariance
Invariance: features(transform(image)) = features(image).
Covariance: features(transform(image)) = transform(features(image)).
Covariant detection => invariant description.
