
Advanced Edge Detection (1)
Lecture 4
See Sections 2.4 and 1.2.5 in Reinhard Klette: Concise Computer Vision, Springer-Verlag, London, 2014.
(1) See last slide for copyright information.
1 / 27

Agenda
1 LoG and DoG
2 Phase-Congruency Model
3 Embedded Confidence
2 / 27

Laplacian of Gaussian (LoG) Edge Detector
Convolution theorem for LoG: ∇²(G ∗ I) = I ∗ ∇²G
Follows from applying the general rule of convolution twice: D(F ∗ H) = D(F) ∗ H = F ∗ D(H), where D is a derivative.
Conclusion: only one convolution is needed, with ∇²G.
G_x(x, y) = −x/(2πσ⁴) · e^(−(x²+y²)/(2σ²))
G_y(x, y) = −y/(2πσ⁴) · e^(−(x²+y²)/(2σ²))
∇²G(x, y) = 1/(2πσ⁴) · ((x² + y² − 2σ²)/σ²) · e^(−(x²+y²)/(2σ²))
3 / 27
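
For illustration, here is a minimal NumPy sketch (not from the book) that evaluates the ∇²G formula above directly; the function name log_value and its argument conventions are my own.

```python
import numpy as np

def log_value(x, y, sigma):
    """Laplacian of Gaussian evaluated at (x, y) for standard deviation sigma."""
    r2 = x * x + y * y
    return (1.0 / (2.0 * np.pi * sigma**4)) * ((r2 - 2.0 * sigma**2) / sigma**2) \
        * np.exp(-r2 / (2.0 * sigma**2))
```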

The Mexican Hat as Filter Kernel
The distance between the two zero-crossings x_1 and x_2 of ∇²G is w = |x_1 − x_2| = 2√2·σ; sample a 3w × 3w kernel.
For σ = 1 this gives 3w = 8.485..., thus a 9 × 9 kernel (i.e., k = 4).
4 / 27
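
A hedged sketch of how one might turn the size rule above into a sampled kernel; log_kernel and the even-to-odd rounding are assumptions on my part, chosen so that σ = 1 yields the 9 × 9 kernel mentioned on the slide.

```python
import numpy as np

def log_kernel(sigma):
    """Sample a (2k+1) x (2k+1) LoG ('Mexican hat') kernel using the 3w size rule."""
    w = 2.0 * np.sqrt(2.0) * sigma          # distance between the two zero-crossings
    size = int(np.ceil(3.0 * w))            # 3w, rounded up ...
    if size % 2 == 0:                       # ... to an odd kernel size
        size += 1
    k = size // 2                           # for sigma = 1: size = 9, k = 4
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    r2 = xs * xs + ys * ys
    return ((r2 - 2.0 * sigma**2) / (2.0 * np.pi * sigma**6)) * np.exp(-r2 / (2.0 * sigma**2))

print(log_kernel(1.0).shape)                # (9, 9)
```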

Difference of Gaussians (DoG)
Approximation of the LoG for reduced run time.
Scale s = 2σ².
Level function L(x, y, s) = [I ∗ G_s](x, y).
DoG for initial scale s and scaling factor k > 1: D_{s,k}(x, y) = L(x, y, s) − L(x, y, ks).
We have that D_{s,1.4}(x, y) ≈ [∇²(G_s ∗ I)](x, y).
Edges are detected by zero-crossings (as for the LoG).
5 / 27
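
A rough sketch of the DoG idea using SciPy's Gaussian filter; for simplicity it is parameterized directly by σ and the ratio k rather than by the scale s, and the zero-crossing test is my own minimal version, not the book's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma, k=1.4):
    """Difference of two Gaussian-smoothed versions of the image."""
    image = image.astype(np.float64)
    return gaussian_filter(image, sigma) - gaussian_filter(image, k * sigma)

def zero_crossings(d):
    """Mark pixels where the DoG response changes sign horizontally or vertically."""
    sign = d > 0
    edges = np.zeros_like(sign)
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return edges
```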

Agenda
1 LoG and DoG
2 Phase-Congruency Model
3 Embedded Confidence
6 / 27

Kovesi Edge Map
Edge map resulting when applying the Kovesi algorithm.
7 / 27

Recall: Magnitude and Phase of Complex Numbers
The DFT maps a real-valued image into a complex-valued Fourier transform.
z = a + √−1·b is defined in polar coordinates by magnitude ‖z‖_2 = r = √(a² + b²) and phase α = arctan(b/a).
8 / 27
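
A two-line NumPy illustration of magnitude and phase (the example value is mine); note that np.angle uses the quadrant-aware arctan2(b, a).

```python
import numpy as np

z = 3.0 + 4.0j        # a + sqrt(-1) * b with a = 3, b = 4
r = np.abs(z)         # magnitude sqrt(a^2 + b^2) = 5.0
alpha = np.angle(z)   # phase in radians
print(r, alpha)
```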

Local Fourier Transform
For p = (x, y) in image I and a (2k + 1) × (2k + 1) filter kernel:
J(u, v) = 1/(2k+1)² · Σ_{i=0}^{2k} Σ_{j=0}^{2k} I(x + i, y + j) · W_{2k+1}^{iu} · W_{2k+1}^{jv},  for 0 ≤ u, v ≤ 2k
9 / 27
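
A sketch of how such a local DFT could be computed with NumPy's FFT; I assume the usual root of unity W_n = e^(−2π√−1/n) and a window whose top-left corner is anchored at (x, y), so the row/column conventions are my own.

```python
import numpy as np

def local_dft(image, x, y, k):
    """(2k+1) x (2k+1) local DFT J(u, v) of the window anchored at pixel (x, y)."""
    n = 2 * k + 1
    window = image[y:y + n, x:x + n].astype(np.float64)
    # np.fft.fft2 computes sum_{i,j} window[i, j] * W^(i*u) * W^(j*v) with W = exp(-2j*pi/n);
    # dividing by n^2 matches the 1/(2k+1)^2 normalization above.
    return np.fft.fft2(window) / (n * n)
```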

Vector Sum of Complex Numbers
J is composed of n = (2k + 1)² complex numbers z_h, for 1 ≤ h ≤ n.
Each z_h is defined by magnitude r_h = ‖z_h‖_2 and phase α_h.
Figure: addition of four complex numbers z_h = (r_h, α_h) in the complex plane, with vector sum z.
10 / 27

The Phase-Congruency Model for Edges
0 ≤ C_phase(p) = ‖z‖_2 / Σ_{h=1}^{n} r_h ≤ 1, where z = z_1 + ... + z_n is the vector sum.
C_phase(p) = 1 defines perfect congruency.
C_phase(p) = 0 for perfectly opposing phases and magnitudes.
Local phase congruency identifies features in images.
High phase congruency at adjacent pixels defines edges.
11 / 27
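
The measure translates directly into NumPy; the array of complex local responses z_h is assumed to be given (e.g., from the local DFT or the Gabor responses below).

```python
import numpy as np

def phase_congruency(z):
    """C_phase = |sum_h z_h| / sum_h |z_h| for an array of complex responses z_h."""
    z = np.asarray(z, dtype=complex)
    denom = np.sum(np.abs(z))
    if denom == 0:
        return 0.0                                     # no response at all
    return np.abs(np.sum(z)) / denom

print(phase_congruency([1 + 1j, 2 + 2j, 0.5 + 0.5j]))  # identical phases -> 1.0
print(phase_congruency([1 + 0j, -1 + 0j]))             # opposing phases  -> 0.0
```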

Kovesi Algorithm
1 Apply n Gabor-filter functions for an approximate local DFT.
2 Quantify phase congruency for the resulting (r_h, α_h), 1 ≤ h ≤ n, also incorporating noise compensation.
3 If phase congruency ≥ T, then edge pixel.
Only one threshold parameter T if the set of Gabor functions is fixed.
Figure: example of a Gabor function (also known as a Gabor wavelet).
12 / 27
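
To make Step 1 a little more concrete, here is a hedged sketch of one complex Gabor kernel in a common textbook parameterization (isotropic Gaussian envelope, wavelength, orientation); it is not necessarily the exact filter family Kovesi uses.

```python
import numpy as np

def gabor_kernel(k, wavelength, theta, sigma):
    """Complex Gabor kernel of size (2k+1) x (2k+1): Gaussian envelope times a complex wave."""
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)       # coordinate along orientation theta
    envelope = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * xr / wavelength)
    return envelope * carrier

# Convolving the image with a fixed bank of such kernels (several orientations and
# wavelengths) yields the complex responses (r_h, alpha_h) entering the phase-congruency
# measure; thresholding that measure at T gives the edge map.
```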

Agenda
1 LoG and DoG
2 Phase-Congruency Model
3 Embedded Confidence
13 / 27

Confidence Measure for a Feature Detector
A confidence measure is quantified information, derived from the calculated data, that is used to decide on the existence of a particular feature; if the calculated data match the underlying model of the feature detector reasonably well, then this should correspond to high values of the confidence measure.
14 / 27

Meer-Georgescu Algorithm
1: for every pixel p in image I do
2:    estimate gradient magnitude g(p) and edge direction θ(p);
3:    compute the confidence measure η(p);
4: end for
5: for every pixel p in image I do
6:    determine value ρ(p) in the cumulative distribution of gradient magnitudes;
7: end for
8: generate the ρη diagram for image I;
9: perform non-maxima suppression;
10: perform hysteresis thresholding;
Four parameters: gradient magnitude g(p) = ‖g(p)‖_2 (the 2-norm of the estimated gradient vector g(p)), gradient direction θ(p), edge confidence η(p), and percentile ρ_k of the cumulative gradient-magnitude distribution.
15 / 27

Gradient Magnitude and Gradient Direction Estimators
The trace Tr(A) of an n × n matrix A = (a_ij) is the sum Σ_{i=1}^{n} a_ii of its (main) diagonal elements.
A: a matrix representation of the (2k + 1) × (2k + 1) window W_p(I).
W: a row-/column-symmetric (2k + 1) × (2k + 1) matrix of weights.
d_1 = Tr(WA) and d_2 = Tr(AWᵀ)
g(p) = √(d_1² + d_2²) and θ(p) = arctan(d_1/d_2)
Note: these estimates are not in the x- and y-directions but in 45°-rotated directions; we could also use the estimators defined earlier.
16 / 27
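
A small NumPy sketch of these estimators; the weight matrix W is left as an input, and d_2 = Tr(AWᵀ) reflects one plausible reading of the slide, so treat the exact pairing of traces as an assumption.

```python
import numpy as np

def gradient_from_traces(A, W):
    """Gradient magnitude and direction from window matrix A and weight matrix W via traces."""
    d1 = np.trace(W @ A)
    d2 = np.trace(A @ W.T)            # assumed transposed pairing (see note above)
    g = np.hypot(d1, d2)              # sqrt(d1^2 + d2^2)
    theta = np.arctan2(d1, d2)        # arctan(d1 / d2), quadrant-aware
    return g, theta
```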

Percentile and Confidence
Gradient magnitudes g_[1] < ... < g_[k] < ... < g_[N] in image I, with ρ_k = Prob[g ≤ g_[k]] for 1 ≤ k ≤ N.
If g_[k] is the value closest to the edge magnitude g(p), then
Percentile: ρ(p) = ρ_k, a value between 0 and 1.
Let A_ideal be a (2k + 1) × (2k + 1) matrix representing a template of an ideal step edge having gradient direction θ(p).
Confidence: η(p) = Tr(A_idealᵀ A), a value between 0 and 1.
17 / 27
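
The percentile can be read off the empirical cumulative distribution of gradient magnitudes; a minimal sketch, with array names of my own choosing.

```python
import numpy as np

def percentile_map(g):
    """rho(p): fraction of pixels whose gradient magnitude is <= g(p)."""
    flat = np.sort(g.ravel())
    # side='right' counts how many magnitudes are <= each g(p)
    ranks = np.searchsorted(flat, g.ravel(), side='right')
    return (ranks / flat.size).reshape(g.shape)
```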

ρη Diagram
Left: implicitly given curves L(ρ, η) = 0 and H(ρ, η) = 0 separate the unit square into points with positive or negative signs.
Right: virtual neighbors q_1 and q_2 of pixel p in the estimated gradient direction.
18 / 27

Non-Maxima Suppression
Define a curve X(ρ, η) = 0 in ρη space.
1 For the current pixel p, determine the virtual neighbors q_1 and q_2.
2 Determine ρ and η values for q_1 and q_2 by interpolating the values at adjacent pixel locations.
3 p describes a maximum with respect to X if both virtual neighbors q_1 and q_2 have a negative sign for curve X.
Non-maxima suppression in Step 9: only the remaining pixels are candidates for the edge map.
19 / 27

Hysteresis Thresholding
Hysteresis thresholding is a general technique for making a decision in a process based on previously obtained results, attempting to continue as before.
We have two hysteresis thresholds L and H in ρη space. A pixel p with values ρ and η stays in the edge map if
1 L(ρ, η) > 0 and H(ρ, η) ≥ 0, or
2 p is adjacent to a pixel already in the edge map and satisfies L(ρ, η) · H(ρ, η) < 0.
The second condition describes the hysteresis; it is applied recursively.
20 / 27
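
A sketch of the recursive step as an iterative flood fill over two boolean masks; strong (condition 1 holds) and weak (the sign condition of 2 holds) are assumed to be precomputed from L and H.

```python
import numpy as np
from collections import deque

def hysteresis(strong, weak):
    """Keep strong pixels plus weak pixels 8-connected to already kept pixels."""
    kept = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < kept.shape[0] and 0 <= nx < kept.shape[1]:
                    if weak[ny, nx] and not kept[ny, nx]:
                        kept[ny, nx] = True
                        queue.append((ny, nx))
    return kept
```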

Options for the Meer-Georgescu Algorithm
The Meer-Georgescu algorithm can act as
1 a Canny edge detector on the gradient magnitudes if the two hysteresis thresholds are vertical lines, and
2 a confidence-only detector if the two lines are horizontal.
21 / 27

Results
Results of the Meer-Georgescu algorithm with a larger (left) or a smaller (right) filter kernel.
22 / 27

Original Image Set1Seq1 23 / 27

Kovesi Edge Detector
Results of the Kovesi algorithm using different thresholds T.
24 / 27

Sobel and Canny Edge Detectors
Results of the Sobel detector (left) and of the Canny operator (right).
25 / 27

Adaptation, and Model Diversity versus Simplicity
There is no best edge detector; it all depends on the application context. Adaptation is often the key word: operator and parameters need to be adjusted to the given data.
Design ideas such as the step-edge or phase-congruency model, the combination of 1st- and 2nd-order derivatives in the former case, accumulated evidence based on results at adjacent pixels (hysteresis thresholding), confidence measures, and thinning of the resulting edges (non-maxima suppression) can be used for moving towards adapted solutions.
A simple method such as the Sobel operator is still often useful because it does not modify the data at this early processing stage based on some model which might not hold in general.
26 / 27

Copyright Information
This slide show was prepared by Reinhard Klette with kind permission from Springer Science+Business Media B.V. The slide show can be used freely for presentations. However, all the material is copyrighted.
R. Klette. Concise Computer Vision. © Springer-Verlag, London, 2014.
In case of citation: just cite the book, that's fine.
27 / 27