Gaussian derivatives


Gaussian derivatives. UCU Winter School 2017. James Pritts, Czech Technical University, January 16, 2017. Images taken from Noah Snavely's and Robert Collins's course notes.

Definition: An image (grayscale) is a function I from R^2 to R such that I(x, y) gives the intensity at position (x, y). Definition: A digital image (grayscale) is a sampled and quantized version of I; the discrete values are indexed as I[x, y].

Why do we need image derivatives? Image derivatives will be used to construct discrete operators that detect salient differential geometry in the scene. Some desiderata:
- sparse representation of the image
- repeatability
- salient features
- invariance to photometric and geometric transforms of the image
Examples include Harris corners, Hessian-Affine (blobs), and Maximally Stable Extremal Regions.

Finite forward difference. Taylor series expansion:

  I(x+h) = I(x) + h I_x(x) + (1/2) h^2 I_xx(x) + (1/3!) h^3 I_xxx(x) + O(h^4)

  ⟹ (I(x+h) − I(x)) / h = I_x(x) + O(h)

Template: [-1 1];  I_x[x] ≈ (I[x+h] − I[x]) / h

Finite backward difference. Taylor series expansion:

  I(x−h) = I(x) − h I_x(x) + (1/2) h^2 I_xx(x) − (1/3!) h^3 I_xxx(x) + O(h^4)

  ⟹ (I(x) − I(x−h)) / h = I_x(x) + O(h)

Template: [-1 1];  I_x[x] ≈ (I[x] − I[x−h]) / h

Central difference. Taylor series expansion:

  I(x+h) = I(x) + h I_x(x) + (1/2) h^2 I_xx(x) + (1/3!) h^3 I_xxx(x) + O(h^4)
  I(x−h) = I(x) − h I_x(x) + (1/2) h^2 I_xx(x) − (1/3!) h^3 I_xxx(x) + O(h^4)

Subtracting:

  I(x+h) − I(x−h) = 2h I_x(x) + (2/3!) h^3 I_xxx(x) + O(h^4)

  ⟹ (I(x+h) − I(x−h)) / (2h) = I_x(x) + O(h^2)

Template: [−1/2 0 1/2];  I_x[x] ≈ (I[x+h] − I[x−h]) / (2h)

Convolution in spatial domain

  (I ∗ f)[x, y] = Σ_{i=−k..k} Σ_{j=−k..k} I[i, j] f[x−i, y−j]

- convolution is equivalent to flipping the filter in both dimensions and correlating
- same result for symmetric kernels
- many libraries conflate convolution and correlation
- so why NOT just use cross-correlation?

Applying a derivative to image row i. Forward difference:

    for (j = jstart; j <= jend; j++)
        h[j] = I[j+1] - I[j];

Forward difference written with a mask, f[0] = 1 and f[1] = -1:

    for (j = jstart; j <= jend; j++)
        h[j] = f[0]*I[j+1] + f[1]*I[j];

Row i with an arbitrary derivative defined by mask f, e.g. the central difference f[-1] = -1/2, f[0] = 0, f[1] = 1/2:

    for (j = jstart; j <= jend; j++) {
        h[j] = 0;
        for (b = bstart; b <= bend; b++)
            h[j] += I[b] * f[j-b];
    }

Image derivatives are convolutions. The whole image with an arbitrary derivative defined by mask f:

    for (i = istart; i <= iend; i++)
        for (j = jstart; j <= jend; j++) {
            h[i][j] = 0;
            for (a = astart; a <= aend; a++)
                for (b = bstart; b <= bend; b++)
                    h[i][j] += I[a][b] * f[i-a][j-b];
        }

The loops over a, b compute the discrete convolution at (i, j):

  h[i, j] := (I ∗ f)[i, j] = Σ_a Σ_b I[a, b] f[i−a, j−b]

Image gradient

  ∇I = (∂I/∂x, ∂I/∂y);  |∇I| = sqrt((∂I/∂x)^2 + (∂I/∂y)^2)

Artificially added white noise N (µ = 0, σ = 16)

Effect of white noise on image derivatives. [Figure: a row profile I[x, y = j] of the noisy image, and its derivative (d/dx) I[x, y = j].]

[Figure: Gaussian smoothing with σ = 0, 1, 5, 10, 30.]
Why do we use the Gaussian as a low-pass filter?
- white noise exists at all frequencies
- corners and edges are represented by high frequencies
- we need to remove noise but maintain details
- the Fourier transform of a Gaussian is a Gaussian: it does not have a sharp cutoff at some pass-band frequency and does not oscillate
- weighted averaging is spatial blurring is low-pass filtering
- we will blur in the spatial domain with a Gaussian

Box filter vs Gaussian. [Figure: box-filtered, original, and Gaussian-filtered images.] The box filter garbles the high-frequency signal while removing noise; it does not behave well in the frequency domain.

Gaussian kernel

  g_σ[x, y] = (1 / (2πσ^2)) e^(−(x^2 + y^2) / (2σ^2))

[Figure: kernels for σ = 1, 5, 10, 30.]

Gaussian smoothing. [Figure: image I, kernel g, the smoothed image I ∗ g, and the derivative of the smoothed image (d/dx)(I ∗ g).]

Separability of Gaussian kernels
Definition: A 2D kernel g is called separable if it can be broken down into the convolution of two 1D kernels: g = g^(1) ∗ g^(2).
The Gaussian is separable:

  g_σ[x, y] = (1 / (2πσ^2)) e^(−(x^2 + y^2) / (2σ^2))
            = (1 / (√(2π) σ)) e^(−x^2 / (2σ^2)) · (1 / (√(2π) σ)) e^(−y^2 / (2σ^2))
            = g_σ^(1)[x] g_σ^(2)[y]

and therefore

  (I ∗ g_σ)[x, y] = Σ_i Σ_j g_σ[x−i, y−j] I[i, j]
                  = Σ_i Σ_j g_σ^(1)[x−i] g_σ^(2)[y−j] I[i, j]
                  = Σ_i g_σ^(1)[x−i] ( Σ_j g_σ^(2)[y−j] I[i, j] )
                  = (g_σ^(1) ∗ (g_σ^(2) ∗ I))[x, y]

Complexity of convolution in the spatial domain. What are the operation count and complexity of applying a k×k kernel to an m×n image?
- non-separable kernel: k^2 · m · n operations, i.e. O(k^2) per pixel
- separable kernel: 2k · m · n operations, i.e. O(k) per pixel
It pays to take advantage of separability!

Important Gaussian derivative properties
- Image differentiation d/dx is a convolution on image I.
- Smoothing by a Gaussian kernel g is a convolution on image I.
- The 2D Gaussian kernel is separable: g = g^(1) ∗ g^(2).
- Convolution is commutative, I ∗ g = g ∗ I, and associative, (I ∗ g) ∗ h = I ∗ (g ∗ h).
So

  (d/dx)(I ∗ g) = I ∗ (d/dx) g = (I ∗ ((d/dx) g^(1))) ∗ g^(2)

[Figure: g and (d/dx) g.]

First derivatives of a Gaussian. [Figure: (d/dx) g and (d/dy) g.]

Gaussian derivatives like a boss. If you want to level up, you can exploit a recurrence relation of the Hermite polynomials to algorithmically construct Gaussian derivatives of any order without convolution or symbolic differentiation.