Low-level Image Processing


In-Place Covariance Operators for Computer Vision

Terry Caelli and Mark Ollila
School of Computing, Curtin University of Technology, Perth, Western Australia, Box U 1987, Australia. Email: tmc@cs.mu.oz.au

Abstract. Perhaps the most common low-level operation in computer vision is feature extraction, and a large number of specific feature extraction techniques are already available, involving transforms, convolutions, filtering or relaxation-type operators. In this paper, however, we explore a different approach to these more classical methods, based on non-parametric in-place covariance operators and a geometric model of image data. We apply these operators to both range and intensity data and show how many of the classical features can be redefined and extracted with this approach, in more robust ways. In particular, we explore how, for range data, surface types, jumps and creases have a natural expression in terms of first- and second-order covariance operators, and how these measures relate to the well-known Weingarten map. For intensity data, we show how edges, lines, corners and textures can also be extracted by observing the eigenstructures of similar first- and second-order covariance operators. Finally, the robustness, limitations and non-parametric nature of this operator are discussed, and example range and intensity image results are shown.

1 Introduction

We have explored the use of covariance operators for the computation of local shape measures relevant to range and intensity (surface) image data. The covariance approach dispenses with surface parameterization and provides invariant descriptions of shape via the eigenvalues and eigenvectors of covariance matrices of different orders. The aim of this paper is to summarize the properties and results of the covariance approach as they are applied to feature extraction in both types of surface image data.
Following Liang and Todhunter [1] we define the local (first-order) surface covariance matrix C_I as:

C_I = \frac{1}{n} \sum_{i=1}^{n} (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^T,   (1)

where \mathbf{x}_i = (x_i, y_i, z_i), with (x_i, y_i) the coordinates in the image projection plane and z_i the depth or intensity value at position i; \bar{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{x}_i is the mean position vector; and n is the total number of pixels in the neighborhood of \mathbf{x} used to compute Eqn. (1).
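As a concrete illustration (a minimal sketch, not from the paper; the function name, variable names and test surface are our own), Eqn. (1) can be computed directly with NumPy. The eigenvector belonging to the smallest eigenvalue of C_I then estimates the least-squares surface normal:

```python
import numpy as np

def first_order_covariance(points):
    """Local first-order covariance matrix C_I (Eqn. 1) of an
    n x 3 array of neighbourhood points (x, y, z)."""
    mean = points.mean(axis=0)          # mean position vector
    d = points - mean
    return d.T @ d / len(points)        # 3 x 3 covariance matrix

# Hypothetical neighbourhood: points sampled from the plane z = x + y,
# whose unit normal is (1, 1, -1) / sqrt(3).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.column_stack([xy, xy.sum(axis=1)])

C = first_order_covariance(pts)
# eigh returns eigenvalues in ascending order; the eigenvector of the
# smallest eigenvalue is the least-squares normal direction.
evals, evecs = np.linalg.eigh(C)
```

For exactly planar data the smallest eigenvalue vanishes (up to floating-point error), which is the degenerate case of the "oriented ellipsoid" interpretation discussed below.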

The eigenvalues and eigenvectors of C_I determine three orthogonal vectors, two of which define a plane whose orientation minimizes, in the least-squares sense, the squared orthogonal projection distances of all n points onto the plane. Liang and Todhunter [1] originally proposed this plane as a reasonable approximation to the surface tangent plane and, so long as the two eigenvectors are chosen to preserve the "sidedness" of the surface points, it can be viewed as an analogue to it. That is, the eigenvectors corresponding to the two largest eigenvalues (\lambda_1 and \lambda_2) form this "tangent plane" if "sidedness" is preserved; otherwise the plane must be formed by the eigenvectors corresponding to eigenvalues \lambda_2 and \lambda_3. This "sidedness"-preserving, or topology-preserving, principle can be seen as a replacement for surface parameterization. It is important to note that the eigenvalues of C_I are already invariant to rigid motions, and they also define the aspect ratios of oriented ellipsoids which determine the strengths of directional information in the neighborhood of a pixel. As we will see, from a geometric perspective such eigenvalues and eigenvectors define surface types while, from an intensity image perspective, they can be used to infer edges, corners and other local shape structures in an image. First, the geometric interpretation.

2 Local geometries

From a Differential Geometry perspective, C_I defines local tangent planes and surface normals (defining the first fundamental form) in the least-squares sense. From this an analogous definition of the second fundamental form follows. We define, about a point \mathbf{x}_0 on a surface, a two-dimensional covariance matrix in the following manner:

C_{II} = \frac{1}{n} \sum_{i=1}^{n} (\mathbf{y}_i - \bar{\mathbf{y}})(\mathbf{y}_i - \bar{\mathbf{y}})^T,   (2)

with the two-dimensional vectors \mathbf{y}_i defined by the projection:

\mathbf{y}_i = s_i \, (\mathbf{x}_i - \mathbf{x}_0)_{proj},   (3)

where \mathbf{x}_i is in a small neighborhood of \mathbf{x}_0. We project the difference vector which points from \mathbf{x}_0 to \mathbf{x}_i onto the "tangent plane" as determined by Eqn. (1), and weight the resulting two-dimensional vector by the distance s_i, which measures the orthogonal distance from the "tangent plane" to the point \mathbf{x}_i. Given the two tangent vectors \mathbf{t}_1 and \mathbf{t}_2 and the normal vector \mathbf{n} at the point \mathbf{x}_0 derived from Eqn. (1), we may write \mathbf{y}_i explicitly as:

\mathbf{y}_i = s_i \begin{pmatrix} (\mathbf{x}_i - \mathbf{x}_0) \cdot \mathbf{t}_1 \\ (\mathbf{x}_i - \mathbf{x}_0) \cdot \mathbf{t}_2 \end{pmatrix}.   (4)
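To make Eqns. (2)-(4) concrete, here is a small sketch (our own illustration; the function name and the test surface are hypothetical) that builds the weighted in-plane projections y_i and the second-order covariance C_II for points sampled from a paraboloid whose tangent plane at the origin is the xy-plane:

```python
import numpy as np

def second_order_covariance(points, x0, t1, t2, normal):
    """C_II (Eqn. 2): covariance of the in-plane projections y_i of the
    neighbourhood points, each weighted by its signed orthogonal
    distance s_i from the estimated tangent plane (Eqns. 3-4)."""
    d = points - x0
    s = d @ normal                                       # distances s_i
    y = s[:, None] * np.column_stack([d @ t1, d @ t2])   # Eqn. (4)
    dy = y - y.mean(axis=0)
    return dy.T @ dy / len(points)                       # 2 x 2 matrix

# Surface z = x^2 + 2*y^2: curvature is stronger along y, so C_II
# should be anisotropic, with the larger diagonal entry for y.
rng = np.random.default_rng(1)
xy = rng.uniform(-0.5, 0.5, size=(2000, 2))
pts = np.column_stack([xy, xy[:, 0]**2 + 2.0 * xy[:, 1]**2])

C2 = second_order_covariance(pts,
                             np.zeros(3),
                             np.array([1., 0., 0.]),   # t1
                             np.array([0., 1., 0.]),   # t2
                             np.array([0., 0., 1.]))   # normal
```

The eigenvectors of C2 then approximate the principal directions in the sense defined below.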

We can then define the quadratic form

II_C = \mathbf{v}^T C_{II} \mathbf{v}   (5)

as a "second fundamental form" based on covariance methods, with the covariance matrix defined according to Eqn. (2) and a chosen unit vector \mathbf{v} in the tangent plane. Analogous to classical computations of surface geometry, we define the principal directions as the directions given by the eigenvectors of this covariance matrix. The eigenstructures of C_{II} capture how the surface points in the neighbourhood of a pixel depart from the estimated tangent plane. Further to this, we have an analogous operator to the Gauss map of classical Differential Geometry [2]. That is, we can determine how the estimated surface normals in the neighborhood of a pixel project onto the estimated tangent plane via the two-dimensional covariance matrix:

C_G = \frac{1}{n} \sum_{i=1}^{n} \mathbf{v}_i \mathbf{v}_i^T,   (6)

with the two-dimensional vectors \mathbf{v}_i defined by the projection:

\mathbf{v}_i = \begin{pmatrix} \mathbf{n}_i \cdot \mathbf{t}_1 \\ \mathbf{n}_i \cdot \mathbf{t}_2 \end{pmatrix},   (7)

where \mathbf{t}_1, \mathbf{t}_2 are the estimated tangent vectors obtained from Eqn. (1) and assigned to the point (x, y, z) at the center of the current window, and \mathbf{n}_i is the normal vector obtained by Eqn. (1) at a position (x_i, y_i, z_i) in the neighborhood of (x, y, z). In all, then, the operators C_I, C_{II} and C_G determine local geometric characteristics of surfaces without the use of Calculus, with the constraint that such eigenstructures correspond to least-squares estimates of orientation fields of different orders. They can be used to identify different surface types - including discontinuities - in the following ways (using appropriate thresholds, see [2]):

Jump-edge detection: The covariance matrix C_I, according to Eqn. (1), is calculated in a 5 x 5 pixel neighborhood at each pixel of the range image. Pixels whose maximal eigenvalue is larger than a certain threshold are labeled as jump. The eigenvectors calculated in this stage are used for the next step.
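The Gauss-map operator of Eqns. (6)-(7) can be sketched as follows (our own minimal illustration; the normalisation by 1/n and all names are assumptions). For a planar patch the neighbourhood normals are constant and parallel to the window normal, so their tangent-plane projections vanish and C_G is the zero matrix; for a curved patch the normals tilt away and C_G acquires a non-trivial eigenstructure, which is why its maximal eigenvalue serves as a curvature/crease indicator:

```python
import numpy as np

def gauss_map_covariance(normals, t1, t2):
    """C_G (Eqn. 6): covariance of the neighbourhood normals n_i
    projected onto the estimated tangent plane (Eqn. 7)."""
    v = np.column_stack([normals @ t1, normals @ t2])  # Eqn. (7)
    return v.T @ v / len(normals)

t1 = np.array([1., 0., 0.])
t2 = np.array([0., 1., 0.])

# Planar patch: identical normals, zero projected spread.
flat_normals = np.tile([0., 0., 1.], (25, 1))
C_flat = gauss_map_covariance(flat_normals, t1, t2)

# Curved patch: normals rotate in the xz-plane across the window.
angles = np.linspace(-0.3, 0.3, 25)
curved_normals = np.column_stack(
    [np.sin(angles), np.zeros_like(angles), np.cos(angles)])
C_curved = gauss_map_covariance(curved_normals, t1, t2)
```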
Crease-edge detection: The covariance matrix C_G, according to Eqn. (6), is calculated in a 5 x 5 pixel neighborhood at each pixel of the range image. The maximal eigenvalue of this covariance matrix is utilized as a crease-edge detector. Pixels not already labeled as jump and whose maximal eigenvalue is larger than a certain threshold are labeled as crease.

Region segmentation: Pixels labeled as neither jump nor crease are labeled as planar, parabolic (developable) or curved. To do so, the covariance matrix C_I according to Eqn. (1) is calculated in a 7 x 7 pixel neighborhood at each pixel of the range image. The eigenvectors of this covariance matrix are again used as input for the next stage, which calculates the covariance matrix C_G according to Eqn. (6) from the projected normal vectors in a 7 x 7 pixel neighborhood, where only pixels which have not so far been labeled as jump or crease are taken into account. A threshold is applied to each of the two eigenvalues of this covariance matrix. Pixels with both eigenvalues smaller than their thresholds are labeled as planar. Pixels whose smaller eigenvalue is below its threshold but whose larger eigenvalue is beyond the corresponding threshold are labeled as parabolic. Pixels not meeting the above conditions are assigned the label curved.

We have obtained experimental results for the proposed segmentation technique using many synthetic range images, one of which is shown in Figure 1a. The range images have been neither filtered nor preprocessed in any way. For comparison, we have computed surface Mean and Gaussian curvatures using the biquadratic algebraic surface form proposed in [2], and have also used the jump- and crease-edge detection technique presented in [3]. Together these region and boundary detection methods are termed the Smoothed Differential Geometry, or SDG, method.

3 Image features

Viewing intensity images as surfaces is not that common, but it is natural to this covariance approach. Indeed, from this perspective, "edges", "corners" and "linear segments" can all be described by different surface types [5], including jumps, creases and non-planar region types. Further, we can even provide first- and second-order descriptions of "textural" features in terms of the associated eigenstructures of the covariance operators (Figure 1c). Figures 1b and 1c show examples of such interpretations. Lines were determined by first-order eigenvalue ratios, corners by second-order eigenvalue strengths.

4 Discussion

The aim of this paper was simply to expose and demonstrate the covariance approach to feature extraction.
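The decision sequence above (jump first, then crease, then planar/parabolic/curved) can be condensed into a single per-pixel rule. The sketch below is our own simplification with hypothetical names: lam_I_max is the maximal eigenvalue of C_I, and mu_small, mu_large are the two eigenvalues of the projected-normal covariance C_G; all thresholds are placeholders.

```python
def label_pixel(lam_I_max, mu_small, mu_large,
                t_jump, t_crease, t_small, t_large):
    """Simplified per-pixel labelling rule following Section 2.
    All threshold values are hypothetical placeholders."""
    if lam_I_max > t_jump:
        return "jump"        # large positional spread across the window
    if mu_large > t_crease:
        return "crease"      # strong variation of the surface normals
    if mu_small < t_small and mu_large < t_large:
        return "planar"      # normals nearly constant in both directions
    if mu_small < t_small:
        return "parabolic"   # normal variation in one direction only
    return "curved"          # normal variation in both directions
```

In the paper, the eigenvalues feeding the jump and crease stages come from 5 x 5 windows, while those for the region-segmentation stage come from 7 x 7 windows.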
It contrasts with more classical methods - in particular, differential and filter-based techniques - in two ways. One, there is no direct use of parameterization or differentiation. Two, there is an implicit optimization criterion: orthogonal least-squares approximation of linear structures of different orders. Furthermore, we have been able to define analogues of the classical Weingarten map (second fundamental form) and the Gauss map on the basis of covariance matrices. The eigenvalues of the covariance matrices are also invariant to rigid motion, since the computation operates in the tangent plane only. In addition, we have shown how the covariance method treats discontinuities in a very natural manner.

Finally, since covariance methods do not rely on a consistent local surface parameterization, the spectrum of the covariance matrix (the full set of eigenvalues) provides us with a type of smooth transformation between lines, surfaces and volumes. One non-zero eigenvalue defines merely linear components, while three equal eigenvalues correspond to data of uniform density in a volume about a point - analogous to fractal dimensionality. It should also be noted that covariance methods provide ideal ways of treating signals embedded in additive Gaussian or white noise: in these cases the total covariance matrix decomposes into the sum of the signal and noise covariance matrices. These covariance models also provide linear approximations to the local correlations or dependencies between pixels. Consequently, they are related to techniques for modeling the observed (local) pixel "cliques" or "support" kernels defined in Markov Random Fields and determined using Hidden Markov Models. However, space prohibits more detailed analysis of these connections.

5 References

[1] Liang, P. and Todhunter, J. (1990) Representation and recognition of surface shapes in range images. Computer Vision, Graphics and Image Processing, 52, 78-109.
[2] Berkmann, J. and Caelli, T. (1994) Computation of surface geometry and segmentation using covariance techniques. IEEE Trans. Pattern Analysis and Machine Intelligence (in press).
[3] Besl, P. and Jain, R. (1986) Invariant surface characteristics for 3D object recognition in range images. Computer Vision, Graphics and Image Processing, 33, 33-80.
[4] Yokoya, N. and Levine, M. (1989) Range image segmentation based on differential geometry: a hybrid approach. IEEE Trans. Pattern Analysis and Machine Intelligence, 11, 6, 643-649.
[5] Barth, E., Caelli, T. and Zetzsche, C. (1993) Image encoding, labeling, and reconstruction from differential geometry. Computer Vision, Graphics and Image Processing: Graphical Models and Image Processing, 55, 6, 428-446.

Figure 1 - Following Page. Top: input range image; second row: jumps (white) and creases (grey) for the SDG (left) and covariance (right) methods; third row: region segments from the SDG (left) and covariance (right) methods. Middle: input textures (left) and rotation-invariant segmentation (right) based on first-order covariance eigenvalues and 2-class K-means clustering. Bottom: input intensity image (left), lines (centre), and corners (right) using first- and second-order covariance eigenvalues respectively.
