Texture segmentation: a case study. Jan Kybic

Assignment. Segment a microscopy image into the following classes: transverse fibers, longitudinal fibers, matrix, cracks.

Chosen method. Descriptor-based representation, supervised learning, maximum-likelihood (ML) classifier.

Class masks. Ground truth obtained by manual segmentation.

Descriptors
P1: gray value, $\mu = f * G_\sigma$, with $\sigma = 15$.
P2: local energy, $E = (f - \mu)^2 * G_\sigma$.
P3: directionality, $(E_h - E_v)/(E_h + E_v)$, where $E_h = (\partial_x f)^2 * G_\sigma$ and $E_v = (\partial_y f)^2 * G_\sigma$.
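
A minimal numpy/scipy sketch of these three descriptors, assuming f is a float grayscale image; the slide does not fix the derivative operator, so Gaussian derivative filters are used here as one reasonable choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_descriptors(f, sigma=15.0):
    """Per-pixel descriptors P1-P3 for a grayscale image f, returned as H x W x 3."""
    f = f.astype(float)
    mu = gaussian_filter(f, sigma)                    # P1: smoothed gray value
    energy = gaussian_filter((f - mu) ** 2, sigma)    # P2: local energy
    # P3: directionality from smoothed squared partial derivatives.
    fx = gaussian_filter(f, 1.0, order=(0, 1))        # d/dx (along columns)
    fy = gaussian_filter(f, 1.0, order=(1, 0))        # d/dy (along rows)
    Eh = gaussian_filter(fx ** 2, sigma)
    Ev = gaussian_filter(fy ** 2, sigma)
    direction = (Eh - Ev) / (Eh + Ev + 1e-12)         # in [-1, 1]
    return np.stack([mu, energy, direction], axis=-1)
```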

Descriptors (2): motivation. P1 (gray value) separates the matrix from the cracks. P2 (local energy) separates the fibers from the other classes. P3 (directionality) separates the two fiber orientations.

Descriptors (3): histograms. [Figure: per-class histograms of Attribute 0 (gray value, 0-250), Attribute 1 (local energy, 0-6000), and Attribute 2 (directionality, -1 to 1), for classes 0-3.]

Statistical description. The probability $p(x \mid T_i)$ of the descriptor vector given the class is assumed to be normal. The components of the descriptor vector are assumed to be independent, so each class is described simply by a mean $\bar{x}_i$ and a standard deviation $\sigma_i$.

Classification. We assume equal prior class probabilities $p(T_j)$. The ML classification of an unknown pixel with descriptor vector $x$ can then be expressed as $j_{ML} = \arg\min_j \|(x - \bar{x}_j)/\sigma_j\|$, where the division is taken element-wise.
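
A sketch of this decision rule in numpy; the function names are illustrative, not from the slides. fit_classes estimates the per-class means and standard deviations from labeled training pixels:

```python
import numpy as np

def fit_classes(X_train, y_train, n_classes):
    """Per-class mean and std of each descriptor (independence assumption)."""
    means = np.array([X_train[y_train == j].mean(axis=0) for j in range(n_classes)])
    stds = np.array([X_train[y_train == j].std(axis=0) for j in range(n_classes)])
    return means, stds

def classify_ml(X, means, stds):
    """Slide's rule: j_ML = arg min_j ||(x - mean_j) / std_j||, element-wise division."""
    z = (X[:, None, :] - means[None, :, :]) / stds[None, :, :]   # (N, C, D)
    return np.argmin(np.linalg.norm(z, axis=-1), axis=-1)        # (N,)
```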

Results

Possible improvements
- A larger training set.
- More descriptors, with their parameters optimized based on the classification results.
- Modified descriptor computation, so that a smaller neighborhood suffices and the results are not affected by image borders and texture boundaries.
- Introducing a "don't know" class.
- Better modeling of the joint probabilities, dropping the independence and normality assumptions.
- A more general classifier design; non-parametric classifiers.
- Using prior information from neighboring pixels, e.g. via Markov random fields.
- Iterative segmentation, where the descriptor computation is driven by the classification results of the previous iteration.

Texture classification using Haralick descriptors
Task: for a texture patch, find its class.
- Training and test images
- Calculate descriptors (features) for each patch
- Feature selection
- Classification

Co-occurrence matrices
$P_{0,d}(a, b) = |\{[(k,l),(m,n)] \in D : k - m = 0,\ |l - n| = d,\ f(k,l) = a,\ f(m,n) = b\}|$
$P_{45,d}(a, b) = |\{[(k,l),(m,n)] \in D : (k - m = d,\ l - n = -d) \lor (k - m = -d,\ l - n = d),\ f(k,l) = a,\ f(m,n) = b\}|$
$P_{90,d}(a, b) = |\{[(k,l),(m,n)] \in D : |k - m| = d,\ l - n = 0,\ f(k,l) = a,\ f(m,n) = b\}|$
$P_{135,d}(a, b) = |\{[(k,l),(m,n)] \in D : (k - m = d,\ l - n = d) \lor (k - m = -d,\ l - n = -d),\ f(k,l) = a,\ f(m,n) = b\}|$
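
A sketch of the co-occurrence counting in numpy for one displacement; f is assumed to be an integer image with values in [0, levels), and e.g. offset = (0, d) corresponds to $\varphi = 0$, offset = (-d, d) to $\varphi = 45$:

```python
import numpy as np

def cooccurrence(f, offset, levels):
    """Symmetric co-occurrence counts P(a, b) for displacement (dk, dl)."""
    dk, dl = offset
    H, W = f.shape
    # Crop so that both (k, l) and (k + dk, l + dl) fall inside the image.
    a = f[max(0, -dk):H - max(0, dk), max(0, -dl):W - max(0, dl)]
    b = f[max(0, dk):H + min(0, dk), max(0, dl):W + min(0, dl)]
    P = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(P, (a.ravel(), b.ravel()), 1)   # histogram of gray-level pairs
    return P + P.T   # count both orderings, matching the |.| in the definitions
```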

Haralick descriptors
Energy: $\sum_{a,b} P_{\varphi,d}^2(a, b)$.
Entropy: $-\sum_{a,b} P_{\varphi,d}(a, b) \log_2 P_{\varphi,d}(a, b)$.
Contrast (typically $\kappa = 2$, $\lambda = 1$): $\sum_{a,b} |a - b|^\kappa P_{\varphi,d}^\lambda(a, b)$.
Inverse difference moment (homogeneity): $\sum_{a,b;\, a \neq b} P_{\varphi,d}^\lambda(a, b) / |a - b|^\kappa$.
Correlation: $\left[\sum_{a,b} (ab) P_{\varphi,d}(a, b) - \mu_x \mu_y\right] / (\sigma_x \sigma_y)$, where $\mu_x$, $\mu_y$ are the means and $\sigma_x$, $\sigma_y$ the standard deviations of the row and column marginals.
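
As a sketch, the five descriptors computed from one co-occurrence matrix, which is normalized to a probability distribution first:

```python
import numpy as np

def haralick(P, kappa=2, lam=1):
    """Energy, entropy, contrast, inverse difference moment, correlation."""
    P = P / P.sum()                      # normalize counts to probabilities
    a, b = np.indices(P.shape)
    nz = P > 0                           # avoid log2(0) in the entropy
    energy = np.sum(P ** 2)
    entropy = -np.sum(P[nz] * np.log2(P[nz]))
    contrast = np.sum(np.abs(a - b) ** kappa * P ** lam)
    off = a != b                         # a = b is excluded from the IDM sum
    idm = np.sum(P[off] ** lam / np.abs(a - b)[off] ** kappa)
    # Correlation from the marginal means and standard deviations.
    pa, pb = P.sum(axis=1), P.sum(axis=0)
    i = np.arange(P.shape[0])
    mu_x, mu_y = i @ pa, i @ pb
    sd_x = np.sqrt(i ** 2 @ pa - mu_x ** 2)
    sd_y = np.sqrt(i ** 2 @ pb - mu_y ** 2)
    corr = (np.sum(a * b * P) - mu_x * mu_y) / (sd_x * sd_y)
    return np.array([energy, entropy, contrast, idm, corr])
```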

Preprocessing and classification
- Distances $d \in \{1, \dots, 10\}$, angles $\varphi \in \{0°, 45°, 90°, 135°\}$ (symmetry is taken into account).
- 5 Haralick descriptors; 202 features in total ($d = 0$ is treated specially).
- Features normalized to zero mean and unit standard deviation.
- Select the 10 best features (by the ratio of intra-class to inter-class variance).
- Find Gaussian parameters $(\mu, \Sigma)$ for each class.
- Maximum-probability normal classifier. (A sketch of these steps follows.)
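
A sketch of the feature selection and the normal classifier; the exact variance-ratio formula is not spelled out on the slide, so a per-feature intra-class over inter-class variance is used here as one plausible reading:

```python
import numpy as np

def select_features(X, y, k=10):
    """Keep the k features with the smallest intra/inter-class variance ratio."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    intra = np.mean([X[y == c].var(axis=0) for c in classes], axis=0)
    inter = np.mean([(X[y == c].mean(axis=0) - mu) ** 2 for c in classes], axis=0)
    return np.argsort(intra / (inter + 1e-12))[:k]

def fit_gaussians(X, y):
    """Gaussian parameters (mu, Sigma) for each class."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def classify(x, params):
    """Maximum-probability normal classifier: maximize the Gaussian log-density."""
    def logp(mu, S):
        d = x - mu
        return -0.5 * (d @ np.linalg.solve(S, d) + np.log(np.linalg.det(S)))
    return max(params, key=lambda c: logp(*params[c]))
```

X is assumed already normalized to zero mean and unit standard deviation, e.g. X = (X - X.mean(0)) / X.std(0) with statistics taken from the training set.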

Brodatz textures. 10 classes. Each image is cut into 36 non-overlapping patches of 100 x 100 pixels, randomly divided into training and test sets.
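
A small sketch of this patch preparation, assuming each Brodatz class image is at least 600 x 600 pixels; brodatz_img is a hypothetical loaded image:

```python
import numpy as np

def make_patches(img, size=100):
    """Cut non-overlapping size x size patches (36 for a 600 x 600 image)."""
    H, W = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, H - size + 1, size)
            for c in range(0, W - size + 1, size)]

rng = np.random.default_rng(0)
# patches = make_patches(brodatz_img)   # brodatz_img: one class image (assumed loaded)
# idx = rng.permutation(len(patches))   # random training / test split
# train_idx, test_idx = idx[:18], idx[18:]
```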

Results. Confusion matrix: row $i$, column $j$ holds the number of patches from class $j$ classified as class $i$.

        1   2   3   4   5   6   7   8   9  10
  1    21   0   0   0   0   0   0   0   2   0
  2     0  16   0   0   0   0   4   0   0   0
  3     0   0  11   0   0   0   0   0   0   0
  4     0   0   0  18   0   0   0   0   0   0
  5     0   0   0   0  12   0   0   0   0   0
  6     0   0   0   0   0  15   0   0   0   0
  7     0   1   0   0   0   0  16   0   0   0
  8     0   0   0   0   0   0   0  22   0   0
  9     0   0   0   0   0   0   2   0  17   0
 10     0   0   4   0   0   0   0   0   0  19

Total classification accuracy: 94%.

Texture classification using wavelet descriptors
Non-decimated wavelet coefficients (Unser, 1989): the discrete wavelet frame (DWF).
Descriptors = wavelet energy signature
$f = [\|s_L\|_2, \|d_L\|_2, \|d_{L-1}\|_2, \dots, \|d_1\|_2]$
$s_i(l) = 2^{-i/2} \sum_k h_i(k - 2^i l)\, x(k)$, $d_i(l) = 2^{-i/2} \sum_k g_i(k - 2^i l)\, x(k)$
10 descriptors; feature selection is not needed.
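
A sketch of the energy signature, assuming PyWavelets (pywt) is available; its stationary wavelet transform swt2 is used here as a stand-in for the non-decimated frame, and the patch is padded so its sides are divisible by 2**level:

```python
import numpy as np
import pywt

def wavelet_signature(patch, wavelet="haar", level=3):
    """L2 norm of each undecimated sub-band: 3*level + 1 = 10 features."""
    x = patch.astype(float)
    m = 2 ** level                        # swt2 needs sides divisible by 2**level
    x = np.pad(x, [(0, (-s) % m) for s in x.shape], mode="symmetric")
    coeffs = pywt.swt2(x, wavelet, level=level)    # deepest level first
    feats = [np.linalg.norm(c)
             for _, (cH, cV, cD) in coeffs for c in (cH, cV, cD)]
    feats.append(np.linalg.norm(coeffs[0][0]))     # coarsest approximation band
    return np.array(feats)
```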

Calculating wavelet frames
Two-scale relation: recursive calculation (fast)
$s_{i+1}(k) = ([h]_{\uparrow 2^i} * s_i)(k)$
$d_{i+1}(k) = ([g]_{\uparrow 2^i} * s_i)(k)$
Haar filters: $H(z) = (1 + z)/\sqrt{2}$, $h = \frac{1}{\sqrt{2}}[1\ 1]$; $G(z) = (z - 1)/\sqrt{2}$, $g = \frac{1}{\sqrt{2}}[-1\ 1]$
2D extension: at each level, create 4 sub-bands by filtering with $H_x H_y$, $H_x G_y$, $G_x H_y$, $G_x G_y$. Thanks to separability, only 6 one-dimensional filterings are needed. The low-pass band $H_x H_y$ is used as input to the next level.
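
A direct numpy/scipy sketch of this recursion (the a trous scheme) for 2D images, following the slide's filter layout; the filters at level i are upsampled by inserting 2**i - 1 zeros:

```python
import numpy as np
from scipy.ndimage import convolve1d

def haar_frame_2d(x, levels=3):
    """Non-decimated 2D Haar frame: 3*levels detail bands + 1 low-pass band."""
    s = x.astype(float)
    subbands = []
    for i in range(levels):
        h = np.zeros(2 ** i + 1); h[0] = h[-1] = 1 / np.sqrt(2)   # [h] upsampled
        g = np.zeros(2 ** i + 1); g[0], g[-1] = -1 / np.sqrt(2), 1 / np.sqrt(2)
        sh = convolve1d(s, h, axis=1, mode="reflect")             # H_x
        sg = convolve1d(s, g, axis=1, mode="reflect")             # G_x
        hh = convolve1d(sh, h, axis=0, mode="reflect")            # H_x H_y
        hg = convolve1d(sh, g, axis=0, mode="reflect")            # H_x G_y
        gh = convolve1d(sg, h, axis=0, mode="reflect")            # G_x H_y
        gg = convolve1d(sg, g, axis=0, mode="reflect")            # G_x G_y
        subbands += [hg, gh, gg]    # 6 one-dimensional filterings per level
        s = hh                      # low-pass band feeds the next level
    return subbands + [s]
```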

Results. Confusion matrix: row $i$, column $j$ holds the number of patches from class $j$ classified as class $i$.

        1   2   3   4   5   6   7   8   9  10
  1    20   0   0   0   0   0   0   0   1   0
  2     0  17   0   0   0   0   0   0   0   0
  3     0   0  15   0   0   0   0   0   0   0
  4     0   0   0  18   0   0   0   0   0   0
  5     0   0   0   0  12   0   0   0   0   0
  6     0   0   0   0   0  15   0   0   0   0
  7     0   0   0   0   0   0  22   0   0   0
  8     0   0   0   0   0   0   0  22   3   0
  9     1   0   0   0   0   0   0   0  15   0
 10     0   0   0   0   0   0   0   0   0  19

Total classification accuracy: 97%.

Texture segmentation using wavelet descriptors
Task: segment an image into regions based on texture.
Wavelet signature over a small window -> a descriptor vector for each pixel.
Training image -> pdf for each class.
Segmentation algorithm (region growing, GraphCut) -> most likely class + spatial smoothness.
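
A sketch of the per-pixel decision step under the same diagonal-Gaussian model as in the case study; the per-pixel feature stack (e.g. locally averaged frame energies) and the fitted class parameters are assumed given, and the spatial-smoothness step (region growing, GraphCut) is omitted:

```python
import numpy as np

def segment(features, class_params):
    """Pixel-wise most likely class.
    features: (H, W, D) per-pixel wavelet signatures (local window averages);
    class_params: {label: (mean, std)} estimated on the training image."""
    H, W, D = features.shape
    X = features.reshape(-1, D)
    labels = list(class_params)
    cost = np.stack([np.sum(((X - m) / s) ** 2, axis=1)
                     for m, s in (class_params[c] for c in labels)], axis=1)
    return np.array(labels)[np.argmin(cost, axis=1)].reshape(H, W)
```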

Example. [Figure: training image, test image, and segmentation result; the shared color scale (1-4) indicates class labels.]