CS 556: Computer Vision. Lecture 21

CS 556: Computer Vision, Lecture 21. Prof. Sinisa Todorovic, sinisa@eecs.oregonstate.edu

Meanshift

Meanshift Clustering
Assumption: there is an underlying pdf governing the data in R^d. Clustering is based on estimating that underlying pdf, and each cluster corresponds to one mode of the pdf.
[Figure: assumed pdf vs. real data samples]

Parametric Density Estimation
Assumption: the functional form of the pdf is known, and only its parameters are estimated from the data. Example: a mixture of Gaussians,
$\hat{P}(x) = \sum_{k=1}^{K} C_k \exp\left( -\tfrac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) \right)$
[Figure: assumed pdf vs. real data samples]
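To make the parametric assumption concrete, here is a minimal NumPy sketch (not from the lecture; the weights, means, and covariances below are hypothetical) that evaluates such a mixture-of-Gaussians pdf at a point x:

```python
import numpy as np

def gmm_pdf(x, weights, means, covs):
    """Evaluate sum_k C_k * exp(-0.5 * (x - mu_k)^T Sigma_k^{-1} (x - mu_k)) at x."""
    p = 0.0
    for C_k, mu_k, Sigma_k in zip(weights, means, covs):
        d = x - mu_k
        p += C_k * np.exp(-0.5 * d @ np.linalg.inv(Sigma_k) @ d)
    return p

# Hypothetical 2D example with two components
weights = [0.6, 0.4]
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), 0.5 * np.eye(2)]
print(gmm_pdf(np.array([1.0, 1.0]), weights, means, covs))
```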

Nonparametric Density Estimation
Given data samples x_n, estimate the pdf P(x) at some value x by counting the data samples x_n that fall around x:
$\hat{P}(x) = \frac{1}{N} \sum_{n=1}^{N} K(x - x_n)$
where K is a kernel placed on the data.
[Figure: data samples and the kernel]
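A minimal sketch of this estimator, assuming the samples are stored as rows of a NumPy array and the kernel K is passed in as a function (names and data are illustrative, not from the lecture):

```python
import numpy as np

def kde_estimate(x, data, kernel):
    """Kernel density estimate: P(x) = (1/N) * sum_n K(x - x_n)."""
    N = len(data)
    return sum(kernel(x - x_n) for x_n in data) / N

# Example with a Gaussian kernel (hypothetical data):
# data = np.random.randn(100, 2)
# p = kde_estimate(np.zeros(2), data, lambda u: np.exp(-0.5 * u @ u))
```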

Common Kernels
Epanechnikov: $K_E(x) = c\,(1 - \|x\|^2)$ for $\|x\| \le 1$, and $0$ otherwise.
Gaussian: $K_G(x) = \exp\left( -\tfrac{1}{2} \|x\|^2 \right)$
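Both kernels can be written down directly; in the sketch below the Epanechnikov constant c is left as a parameter, since the slide does not fix its value:

```python
import numpy as np

def epanechnikov_kernel(x, c=1.0):
    """K_E(x) = c * (1 - ||x||^2) for ||x|| <= 1, and 0 otherwise."""
    r2 = float(np.dot(x, x))
    return c * (1.0 - r2) if r2 <= 1.0 else 0.0

def gaussian_kernel(x):
    """K_G(x) = exp(-||x||^2 / 2)."""
    return float(np.exp(-0.5 * np.dot(x, x)))

# Either kernel can be plugged into kde_estimate from the sketch above.
```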

Meanshift Clustering
Goal: identify the modes, so there is no need to estimate the pdf itself. Meanshift estimates the gradient of the pdf and clusters together the data that lie in the attraction basin of the same mode.

Meanshift Clustering
[Figure sequence: the meanshift iterations. A region of interest (kernel window) is placed over the data, its center of mass is computed, and the window is translated by the mean shift vector toward the center of mass; this repeats until the window settles on a mode.]

Kernel Density Gradient Estimation
$\hat{P}(x) = \frac{1}{N} \sum_{n} K(x - x_n), \qquad \nabla \hat{P}(x) = \frac{1}{N} \sum_{n} \nabla K(x - x_n)$
For a radially symmetric kernel with profile f, $K(x - x_n) = f\!\left( \left\| \tfrac{x - x_n}{h} \right\|^2 \right)$, each gradient term becomes $\nabla K(x - x_n) = g_n \, (x_n - x)$, where $g(x) = -f'(x)$ and $g_n$ denotes g evaluated at $\left\| \tfrac{x - x_n}{h} \right\|^2$. Hence
$\nabla \hat{P}(x) = \frac{1}{N} \sum_{n} g_n (x_n - x) = \frac{1}{N} \left( \sum_{n} g_n \right) \left[ \frac{\sum_n g_n x_n}{\sum_n g_n} - x \right]$

Meanshift -- Algorithm Steps
The bracketed term above is the meanshift vector:
$m(x) = \frac{\sum_n g_n x_n}{\sum_n g_n} - x, \qquad \nabla \hat{P}(x) = \frac{1}{N} \left( \sum_n g_n \right) m(x)$
1. Compute the meanshift vector m(x).
2. Translate the kernel window by m(x).
3. Repeat steps 1-2 until convergence.
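Putting the three steps together, here is a minimal sketch of one mean-shift run from a starting point x, using a Gaussian profile for the weights g_n (the bandwidth h, tolerance, and iteration cap are illustrative choices, not values from the lecture):

```python
import numpy as np

def mean_shift_mode(x, data, h=1.0, tol=1e-5, max_iter=500):
    """Follow the meanshift vector m(x) from a starting point x until convergence."""
    for _ in range(max_iter):
        # Weights g_n from a Gaussian profile, g(r) proportional to exp(-r/2),
        # evaluated at r = ||(x - x_n)/h||^2.
        g = np.exp(-0.5 * np.sum(((data - x) / h) ** 2, axis=1))
        m = (g[:, None] * data).sum(axis=0) / g.sum() - x  # meanshift vector m(x)
        x = x + m                                           # translate the kernel window
        if np.linalg.norm(m) < tol:                         # stop once the shift is tiny
            break
    return x

# Clustering use: run mean_shift_mode from every sample and group samples whose
# returned modes (nearly) coincide; each group is one attraction basin / cluster.
```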

Experimental Results -- Meanshift Clustering
[Figure: input data; colors denote distinct clusters.]

Results -- Meanshift Clustering
[Figure: input image and its pixel values in the LUV color space.]

Results -- Meanshift Clustering (continued)
[Figure: input; clustering using only the L and U channels; the final result; trajectories within the attraction basins.]

Results -- Meanshift Segmentation

Results -- Comparison: Meanshift vs. Ncuts
[Figure: input image, Ncuts result, Meanshift result.]