Nonparametric Methods Lecture 5

Nonparametric Methods, Lecture 5
Jason Corso, SUNY at Buffalo, 17 Feb. 2009

Overview

Previously, we've assumed that the forms of the underlying densities were of some particular known parametric form. But what if this is not the case? Indeed, for most real-world pattern recognition scenarios this assumption is suspect. For example, most real-world entities have multimodal distributions, whereas all classical parametric densities are unimodal.

We will examine nonparametric procedures that can be used with arbitrary distributions and without the assumption that the underlying forms of the densities are known:
Histograms.
Kernel Density Estimation / Parzen Windows.
k-Nearest Neighbor Density Estimation.
A real example in figure-ground segmentation.

Histograms

[Figure: example histograms of a joint distribution p(X, Y) with Y taking two values, together with the marginals p(X) and p(Y) and the conditional p(X | Y = 1).]

Histogram Density Representation

Consider a single continuous variable x, and let's say we have a set D of N observations of it, {x_1, ..., x_N}. Our goal is to model p(x) from D.

Standard histograms simply partition x into distinct bins of width \Delta_i and then count the number n_i of observations falling into bin i.

To turn this count into a normalized probability density, we simply divide by the total number of observations N and by the width \Delta_i of the bins. This gives us

p_i = \frac{n_i}{N \Delta_i}   (1)

Hence the model for the density p(x) is constant over the width of each bin. (Often the bins are chosen to have the same width \Delta_i = \Delta.)
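The estimator in Eq. (1) is easy to state in code. Below is a minimal NumPy sketch (not from the lecture); the two-Gaussian sample used to exercise it is made up for illustration.

```python
import numpy as np

def histogram_density(data, num_bins, limits):
    """Histogram density estimate: p_i = n_i / (N * Delta_i) for each bin i."""
    data = np.asarray(data)
    edges = np.linspace(limits[0], limits[1], num_bins + 1)  # equal-width bins
    counts, _ = np.histogram(data, bins=edges)               # n_i
    widths = np.diff(edges)                                  # Delta_i
    return counts / (data.size * widths), edges              # p_i

# Illustrative data: 100 draws from a two-Gaussian mixture on [0, 1]
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.3, 0.1, 50), rng.normal(0.7, 0.1, 50)])
density, edges = histogram_density(samples, num_bins=25, limits=(0.0, 1.0))
print(density.sum() * np.diff(edges)[0])   # integrates to (approximately) 1
```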

[Figure: a simple histogram illustration showing bin numbers, their counts, and the common bin width \Delta.]

Histogram Density as a Function of Bin Width

[Figure: histogram density estimates of the same data set for bin widths \Delta = 0.04, 0.08, and 0.25, with the true density overlaid in green.]

The green curve is the underlying true density from which the samples were drawn. It is a mixture of two Gaussians.

When \Delta is very small (top), the resulting density is quite spiky and hallucinates a lot of structure not present in p(x).

When \Delta is very big (bottom), the resulting density is quite smooth and consequently fails to capture the bimodality of p(x).

It appears that the best results are obtained for some intermediate value of \Delta, which is shown in the middle figure.

In principle, a histogram density model also depends on the choice of the edge location of each bin.

Analyzing the Histogram Density

What are the advantages and disadvantages of the histogram density estimator?

Advantages:
Simple to evaluate and simple to use.
One can throw away D once the histogram is computed.
Can be computed sequentially if data continue to come in.

Disadvantages:
The estimated density has discontinuities due to the bin edges rather than any property of the underlying density.
Scales poorly (curse of dimensionality): we would have M^D bins if we divided each variable in a D-dimensional space into M bins.

What Can We Learn from Histogram Density Estimation?

Lesson 1: To estimate the probability density at a particular location, we should consider the data points that lie within some local neighborhood of that point. This requires that we define some distance measure. There is a natural smoothness parameter describing the spatial extent of the regions (this was the bin width for the histograms).

Lesson 2: The value of the smoothing parameter should be neither too large nor too small in order to obtain good results.

With these two lessons in mind, we proceed to kernel density estimation and nearest neighbor density estimation, two closely related methods for density estimation.

Kernel Density Estimation: The Space-Averaged / Smoothed Density

Consider again samples x drawn from an underlying density p(x). Let R denote a small region containing x.

The probability mass associated with R is given by

P = \int_R p(x') \, dx'   (2)

Suppose we have n samples x in D. The probability of each sample falling into R is P. How will the total number k of points falling into R be distributed?

This will be a binomial distribution:

P_k = \binom{n}{k} P^k (1 - P)^{n-k}   (3)

The Space-Averaged / Smoothed Density

The expected value for k is thus

E[k] = nP   (4)

The binomial distribution for k peaks very sharply about the mean. So we expect k/n to be a very good estimate of the probability P (and thus of the space-averaged density).

This estimate is increasingly accurate as n increases.

[FIGURE 4.1 (Duda, Hart, and Stork): the relative probability that an estimate given by Eq. (4) will yield a particular value of k/n, shown for several sample sizes; the curves peak sharply at k/n = P = 0.7.]

The Space-Averaged / Smoothed Density

Assuming p(x) is continuous and that R is so small that p(x) does not vary appreciably within it, we can write

\int_R p(x') \, dx' \approx p(x) V   (5)

where x is a point within R and V is the volume enclosed by R.

After some rearranging, we get the following estimate for p(x):

p(x) \approx \frac{k}{nV}   (6)

Example

A simulated example: estimating the density at x = 0.5 for an underlying zero-mean, unit-variance Gaussian, varying the volume used in the estimate (Red = 1, Green = 2, Blue = 3, Yellow = 4, Black = 5).

[Figure: two plots of the estimate at x = 0.5, where the true value is p(x) = 0.352065, as the number of samples grows, one curve per volume.]
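A rough sketch of that kind of simulation follows; the interval half-widths below are arbitrary stand-ins, since the actual volumes behind the slide's colour coding are not recoverable from the transcript.

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = 0.5                                              # evaluation point
true_p = np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi)      # ~0.352065 for N(0, 1)

for n in (100, 1_000, 10_000, 100_000):
    samples = rng.standard_normal(n)
    for V in (1.0, 0.5, 0.1):                         # interval widths ("volumes" in 1-D), illustrative
        k = np.count_nonzero(np.abs(samples - x0) <= V / 2)
        print(f"n={n:7d}  V={V:.1f}  k/(nV)={k / (n * V):.4f}  true={true_p:.4f}")
```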

Practical Concerns

The validity of our estimate depends on two contradictory assumptions:
1. The region R must be sufficiently small that the density is approximately constant over the region.
2. The region R must be sufficiently large that the number k of points falling inside it is sufficient to yield a sharply peaked binomial.

Another way of looking at it is to fix the volume V and increase the number of training samples. Then the ratio k/n will converge as desired. But this will only yield an estimate of the space-averaged density P/V.

We want p(x), so we need to let V approach zero. However, with a fixed n, R will become so small that no points will fall into it, and our estimate p(x) \approx 0 would be useless.

Note that in practice we cannot let V become arbitrarily small, because the number of samples is always limited.

Practical Concerns

How can we skirt these limitations when an unlimited number of samples is available?

To estimate the density at x, form a sequence of regions R_1, R_2, ... containing x, with R_1 being used with 1 sample, R_2 with 2 samples, and so on.

Let V_n be the volume of R_n, k_n be the number of samples falling in R_n, and p_n(x) be the nth estimate for p(x):

p_n(x) = \frac{k_n}{n V_n}   (7)

If p_n(x) is to converge to p(x), we need the following three conditions:

\lim_{n \to \infty} V_n = 0   (8)
\lim_{n \to \infty} k_n = \infty   (9)
\lim_{n \to \infty} k_n / n = 0   (10)

Practical Concerns

\lim_{n \to \infty} V_n = 0 ensures that our space-averaged density will converge to p(x).

\lim_{n \to \infty} k_n = \infty basically ensures that the frequency ratio will converge to the probability P (the binomial will be sufficiently peaked).

\lim_{n \to \infty} k_n / n = 0 is required for p_n(x) to converge at all. It also says that although a huge number of samples will fall within the region R_n, they will form a negligibly small fraction of the total number of samples.

There are two common ways of obtaining regions that satisfy these conditions:

1. Shrink an initial region by specifying the volume V_n as some function of n, such as V_n = 1/\sqrt{n}. Then we need to show that p_n(x) converges to p(x). (This is like the Parzen window we'll talk about next.)

2. Specify k_n as some function of n, such as k_n = \sqrt{n}. Then we grow the volume V_n until it encloses k_n neighbors of x. (This is the k-nearest-neighbor approach.)

Both of these methods converge.

[FIGURE 4.2. There are two leading methods for estimating the density at a point, here at the center of each square. The one shown in the top row is to start with a large volume centered on the test point and shrink it according to a function such as V_n = 1/\sqrt{n}. The other method, shown in the bottom row, is to decrease the volume in a data-dependent way, for instance letting the volume enclose some number k_n = \sqrt{n} of sample points. The sequences in both cases represent random variables that generally converge and allow the true density at the test point to be calculated. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.]

Parzen Windows

Let's temporarily assume the region R is a d-dimensional hypercube with h_n being the length of an edge.

The volume of the hypercube is given by

V_n = h_n^d   (11)

We can derive an analytic expression for k_n. Define a window function

\varphi(u) = 1 if |u_j| \le 1/2 for j = 1, ..., d, and 0 otherwise   (12)

This window function \varphi defines a unit hypercube centered at the origin. Hence \varphi((x - x_i)/h_n) is equal to unity if x_i falls within the hypercube of volume V_n centered at x, and is zero otherwise.

Parzen Windows

The number of samples in this hypercube is therefore given by

k_n = \sum_{i=1}^{n} \varphi\left(\frac{x - x_i}{h_n}\right)   (13)

Substituting into equation (7), p_n(x) = k_n/(n V_n), yields the estimate

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n} \varphi\left(\frac{x - x_i}{h_n}\right)   (14)

Hence the window function \varphi, in this context called a Parzen window, tells us how to weight all of the samples in D to determine p(x) at a particular x.
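A sketch of Eqs. (12)-(14) with the hypercube window; the 2-D test data and the bandwidth h = 0.3 are hypothetical choices, not values from the lecture.

```python
import numpy as np

def hypercube_window(u):
    """phi(u) = 1 if |u_j| <= 1/2 for every coordinate j, else 0 (Eq. 12)."""
    return np.all(np.abs(np.atleast_2d(u)) <= 0.5, axis=1).astype(float)

def parzen_estimate(x, data, h):
    """Eq. (14): p_n(x) = (1/n) sum_i (1/V_n) phi((x - x_i)/h_n), with V_n = h^d."""
    data = np.atleast_2d(data)
    n, d = data.shape
    return hypercube_window((x - data) / h).sum() / (n * h**d)

rng = np.random.default_rng(2)
data = rng.standard_normal((5000, 2))                 # samples from a 2-D standard Gaussian
print(parzen_estimate(np.zeros(2), data, h=0.3))      # true p(0, 0) = 1/(2*pi) ~ 0.159
```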

Example

[Figure: Parzen-window density estimates of the same two-Gaussian mixture used for the histograms, for window widths h = 0.005, 0.07, and 0.2.]

But what undesirable trait of histograms is inherited by Parzen window density estimates of the form we've just defined?

Discontinuities...

Generalizing the Kernel Function

What if we allow a more general class of window functions rather than the hypercube? If we think of the window function as an interpolator, then rather than considering the window function about x only, we can visualize it as a kernel sitting on each data sample x_i in D.

If we require the following two conditions on the kernel function \varphi, then we can be assured that the resulting density p_n(x) will be proper: non-negative and integrating to 1.

\varphi(x) \ge 0   (15)
\int \varphi(u) \, du = 1   (16)

If we maintain V_n = h_n^d as before, it then follows that p_n(x) will also satisfy these conditions.

Example: A Univariate Gaussian Kernel

A popular choice of kernel is the Gaussian kernel:

\varphi(u) = \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{1}{2} u^2\right]   (17)

The resulting density estimate is given by

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{\sqrt{2\pi}\, h_n} \exp\left[-\frac{(x - x_i)^2}{2 h_n^2}\right]   (18)

It will give us smoother estimates without the discontinuities of the hypercube kernel.
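Eq. (18) in a few lines of NumPy, as a sketch; the bandwidth and the synthetic bimodal data are made up for illustration.

```python
import numpy as np

def gaussian_kde_1d(x, data, h):
    """Eq. (18): p_n(x) = (1/n) sum_i (1/(sqrt(2*pi)*h)) exp(-(x - x_i)^2 / (2 h^2))."""
    z = (np.asarray(x)[..., None] - np.asarray(data)) / h   # broadcast over evaluation points
    return np.exp(-0.5 * z**2).sum(axis=-1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.3, 0.1, 50), rng.normal(0.7, 0.1, 50)])
grid = np.linspace(0.0, 1.0, 5)
print(gaussian_kde_1d(grid, data, h=0.07))               # smooth, no hypercube discontinuities
```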

Effect of the Window Width (I)

An important question is: what effect does the window width h_n have on p_n(x)?

Define \delta_n(x) as

\delta_n(x) = \frac{1}{V_n} \varphi\left(\frac{x}{h_n}\right)   (19)

and rewrite p_n(x) as the average

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \delta_n(x - x_i)   (20)

Effect of the Window Width (II)

h_n clearly affects both the amplitude and the width of \delta_n(x).

[FIGURE 4.3. Examples of two-dimensional circularly symmetric normal Parzen windows \delta(x) for three different values of h. Note that because the \delta(x) are normalized, different vertical scales must be used to show their structure. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.]

[FIGURE 4.4. Three Parzen-window density estimates based on the same set of five samples, using the window functions of Figure 4.3.]

Effect of Window Width (and, hence, Volume V_n)

But for any value of h_n, the distribution is normalized:

\int \delta_n(x - x_i) \, dx = \int \frac{1}{V_n} \varphi\left(\frac{x - x_i}{h_n}\right) dx = \int \varphi(u) \, du = 1   (21)

If V_n is too large, the estimate will suffer from too little resolution.

If V_n is too small, the estimate will suffer from too much statistical variability.

In theory (with an unlimited number of samples), we can let V_n slowly approach zero as n increases, and then p_n(x) will converge to the unknown p(x). But in practice we can, at best, seek some compromise.

Example: Revisiting the Univariate Gaussian Kernel

[FIGURE 4.5. Parzen-window estimates of a univariate normal density using different window widths (h_1 = 1, 0.5, 0.1) and numbers of samples (n = 1, 10, 100, \infty). The vertical axes have been scaled to best show the structure in each graph. Note particularly that the n = \infty estimates are the same regardless of window width.]

Example: A Bimodal Distribution

[FIGURE 4.7. Parzen-window estimates of a bimodal distribution using different window widths (h_1 = 1, 0.5, 0.2) and numbers of samples (n = 1, 16, 256, \infty). Note particularly that the n = \infty estimates are the same regardless of window width.]

Parzen Window-Based Classifiers

Estimate the densities for each category. Classify a query point by the label corresponding to the maximum posterior (i.e., one can include priors).

As you might have guessed, the decision regions for a Parzen window-based classifier depend upon the kernel function.

[FIGURE 4.8. The decision boundaries in a two-dimensional Parzen-window dichotomizer depend on the window width h. At the left, a small h leads to boundaries that are more complicated than for the large h used on the same data set, shown at the right.]

Parzen Window-Based Classifiers

During training, we can make the error arbitrarily low by making the window sufficiently small, but this will have an ill effect during testing (which is our ultimate need). Can you think of any possible rules for choosing the kernel width?

One possibility is to use cross-validation. Break the data up into a training set and a validation set. Then perform training on the training set with varying bandwidths, and select the bandwidth that minimizes the error on the validation set.

There is little theoretical justification for choosing one window width over another.
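A sketch of that cross-validation procedure for a two-class Parzen classifier with a Gaussian kernel; the candidate bandwidths and the 1-D synthetic data are hypothetical, and equal class priors are assumed.

```python
import numpy as np

def class_densities(x, train_x, train_y, h):
    """Per-class Gaussian-kernel Parzen density estimates, one column per class."""
    cols = []
    for c in np.unique(train_y):
        pts = train_x[train_y == c]
        z = (x[:, None] - pts[None, :]) / h
        cols.append(np.exp(-0.5 * z**2).sum(axis=1) / (len(pts) * h * np.sqrt(2 * np.pi)))
    return np.stack(cols, axis=1)

def validation_error(h, tr_x, tr_y, va_x, va_y):
    pred = np.argmax(class_densities(va_x, tr_x, tr_y, h), axis=1)  # equal priors assumed
    return np.mean(pred != va_y)

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])
y = np.repeat([0, 1], 200)
idx = rng.permutation(400)
tr, va = idx[:300], idx[300:]
for h in (0.01, 0.1, 0.5, 1.0):                                     # candidate bandwidths
    print(f"h={h:.2f}  validation error={validation_error(h, x[tr], y[tr], x[va], y[va]):.3f}")
```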

k-Nearest Neighbor Methods

Selecting the best window / bandwidth is a severe limiting factor for Parzen window estimators. k_n-NN methods circumvent this problem by making the window size a function of the actual training data.

The basic idea is to center our window around x and let it grow until it captures k_n samples, where k_n is a function of n. These samples are the k_n nearest neighbors of x.

If the density is high near x, then the window will be relatively small, leading to good resolution. If the density is low near x, the window will grow large, but it will stop soon after it enters regions of higher density.

In either case, we estimate p_n(x) according to

p_n(x) = \frac{k_n}{n V_n}   (22)

k Nearest Neighbors

p_n(x) = \frac{k_n}{n V_n}

We want k_n to go to infinity as n goes to infinity, thereby assuring us that k_n/n will be a good estimate of the probability that a point will fall in the window of volume V_n.

But we also want k_n to grow sufficiently slowly that the size of our window will go to zero. Thus we want k_n/n to go to zero.

Recall these conditions from the earlier discussion; they will ensure that p_n(x) converges to p(x) as n approaches infinity.
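A 1-D sketch of Eq. (22): grow an interval around x until it contains the k_n = \sqrt{n} nearest samples. The evaluation point and sample size are illustrative choices.

```python
import numpy as np

def knn_density_1d(x, data, k):
    """Eq. (22): p_n(x) = k / (n * V_n), where V_n is the width of the smallest
    interval centred at x containing the k nearest samples."""
    dists = np.sort(np.abs(np.asarray(data) - x))
    V = 2 * dists[k - 1]                       # interval just reaching the k-th neighbour
    return k / (len(data) * V)

rng = np.random.default_rng(5)
data = rng.standard_normal(10_000)
k = int(np.sqrt(len(data)))                    # k_n = sqrt(n), as suggested earlier
print(knn_density_1d(0.0, data, k))            # true p(0) for N(0, 1) is ~0.399
```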

Examples of k_n-NN Estimation

Notice the discontinuities in the slopes of the estimate.

[Figures: eight points in one dimension with their k-nearest-neighbor density estimates for k = 3 and 5, where the discontinuities in the slopes generally lie away from the positions of the prototype points; and FIGURE 4.11, the k-nearest-neighbor estimate of a two-dimensional density, which for finite n can be quite jagged, with the discontinuities in slope generally occurring along lines away from the positions of the prototype points.]

k-NN Estimation from 1 Sample

We don't expect the density estimate from a single sample to be very good, but in the case of k-NN it will actually diverge! With n = 1 and k_n = \sqrt{n} = 1, the estimate is

p_n(x) = \frac{1}{2 |x - x_1|}   (23)

whose integral diverges.

But as we increase the number of samples, the estimate improves.

[Figure: k_n-nearest-neighbor estimates of the same bimodal distribution for n = 1, 16, 256, and \infty, with k_n = \sqrt{n} = 1, 4, 16, and \infty.]

Limitations

The k_n-NN estimator suffers from a flaw analogous to the one the Parzen window methods suffer from. What is it?

How do we specify k_n? We saw earlier that the specification of k_n can lead to radically different density estimates (in practical situations where the number of training samples is limited).

One could obtain a sequence of estimates by taking k_n = k_1 \sqrt{n} and choosing different values of k_1. But, like the Parzen window size, one choice is as good as another absent any additional information.

Similarly, in classification scenarios, we can base our judgment on classification error.

k-NN Posterior Estimation for Classification

We can directly apply the k-NN methods to estimate the posterior probabilities P(\omega_i | x) from a set of n labeled samples.

Place a window of volume V around x and capture k samples, with k_i of them turning out to have label \omega_i.

The estimate of the joint probability is thus

p_n(x, \omega_i) = \frac{k_i}{nV}   (24)

A reasonable estimate of the posterior is thus

P_n(\omega_i | x) = \frac{p_n(x, \omega_i)}{\sum_c p_n(x, \omega_c)} = \frac{k_i}{k}   (25)

Hence the posterior probability for \omega_i is simply the fraction of samples within the window that are labeled \omega_i. This is a simple and intuitive result.
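Eq. (25) as a sketch: the posterior for each class is just the label fraction among the k nearest training samples. The 2-D data, the choice k = 15, and the query point are hypothetical.

```python
import numpy as np

def knn_posteriors(x, train_x, train_y, k, num_classes):
    """Eq. (25): P_n(w_i | x) = k_i / k over the k nearest neighbours of x."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest_labels = train_y[np.argsort(dists)[:k]]
    return np.bincount(nearest_labels, minlength=num_classes) / k

rng = np.random.default_rng(6)
train_x = np.concatenate([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
train_y = np.repeat([0, 1], 100)
print(knn_posteriors(np.array([0.8, 0.8]), train_x, train_y, k=15, num_classes=2))
```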

Example: Figure-Ground Discrimination

Source: Zhao and Davis. Iterative Figure-Ground Discrimination. ICPR 2004.

Figure-ground discrimination is an important low-level vision task. We want to separate the pixels that contain some foreground object (specified in some meaningful way) from the background.

Example: Figure-Ground Discrimination

This paper presents a method for figure-ground discrimination based on nonparametric densities for the foreground and background, estimated from a subset of the pixels in each of the two regions. The authors propose an algorithm called iterative sampling-expectation for performing the actual segmentation. The required input is simply a region of interest (mostly) containing the object.

Example: Figure-Ground Discrimination

Given a set of n samples S = \{x_i\}, where each x_i is a d-dimensional vector, the kernel density estimate is defined as

\hat{p}(y) = \frac{1}{n \sigma_1 \cdots \sigma_d} \sum_{i=1}^{n} \prod_{j=1}^{d} \varphi\left(\frac{y_j - x_{ij}}{\sigma_j}\right)   (26)

where the same kernel \varphi is used in every dimension, with a different bandwidth \sigma_j per dimension.

The Representation

The representation used here is a function of RGB:

r = R / (R + G + B)   (27)
g = G / (R + G + B)   (28)
s = (R + G + B) / 3   (29)

Separating the chromaticity from the brightness allows them to use a wider bandwidth in the brightness dimension to account for variability due to shading effects, while much narrower kernels can be used on the r and g chromaticity channels to enable better discrimination.

The Color Density

Given a sample of pixels S = \{x_i = (r_i, g_i, s_i)\}, the color density estimate is given by

\hat{P}(x = (r, g, s)) = \frac{1}{n} \sum_{i=1}^{n} K_{\sigma_r}(r - r_i) \, K_{\sigma_g}(g - g_i) \, K_{\sigma_s}(s - s_i)   (30)

where we have simplified the kernel notation:

K_\sigma(t) = \frac{1}{\sigma} \varphi\left(\frac{t}{\sigma}\right)   (31)

They use Gaussian kernels,

K_\sigma(t) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{1}{2} \left(\frac{t}{\sigma}\right)^2\right]   (32)

with a different bandwidth in each dimension.
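Eqs. (27)-(32) sketched in NumPy; the bandwidth values and the random pixels below are placeholders for illustration, not the paper's settings.

```python
import numpy as np

def rgb_to_rgs(rgb):
    """Eqs. (27)-(29): chromaticity (r, g) plus brightness s."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    return np.concatenate([rgb[..., :1] / total,          # r
                           rgb[..., 1:2] / total,         # g
                           total / 3.0], axis=-1)         # s

def color_density(x, samples, sigmas):
    """Eq. (30): mean over sample pixels of the product of per-channel Gaussian kernels (Eq. 32)."""
    t = (x - samples) / sigmas                                     # (n, 3) standardized differences
    k = np.exp(-0.5 * t**2) / (np.sqrt(2 * np.pi) * sigmas)        # per-channel K_sigma
    return np.mean(np.prod(k, axis=1))

rng = np.random.default_rng(7)
pixels = rgb_to_rgs(rng.integers(1, 256, size=(500, 3)))           # placeholder foreground sample
sigmas = np.array([0.01, 0.01, 20.0])                              # narrow chromaticity, wide brightness
print(color_density(pixels[0], pixels, sigmas))
```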

Data-Driven Bandwidth

The bandwidth for each channel is calculated directly from the image based on sample statistics:

\sigma \approx 1.06 \, \hat{\sigma} \, n^{-1/5}   (33)

where \hat{\sigma}^2 is the sample variance.
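Eq. (33) applied per channel, as a sketch; the sample here is synthetic.

```python
import numpy as np

def rule_of_thumb_bandwidth(channel_values):
    """Eq. (33): sigma ~= 1.06 * sigma_hat * n^(-1/5)."""
    x = np.asarray(channel_values, dtype=float)
    return 1.06 * x.std(ddof=1) * x.size ** (-0.2)

rng = np.random.default_rng(8)
print(rule_of_thumb_bandwidth(rng.normal(0.4, 0.02, 5000)))   # a narrow chromaticity channel gives a small sigma
```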

Initialization: Choosing the Initial Scale

For initialization, they compute a distance between the foreground and background distributions by varying the scale of a single Gaussian kernel (on the foreground).

To evaluate the significance of a particular scale, they compute the normalized KL-divergence

nKL(\hat{P}_{fg} \| \hat{P}_{bg}) = \frac{\sum_{i=1}^{n} \hat{P}_{fg}(x_i) \log \frac{\hat{P}_{fg}(x_i)}{\hat{P}_{bg}(x_i)}}{\sum_{i=1}^{n} \hat{P}_{fg}(x_i)}   (34)

where \hat{P}_{fg} and \hat{P}_{bg} are the density estimates for the foreground and background regions, respectively. To compute each, they use about 6% of the pixels (using all of the pixels would lead to quite slow performance).
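A sketch of Eq. (34). The inputs are the foreground and background density estimates evaluated at the sampled foreground pixels (the values below are synthetic placeholders), and a small epsilon guards the logarithm.

```python
import numpy as np

def normalized_kl(p_fg, p_bg, eps=1e-12):
    """Eq. (34): KL divergence of foreground vs. background density values,
    normalized by the total foreground density over the sampled pixels."""
    p_fg = np.asarray(p_fg, dtype=float) + eps
    p_bg = np.asarray(p_bg, dtype=float) + eps
    return np.sum(p_fg * np.log(p_fg / p_bg)) / np.sum(p_fg)

rng = np.random.default_rng(9)
p_fg = rng.uniform(0.5, 1.0, 300)     # placeholder foreground density values
p_bg = rng.uniform(0.0, 0.5, 300)     # placeholder background density values
print(normalized_kl(p_fg, p_bg))      # larger -> better figure/ground separation at this scale
```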

Example: Figure-Ground Discrimination

[Figure 2 (from Zhao and Davis): segmentation results at different scales.]

Iterative Sampling-Expectation Algorithm

Given the initial segmentation, they need to refine the models and labels to adapt better to the image. However, this is a chicken-and-egg problem: if we knew the labels, we could compute the models, and if we knew the models, we could compute the best labels.