Lecture 3a: Dirichlet processes


1 Lecture 3a: Dirichlet processes
Cédric Archambeau, Centre for Computational Statistics and Machine Learning, Department of Computer Science, University College London.
Advanced Topics in Machine Learning (MSc in Intelligent Systems), January 2008.

2 Today's plan
- The Dirichlet distribution
- The Dirichlet process
- Representations of the Dirichlet process
- Infinite mixture models
- DP mixtures of regressors
- DP mixtures of dynamical systems
Guest speakers: Yee Whye Teh (Gatsby unit) and David Barber (CSML) on 05/02.

3 (Bayesian) Nonparametric models
- Real data is complex and the parametric assumption is often wrong.
- Nonparametric models allow us to relax the parametric assumption, bringing significant flexibility to our models.
- Nonparametric models lead to model selection/averaging behaviours without the cost of actually doing model selection/averaging.
- Nonparametric models are gaining popularity, spurred by growth in computational resources and inference algorithms.
- Examples: Gaussian processes (GPs) and Dirichlet processes (DPs). Other models which will not be discussed include Indian buffet processes, beta processes, tree processes, ...

4 Gaussian processes in a nutshell
- A GP is a distribution over (latent) functions.
- A GP is defined by a Gaussian probability measure¹, i.e. it is characterised by a mean function $m(\cdot)$ and a covariance function $k(\cdot,\cdot)$.
- The consistency property of the Gaussian measure preserves the Gaussian parametric form when integrating out latent function values:
  - Every finite subset of latent function values is jointly Gaussian.
  - The marginals are Gaussian.
  - Gaussian conditionals can be used to sample from a GP (as sketched below).
¹ Let $\mathcal{A}$ be the set of subsets of a space $\mathcal{X}$. A measure $G : \mathcal{A} \to \Omega$ assigns a nonnegative value to any subset of $\mathcal{X}$. We say that $G$ is a probability measure when $\Omega = [0, 1]$.
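The consistency property can be made concrete: to sample a GP at any finite set of inputs, draw from the corresponding multivariate Gaussian. A minimal sketch in Python/NumPy, assuming a zero mean function and a squared-exponential covariance (both choices are illustrative, not prescribed by the slides):

```python
import numpy as np

def sample_gp(x, m, k, n_samples=3):
    # Any finite subset of latent function values is jointly Gaussian
    # with mean vector m(x) and covariance matrix k(x, x').
    K = k(x[:, None], x[None, :]) + 1e-9 * np.eye(len(x))  # jitter for numerical stability
    return np.random.multivariate_normal(m(x), K, size=n_samples)

# Illustrative choices: zero mean and squared-exponential covariance.
x = np.linspace(-3, 3, 100)
f = sample_gp(x,
              m=lambda x: np.zeros_like(x),
              k=lambda a, b: np.exp(-0.5 * (a - b) ** 2))
```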

5 Dirichlet distribution
The Dirichlet distribution is a distribution over $K$ random variables $\pi_1, \dots, \pi_K$:
$$(\pi_1, \dots, \pi_K) \sim \mathcal{D}(\alpha_1, \dots, \alpha_K) = \frac{\Gamma(\sum_k \alpha_k)}{\prod_k \Gamma(\alpha_k)} \prod_{k=1}^K \pi_k^{\alpha_k - 1}.$$
We say that the Dirichlet distribution is a distribution over the $(K-1)$-dimensional probability simplex:
$$\Delta_K = \{(\pi_1, \dots, \pi_K) : 0 \le \pi_k \le 1,\ \textstyle\sum_k \pi_k = 1\}.$$
The mean and the variance are respectively given by
$$\langle \pi_k \rangle = \frac{\alpha_k}{\alpha}, \qquad \langle (\pi_k - \langle \pi_k \rangle)^2 \rangle = \frac{\alpha_k (\alpha - \alpha_k)}{\alpha^2 (\alpha + 1)},$$
where $\alpha \equiv \sum_k \alpha_k$.
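As a quick numerical sanity check of these moment formulas, one can draw from a Dirichlet with NumPy and compare empirical moments against $\alpha_k/\alpha$ and $\alpha_k(\alpha - \alpha_k)/(\alpha^2(\alpha+1))$; the concentration values below are arbitrary:

```python
import numpy as np

alpha = np.array([2.0, 3.0, 5.0])           # (alpha_1, ..., alpha_K), arbitrary
samples = np.random.dirichlet(alpha, size=100_000)

a = alpha.sum()
print(samples.mean(axis=0))                  # empirical mean
print(alpha / a)                             # alpha_k / alpha
print(samples.var(axis=0))                   # empirical variance
print(alpha * (a - alpha) / (a**2 * (a + 1)))  # alpha_k (alpha - alpha_k) / (alpha^2 (alpha + 1))
```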

6 Examples

7 Properties of the Dirichlet distribution
The Beta distribution corresponds to the Dirichlet distribution with $K = 2$:
$$\pi \sim \mathcal{B}(\alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \pi^{\alpha - 1} (1 - \pi)^{\beta - 1}.$$
Agglomerative property: let $(\pi_1, \dots, \pi_K) \sim \mathcal{D}(\alpha_1, \dots, \alpha_K)$. Combining entries of probability vectors preserves the Dirichlet property:
$$(\pi_1 + \pi_2, \pi_3, \dots, \pi_K) \sim \mathcal{D}(\alpha_1 + \alpha_2, \alpha_3, \dots, \alpha_K).$$
The property holds for any partition of $\{1, \dots, K\}$.
Decimative property: let $(\pi_1, \dots, \pi_K) \sim \mathcal{D}(\alpha_1, \dots, \alpha_K)$ and $(\tau_1, \tau_2) \sim \mathcal{D}(\alpha_1 \beta_1, \alpha_1 \beta_2)$, with $\beta_1 + \beta_2 = 1$. The converse of the agglomerative property is also true:
$$(\pi_1 \tau_1, \pi_1 \tau_2, \pi_2, \pi_3, \dots, \pi_K) \sim \mathcal{D}(\alpha_1 \beta_1, \alpha_1 \beta_2, \alpha_2, \alpha_3, \dots, \alpha_K).$$
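The agglomerative property is easy to check by simulation: combining the first two entries of Dirichlet draws should match direct draws from the reduced Dirichlet. A small sketch with arbitrary parameter values:

```python
import numpy as np

alpha = np.array([1.0, 2.0, 3.0, 4.0])       # arbitrary concentrations
x = np.random.dirichlet(alpha, size=200_000)

# Combine the first two entries: (pi_1 + pi_2, pi_3, pi_4).
combined = np.column_stack([x[:, 0] + x[:, 1], x[:, 2], x[:, 3]])
# Direct draws from the reduced Dirichlet D(alpha_1 + alpha_2, alpha_3, alpha_4).
reduced = np.random.dirichlet([alpha[0] + alpha[1], alpha[2], alpha[3]], size=200_000)

print(combined.mean(axis=0), reduced.mean(axis=0))  # agree up to Monte Carlo error
print(combined.var(axis=0), reduced.var(axis=0))
```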

8 Dirichlet process (DP)
A DP is a distribution over probability measures, such that marginals on finite partitions are Dirichlet distributed. Let $G$ and $H$ be probability measures over $\mathcal{X}$. The random probability measure $G$ is a Dirichlet process with base measure $H(\cdot)$ and intensity parameter $\alpha$,
$$G(\cdot) \sim \mathrm{DP}(\alpha, H(\cdot)),$$
if for any finite partition $\{A_1, \dots, A_K\}$ of $\mathcal{X}$ we have
$$(G(A_1), \dots, G(A_K)) \sim \mathcal{D}(\alpha H(A_1), \dots, \alpha H(A_K)).$$
The mean measure and its variance are respectively given by
$$\langle G(A) \rangle = H(A), \qquad \langle (G(A) - \langle G(A) \rangle)^2 \rangle = \frac{H(A)(1 - H(A))}{\alpha + 1},$$
where $A$ is any measurable subset of $\mathcal{X}$.

9 The mean measure follows directly from the expression of the mean of the Dirichlet distribution:
$$\langle G(A_k) \rangle = \frac{\alpha H(A_k)}{\sum_k \alpha H(A_k)} = H(A_k),$$
where we used the fact that $\sum_k H(A_k) = 1$. And so does the variance:
$$\langle (G(A_k) - H(A_k))^2 \rangle = \frac{\alpha H(A_k) \left( \sum_k \alpha H(A_k) - \alpha H(A_k) \right)}{\left( \sum_k \alpha H(A_k) \right)^2 \left( \sum_k \alpha H(A_k) + 1 \right)} = \frac{\alpha^2 H(A_k)(1 - H(A_k))}{\alpha^2 (\alpha + 1)} = \frac{H(A_k)(1 - H(A_k))}{\alpha + 1}.$$
(Figure: a partition $A_1, \dots, A_6$ of $\mathcal{X}$.)

10 Posterior Dirichlet process
Consider a fixed partition $\{A_1, \dots, A_K\}$ of $\mathcal{X}$. If $G(\cdot) \sim \mathrm{DP}(\alpha, H(\cdot))$, then
$$(G(A_1), \dots, G(A_K)) \sim \mathcal{D}(\alpha H(A_1), \dots, \alpha H(A_K)).$$
We may treat the random measure $G$ as a distribution over $\mathcal{X}$, such that $P(\theta \in A_k \mid G) = G(A_k)$. The posterior is again a Dirichlet distribution:
$$(G(A_1), \dots, G(A_K)) \mid \theta \sim \mathcal{D}(\alpha H(A_1) + \delta_\theta(A_1), \dots, \alpha H(A_K) + \delta_\theta(A_K)),$$
where $\delta_\theta(A_k) = 1$ if $\theta \in A_k$ and $0$ otherwise. The above holds for every finite partition of $\mathcal{X}$ and for any base measure, e.g. one with a density, $H(d\theta) = p(\theta)\,d\theta$. Hence, this suggests that the posterior process is a Dirichlet process:
$$G(\cdot) \mid \theta \sim \mathrm{DP}\left( \alpha + 1, \frac{\alpha H(\cdot) + \delta_\theta(\cdot)}{\alpha + 1} \right),$$
where $\delta_\theta(\cdot)$ is Dirac's delta centred at $\theta$, which we call an atom.

11 Let us denote $G(A_k)$ by $\pi_k$ and $\alpha H(A_k)$ by $\alpha_k$. We introduce the auxiliary indicator variable $z$, which takes the value $k \in \{1, \dots, K\}$ when $\theta \in A_k$. Hence, we have
$$(\pi_1, \dots, \pi_K) \sim \mathcal{D}(\alpha_1, \dots, \alpha_K), \qquad z \mid \pi_1, \dots, \pi_K \sim \mathrm{Discrete}(\pi_1, \dots, \pi_K).$$
If $z$ is Discrete with parameters $\pi_1, \dots, \pi_K$, then $z$ takes the value $k \in \{1, \dots, K\}$ with probability $\pi_k$. The posterior is given by Bayes' rule:
$$p(\pi_1, \dots, \pi_K \mid z = k) \propto \pi_k \prod_k \pi_k^{\alpha_k - 1} \implies \pi_1, \dots, \pi_K \mid z \sim \mathcal{D}(\alpha_1 + \delta_z(1), \dots, \alpha_K + \delta_z(K)),$$
where $\delta_z(\cdot)$ is Kronecker's delta centred at $z$. The marginal is given by
$$p(z = k) = \langle \pi_k \rangle = \frac{\alpha_k}{\sum_k \alpha_k} \implies z \sim \mathrm{Discrete}\left( \frac{\alpha_1}{\sum_k \alpha_k}, \dots, \frac{\alpha_K}{\sum_k \alpha_k} \right).$$
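A compact sketch of this finite-dimensional conjugacy: observing $z = k$ simply adds one to the $k$th concentration parameter. The parameter values below are illustrative:

```python
import numpy as np

alpha = np.array([1.0, 2.0, 3.0])            # illustrative concentrations
K = len(alpha)

# Generative side: pi ~ D(alpha), z | pi ~ Discrete(pi).
pi = np.random.dirichlet(alpha)
z = np.random.choice(K, p=pi)

# Conjugate posterior: pi | z ~ D(alpha + one-hot count at z).
alpha_post = alpha.copy()
alpha_post[z] += 1.0                          # alpha_k + delta_z(k)

# Marginal of z with pi integrated out: p(z = k) = alpha_k / sum(alpha).
print(alpha / alpha.sum())
```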

12 Remark
If $G \sim \mathrm{DP}(\alpha, H)$ and $\theta \mid G \sim G$, then the posterior process and the marginal are respectively given by
$$G \mid \theta \sim \mathrm{DP}\left( \alpha + 1, \frac{\alpha H + \delta_\theta}{\alpha + 1} \right), \qquad \theta \sim H.$$
It can be shown that the converse is also true, such that
$$G \sim \mathrm{DP}(\alpha, H),\ \theta \mid G \sim G \quad \Longleftrightarrow \quad \theta \sim H,\ G \mid \theta \sim \mathrm{DP}\left( \alpha + 1, \frac{\alpha H + \delta_\theta}{\alpha + 1} \right).$$

13 Blackwell-MacQueen urn scheme
The Blackwell-MacQueen urn scheme tells us how to sample sequentially from a DP. It is like a representer, i.e. a finite projection of an infinite object. It produces a sequence $\theta_1, \theta_2, \dots$ with the following conditionals:
$$\theta_N \mid \theta_{1:N-1} \sim \frac{\alpha H + \sum_{n=1}^{N-1} \delta_{\theta_n}}{\alpha + N - 1}.$$
This can be interpreted as picking balls of different colours from an urn:
- Start with no balls in the urn.
- With probability $\frac{\alpha}{\alpha + N - 1}$, draw $\theta_N \sim H$ and add a ball of colour $\theta_N$ to the urn.
- With probability $\frac{N - 1}{\alpha + N - 1}$, pick a ball at random from the urn, record its colour as $\theta_N$, and return the ball to the urn together with a second ball of the same colour.
Hence, there is positive probability that $\theta_N$ takes the same value as $\theta_n$ for some $n < N$, i.e. the $\theta_1, \dots, \theta_N$ cluster together (see CRP). The Blackwell-MacQueen urn scheme is also known as the Pólya urn scheme.
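The urn scheme translates directly into a sampler. A minimal sketch, assuming for illustration that the base measure H is a standard Gaussian (any H we can sample from would do):

```python
import numpy as np

def blackwell_macqueen(N, alpha, H=np.random.randn):
    # Sample theta_1, ..., theta_N sequentially from the urn scheme:
    # with probability alpha / (alpha + n), draw a fresh value from H;
    # otherwise copy a previously drawn value uniformly at random.
    thetas = []
    for n in range(N):
        if np.random.rand() < alpha / (alpha + n):
            thetas.append(H())                           # new colour drawn from H
        else:
            thetas.append(thetas[np.random.randint(n)])  # reuse an old colour
    return np.array(thetas)

draws = blackwell_macqueen(100, alpha=2.0)   # H assumed standard Gaussian here
print(len(np.unique(draws)))                 # clustering: far fewer than 100 distinct values
```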

14 First sample:
$$G \sim \mathrm{DP}(\alpha, H),\ \theta_1 \mid G \sim G \quad \Longleftrightarrow \quad \theta_1 \sim H,\ G \mid \theta_1 \sim \mathrm{DP}\left( \alpha + 1, \frac{\alpha H + \delta_{\theta_1}}{\alpha + 1} \right).$$
Second sample:
$$G \mid \theta_1 \sim \mathrm{DP}\left( \alpha + 1, \frac{\alpha H + \delta_{\theta_1}}{\alpha + 1} \right),\ \theta_2 \mid \theta_1, G \sim G \quad \Longleftrightarrow \quad \theta_2 \mid \theta_1 \sim \frac{\alpha H + \delta_{\theta_1}}{\alpha + 1},\ G \mid \theta_1, \theta_2 \sim \mathrm{DP}\left( \alpha + 2, \frac{\alpha H + \delta_{\theta_1} + \delta_{\theta_2}}{\alpha + 2} \right).$$
$N$th sample:
$$G \mid \theta_{1:N-1} \sim \mathrm{DP}\left( \alpha + N - 1, \frac{\alpha H + \sum_{n=1}^{N-1} \delta_{\theta_n}}{\alpha + N - 1} \right),\ \theta_N \mid \theta_{1:N-1}, G \sim G \quad \Longleftrightarrow \quad \theta_N \mid \theta_{1:N-1} \sim \frac{\alpha H + \sum_{n=1}^{N-1} \delta_{\theta_n}}{\alpha + N - 1},\ G \mid \theta_{1:N} \sim \mathrm{DP}\left( \alpha + N, \frac{\alpha H + \sum_{n=1}^{N} \delta_{\theta_n}}{\alpha + N} \right).$$

15 Chinese restaurant process (CRP)
Random draws $\theta_1, \dots, \theta_N$ from a Blackwell-MacQueen urn scheme induce a random partition of $1, \dots, N$:
- $\theta_1, \dots, \theta_N$ take $K \le N$ distinct values, say $\theta_1^*, \dots, \theta_K^*$.
- Every drawn sequence defines a partition of $1, \dots, N$ into $K$ clusters, such that if $n$ is in cluster $k$, then $\theta_n = \theta_k^*$.
- The induced distribution over partitions is a CRP.
Generating from the CRP (see the sampler below):
- The first customer sits at the first table.
- Customer $N$ sits at a new table with probability $\frac{\alpha}{\alpha + N - 1}$, and at table $k$ with probability $\frac{N_k}{\alpha + N - 1}$, where $N_k$ is the number of customers at table $k$.
The CRP exhibits the clustering property of the DP:
- Tables are clusters, i.e. distinct values $\{\theta_k^*\}_{k=1}^K$ drawn from $H$.
- Customers are the actual realisations, i.e. $\theta_n = \theta_{z_n}^*$, where $z_n \in \{1, \dots, K\}$ indicates at which table customer $n$ sat.
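A minimal CRP sampler following the seating rule above; it returns the table assignments and table sizes:

```python
import numpy as np

def crp(N, alpha):
    # Table assignments z_1, ..., z_N from a Chinese restaurant process.
    counts = []                               # N_k: customers at each table
    z = []
    for n in range(N):
        # P(new table) = alpha / (alpha + n); P(table k) = N_k / (alpha + n).
        probs = np.array(counts + [alpha]) / (alpha + n)
        table = np.random.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)                  # open a new table
        else:
            counts[table] += 1
        z.append(table)
    return np.array(z), counts

z, counts = crp(1000, alpha=5.0)
print(len(counts))                            # number of occupied tables K
```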

16 Stick-breaking construction
We would like to exhibit the discrete character of $G \sim \mathrm{DP}(\alpha, H)$. Consider $\theta \sim H$ and the partition $\{\theta, \mathcal{X} \setminus \theta\}$ of $\mathcal{X}$. The posterior process $G \mid \theta \sim \mathrm{DP}\left( \alpha + 1, \frac{\alpha H + \delta_\theta}{\alpha + 1} \right)$ implies that
$$(G(\theta), G(\mathcal{X} \setminus \theta)) \mid \theta \sim \mathcal{D}(\alpha H(\theta) + \delta_\theta(\theta), \alpha H(\mathcal{X} \setminus \theta) + \delta_\theta(\mathcal{X} \setminus \theta)) = \mathcal{D}(1, \alpha),$$
since $H(\theta) = 0$ for a non-atomic base measure, so that the two concentration parameters reduce to $1$ and $\alpha$. Hence
$$G(\theta) \mid \theta \sim \mathcal{B}(1, \alpha).$$
Hence, $G$ has a point mass located at $\theta$, such that
$$G = \beta \delta_\theta + (1 - \beta) G', \qquad \beta \sim \mathcal{B}(1, \alpha),$$
where $G'$ is the (renormalised) probability measure without the point mass.

17 Stick-breaking construction (continued)
What form has the probability measure $G'$? Let $\{A_1, \dots, A_K\}$ be a partition of $\mathcal{X} \setminus \theta$. Since $G(\mathcal{X} \setminus \theta) = \sum_k G(A_k)$, the agglomerative property of the Dirichlet leads to
$$(G(\theta), G(A_1), \dots, G(A_K)) \mid \theta \sim \mathcal{D}(1, \alpha H(A_1), \dots, \alpha H(A_K)).$$
Hence, we see that
$$G(\theta) \mid \theta = \beta, \qquad G(A_1) \mid \theta = (1 - \beta) G'(A_1), \quad \dots, \quad G(A_K) \mid \theta = (1 - \beta) G'(A_K).$$
From the decimative property of the Dirichlet, we get
$$(G'(A_1), \dots, G'(A_K)) \sim \mathcal{D}(\alpha H(A_1), \dots, \alpha H(A_K)).$$
In other words, $G' \sim \mathrm{DP}(\alpha, H)$!

18 Stick-breaking construction (continued)
What form has the random measure $G$?
$$G \sim \mathrm{DP}(\alpha, H) \implies G = \beta_1 \delta_{\theta_1^*} + (1 - \beta_1) G_1, \quad G_1 \sim \mathrm{DP}(\alpha, H),$$
$$\implies G = \beta_1 \delta_{\theta_1^*} + (1 - \beta_1)\left(\beta_2 \delta_{\theta_2^*} + (1 - \beta_2) G_2\right), \quad G_2 \sim \mathrm{DP}(\alpha, H), \quad \dots$$
$$G = \sum_{n=1}^{\infty} \pi_n \delta_{\theta_n^*}, \qquad \pi_n = \beta_n \prod_{k=1}^{n-1} (1 - \beta_k), \qquad \beta_n \sim \mathcal{B}(1, \alpha), \qquad \theta_n^* \sim H.$$
Draws from the DP look like a weighted sum of point masses, where the weights are drawn from the stick-breaking construction². The steps rely on regularity assumptions on $\mathcal{X}$ and smoothness assumptions on $H$.
² The construction can be viewed as repeatedly breaking off a Beta-distributed fraction $\beta_n$ of the remainder of a unit-length stick.
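The construction gives an (approximate) sampler for $G$ itself: truncate the infinite sum at a large level and keep the weights and atoms. A sketch, again assuming a standard-Gaussian base measure for illustration:

```python
import numpy as np

def stick_breaking(alpha, truncation=1000):
    # pi_n = beta_n * prod_{k<n} (1 - beta_k), with beta_n ~ Beta(1, alpha).
    betas = np.random.beta(1.0, alpha, size=truncation)
    remainder = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    pis = betas * remainder
    thetas = np.random.standard_normal(truncation)   # atoms theta*_n ~ H (assumed N(0, 1))
    return pis, thetas

pis, thetas = stick_breaking(alpha=15.0)
print(pis.sum())   # close to 1 for a long enough (truncated) stick
```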

19 Sampling from a DP using the stick-breaking construction (demo: stick break)
The base measure $H$ is a unit Gaussian with zero mean. Note how the intensity parameter $\alpha$ regulates the number of spikes.
(Figure: the random measure $G$ drawn from a Dirichlet process prior is discrete; (a) $\alpha = 15$, (b) $\alpha = 150$.)

20 Exchangeability (de Finetti's theorem)
Exchangeability is a fundamental property of Dirichlet processes, in particular for inference. An infinite sequence $\theta_1, \theta_2, \theta_3, \dots$ of random variables is said to be infinitely exchangeable if for any finite subset $\theta_{i_1}, \dots, \theta_{i_n}$ and any permutation $\theta_{j_1}, \dots, \theta_{j_n}$ of it, we have
$$P(\theta_{i_1}, \dots, \theta_{i_n}) = P(\theta_{j_1}, \dots, \theta_{j_n}).$$
The condition of exchangeability is weaker than the assumption that the variables are independent and identically distributed. Exchangeable observations are conditionally independent given some (usually) unobserved quantity to which some probability distribution would be assigned. Using de Finetti's theorem, it is possible to show that draws from a DP are infinitely exchangeable.

21 Density estimation with Dirichlet processes
Consider a set of realisations $\{x_n\}_{n=1}^N$ of the random variable $X$. Based on these observations, we would like to model the density of $X$. Would the following setting work?
$$G \sim \mathrm{DP}(\alpha, H), \qquad x_n \mid G \sim G.$$
Obviously not, as $G$ is a discrete distribution (not a density). A solution is to convolve $G \sim \mathrm{DP}(\alpha, H)$ with a kernel $F(\cdot \mid \theta)$:
$$F(\cdot) = \int_{\mathcal{X}} F(\cdot \mid \theta) G(\theta) \, d\theta = \sum_{n=1}^{\infty} \pi_n F(\cdot \mid \theta_n^*),$$
where we used the stick-breaking representation $G(\theta) = \sum_{n=1}^{\infty} \pi_n \delta_{\theta_n^*}(\theta)$. Each data point $x_n$ is modelled by a latent parameter $\theta_n$, which is drawn from an unknown distribution $G$ on which a Dirichlet process prior is imposed:
$$G \sim \mathrm{DP}(\alpha, H), \qquad \theta_n \mid G \sim G, \qquad x_n \mid \theta_n \sim F(\cdot \mid \theta_n).$$
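Putting the pieces together, one can generate data from this DP mixture by drawing truncated stick-breaking weights and atoms, picking an atom per data point, and convolving with the kernel. A sketch in which $F(\cdot \mid \theta)$ is taken to be a Gaussian with mean $\theta$ and an assumed standard deviation sigma, and $H$ is a standard Gaussian (all illustrative choices):

```python
import numpy as np

def sample_dp_mixture_data(N, alpha, sigma=0.5, truncation=1000):
    # Truncated stick-breaking weights and atoms for G.
    betas = np.random.beta(1.0, alpha, size=truncation)
    pis = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    pis /= pis.sum()                                  # renormalise the truncated weights
    atoms = np.random.standard_normal(truncation)     # theta*_n ~ H (assumed N(0, 1))
    # Latent parameter per data point, then convolve with the kernel F(. | theta).
    z = np.random.choice(truncation, size=N, p=pis)
    return atoms[z] + sigma * np.random.standard_normal(N)

x = sample_dp_mixture_data(N=500, alpha=2.0)
```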

22 Finite mixture models
A finite mixture model makes the assumption that each data point $x_n$ was generated by a single mixture component with parameters $\theta_k$:
$$x_n \sim \sum_k \pi_k F(x_n \mid \theta_k).$$
This can be captured by introducing a latent indicator variable $z_n$. The model is formalised by the following priors and likelihood:
$$\theta_k \sim H, \qquad (\pi_1, \dots, \pi_K) \sim \mathcal{D}(\alpha/K, \dots, \alpha/K), \qquad z_n \mid \pi_1, \dots, \pi_K \sim \mathrm{Discrete}(\pi_1, \dots, \pi_K), \qquad x_n \mid \theta_{z_n} \sim F(\cdot \mid \theta_{z_n}).$$
Inference is performed on:
- the hyperparameters of the prior $H$;
- the parameter $\alpha$ of the Dirichlet prior on the weights;
- the number of components $K$.
(Graphical model: $\alpha \to \pi \to z_i \to x_i \leftarrow \theta_k \leftarrow H$, with plates $i = 1, \dots, n$ and $k = 1, \dots, K$.)
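For comparison with the DP mixture, the finite mixture above can be sampled generatively in a few lines; $H$ and $F$ are again assumed Gaussian for illustration:

```python
import numpy as np

def sample_finite_mixture(N, K, alpha, sigma=0.5):
    # H assumed to be a standard Gaussian prior on the component means,
    # and F(. | theta_k) = N(theta_k, sigma^2).
    thetas = np.random.standard_normal(K)                 # theta_k ~ H
    pi = np.random.dirichlet(np.full(K, alpha / K))       # symmetric Dirichlet prior
    z = np.random.choice(K, size=N, p=pi)                 # z_n ~ Discrete(pi)
    x = thetas[z] + sigma * np.random.standard_normal(N)  # x_n ~ F(. | theta_{z_n})
    return x, z, thetas, pi

x, z, thetas, pi = sample_finite_mixture(N=500, K=10, alpha=2.0)
```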

23 DP mixture models
If $K$ is really large, there will be no overfitting, provided the parameters $\{\theta_k\}$ and mixing proportions $\{\pi_k\}$ are integrated out: at most $N$ (but usually far fewer) components will be active, i.e. associated with data. Recall the density estimation approach based on the DP:
$$F(\cdot) = \sum_{k=1}^{\infty} \pi_k F(\cdot \mid \theta_k^*), \qquad G \sim \mathrm{DP}(\alpha, H), \qquad \theta_n \mid G \sim G = \sum_{k=1}^{\infty} \pi_k \delta_{\theta_k^*}, \qquad x_n \mid \theta_n \sim F(\cdot \mid \theta_n).$$
This is equivalent to a mixture model with a countably infinite number of components:
$$(\pi_1, \dots, \pi_K) \sim \mathcal{D}(\alpha/K, \dots, \alpha/K), \qquad z_n \sim \mathrm{Discrete}(\pi_1, \dots, \pi_K), \qquad x_n \mid z_n \sim F(\cdot \mid \theta_{z_n}),$$
in the limit $K \to \infty$.

24 DP mixture models (continued)
DP mixture models are used to sidestep the model selection problem by replacing it with model averaging. They are used in applications in which the number of clusters is not known a priori or is believed to grow without bound as the amount of data grows. DPs can also be used in applications where the number of latent objects is not known or unbounded:
- nonparametric probabilistic context-free grammars;
- visual scene analysis;
- infinite hidden Markov models/trees;
- haplotype inference;
- ...
In many such applications it is important to be able to model the same set of objects in different contexts. This corresponds to the problem of grouped clustering and can be tackled using hierarchical Dirichlet processes (Teh et al., 2006). Inference for DP mixtures depends on the representation:
- MCMC based on the CRP, the stick-breaking construction, etc.;
- variational inference based on the stick-breaking construction (Blei and Jordan, 2006).

25 References
The slides are largely based on:
- Y. W. Teh's tutorial and practical course on Dirichlet processes at the Machine Learning Summer School.
- M. I. Jordan's tutorial on Dirichlet processes at NIPS 2005.
Blackwell, D. and MacQueen, J. B. (1973), Ferguson distributions via Pólya urn schemes, Annals of Statistics, 1.
Blei, D. M. and Jordan, M. I. (2006), Variational inference for Dirichlet process mixtures, Bayesian Analysis, 1(1).
Ferguson, T. S. (1973), A Bayesian analysis of some nonparametric problems, Annals of Statistics, 1(2).
Rasmussen, C. E. (2000), The infinite Gaussian mixture model, Advances in Neural Information Processing Systems, volume 12.
Sethuraman, J. (1994), A constructive definition of Dirichlet priors, Statistica Sinica, 4.
Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006), Hierarchical Dirichlet processes, Journal of the American Statistical Association, 101(476).
