Dirichlet Processes: Tutorial and Practical Course

Dirichlet Processes: Tutorial and Practical Course (updated)
Yee Whye Teh
Gatsby Computational Neuroscience Unit, University College London
August 2007 / MLSS

Dirichlet Processes
Dirichlet processes (DPs) are a class of Bayesian nonparametric models. Dirichlet processes are used for:
Density estimation.
Semiparametric modelling.
Sidestepping model selection/averaging.
I will give a tutorial on DPs, followed by a practical course on implementing DP mixture models in MATLAB.
Prerequisites: understanding of the Bayesian paradigm (graphical models, mixture models, exponential families, Gaussian processes); you should know these from previous courses.
Other tutorials on DPs: Zoubin Ghahramani (UAI), Michael Jordan (NIPS), Volker Tresp (ICML nonparametric Bayes workshop), and the Workshop on Bayesian Nonparametric Regression, Cambridge, July 2007.

Outline
1 Applications
2 Dirichlet Processes
3 Representations of Dirichlet Processes
4 Modelling Data with Dirichlet Processes
5 Practical Course

Function Estimation
Parametric function estimation (e.g. regression, classification).
Data: x = {x_1, x_2, ...}, y = {y_1, y_2, ...}
Model: y_i = f(x_i; w) + N(0, σ²)
Prior over parameters: p(w)
Posterior over parameters: p(w | x, y) = p(w) p(y | x, w) / p(y | x)
Prediction with posteriors: p(y* | x*, x, y) = ∫ p(y* | x*, w) p(w | x, y) dw

Function Estimation
Bayesian nonparametric function estimation with Gaussian processes.
Data: x = {x_1, x_2, ...}, y = {y_1, y_2, ...}
Model: y_i = f(x_i) + N(0, σ²)
Prior over functions: f ~ GP(µ, Σ)
Posterior over functions: p(f | x, y) = p(f) p(y | x, f) / p(y | x)
Prediction with posteriors: p(y* | x*, x, y) = ∫ p(y* | x*, f) p(f | x, y) df

Function Estimation
Figure from Carl's lecture.

Density Estimation
Parametric density estimation (e.g. Gaussian, mixture models).
Data: x = {x_1, x_2, ...}
Model: x_i | w ~ F(· | w)
Prior over parameters: p(w)
Posterior over parameters: p(w | x) = p(w) p(x | w) / p(x)
Prediction with posteriors: p(x* | x) = ∫ p(x* | w) p(w | x) dw

Density Estimation
Bayesian nonparametric density estimation with Dirichlet processes.
Data: x = {x_1, x_2, ...}
Model: x_i ~ F
Prior over distributions: F ~ DP(α, H)
Posterior over distributions: p(F | x) = p(F) p(x | F) / p(x)
Prediction with posteriors: p(x* | x) = ∫ p(x* | F) p(F | x) dF = ∫ F(x*) p(F | x) dF
Not quite correct; see later.

Density Estimation
Prior: (figure of density draws.) Red: mean density. Blue: median density. Grey: 5-95 quantile. Others: draws.

Density Estimation
Posterior: (figure of density draws.) Red: mean density. Blue: median density. Grey: 5-95 quantile. Black: data. Others: draws.

Semiparametric Modelling
Linear regression model for inferring effectiveness of new medical treatments:
y_ij = β x_ij + b_i z_ij + ε_ij
y_ij is the outcome of the jth trial on the ith subject.
x_ij, z_ij are predictors (treatment, dosage, age, health, ...).
β are fixed-effects coefficients.
b_i are random-effects subject-specific coefficients.
ε_ij are noise terms.
We care about inferring β. If x_ij is the treatment, we want to determine p(β > 0 | x, y).

Semiparametric Modelling
y_ij = β x_ij + b_i z_ij + ε_ij
Usually we assume Gaussian noise: ε_ij ~ N(0, σ²). Is this a sensible prior? Over-dispersion, skewness, ...
It may be better to model the noise nonparametrically: ε_ij ~ F, with F ~ DP.
It is also possible to model the subject-specific random effects nonparametrically: b_i ~ G, with G ~ DP.

Model Selection/Averaging
Data: x = {x_1, x_2, ...}
Models: p(θ_k | M_k), p(x | θ_k, M_k)
Marginal likelihood: p(x | M_k) = ∫ p(x | θ_k, M_k) p(θ_k | M_k) dθ_k
Model selection: M = argmax_{M_k} p(x | M_k)
Model averaging: p(x* | x) = Σ_{M_k} p(x* | M_k, x) p(M_k | x), with p(M_k | x) = p(x | M_k) p(M_k) / p(x)
But: is this computationally feasible?

Model Selection/Averaging
The marginal likelihood is usually extremely hard to compute: p(x | M_k) = ∫ p(x | θ_k, M_k) p(θ_k | M_k) dθ_k
Model selection/averaging is meant to prevent underfitting and overfitting. But reasonable and proper Bayesian methods should not overfit [Rasmussen and Ghahramani 2001].
Use a really large model M instead, and let the data speak for themselves.

Model Selection/Averaging
Clustering: how many clusters are there?

Model Selection/Averaging
Spike Sorting: how many neurons are there? [Görür 2007, Wood et al. 2006a]

Model Selection/Averaging
Topic Modelling: how many topics are there? [Blei et al. 2004, Teh et al. 2006]

Model Selection/Averaging
Grammar Induction: how many grammar symbols are there? Figure from Liang. [Liang et al. 2007b, Finkel et al. 2007]

Model Selection/Averaging
Visual Scene Analysis: how many objects, parts, features? Figure from Sudderth. [Sudderth et al. 2007]

Outline
1 Applications
2 Dirichlet Processes
3 Representations of Dirichlet Processes
4 Modelling Data with Dirichlet Processes
5 Practical Course

Finite Mixture Models
A finite mixture model is defined as follows:
θ*_k | H ~ H
π | α ~ Dirichlet(α/K, ..., α/K)
z_i | π ~ Discrete(π)
x_i | θ*_{z_i} ~ F(· | θ*_{z_i})
Model selection/averaging over: hyperparameters in H, the Dirichlet parameter α, and the number of components K. Determining K is hardest.
(Graphical model: plates over data items i = 1, ..., n and components k = 1, ..., K.)
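
To make the generative process concrete, here is a minimal NumPy sketch (the practical course itself uses MATLAB); taking H to be a normal distribution over component means and F(· | θ) to be a fixed-width Gaussian are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, alpha = 5, 200, 1.0

# theta*_k ~ H: here H is (arbitrarily) a normal over component means,
# and F(. | theta) is N(theta, 0.5^2).
theta = rng.normal(loc=0.0, scale=3.0, size=K)

# pi ~ Dirichlet(alpha/K, ..., alpha/K)
pi = rng.dirichlet(np.full(K, alpha / K))

# z_i ~ Discrete(pi), x_i ~ F(. | theta*_{z_i})
z = rng.choice(K, size=n, p=pi)
x = rng.normal(loc=theta[z], scale=0.5)

print("mixing proportions:", np.round(pi, 3))
print("components actually used:", np.unique(z).size)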

Infinite Mixture Models
Imagine that K is really large.
If the parameters θ*_k and mixing proportions π are integrated out, the number of latent variables left does not grow with K, so there is no overfitting.
At most n components will be associated with data, aka active. Usually, the number of active components is much less than n.
This gives an infinite mixture model. Demo: dpm_demo2d
Issue 1: can we take the limit K → ∞?
Issue 2: what is the corresponding limiting model? [Rasmussen 2000]
(Graphical model: plates over i = 1, ..., n and k = 1, ..., K.)

Gaussian Processes
What are they?
A Gaussian process (GP) is a distribution over functions f : X → R.
Denote f ~ GP if f is a GP-distributed random function.
For any finite set of input points x_1, ..., x_n, we require (f(x_1), ..., f(x_n)) to be a multivariate Gaussian.

Gaussian Processes
What are they?
The GP is parametrized by its mean m(x) and covariance c(x, y) functions:
(f(x_1), ..., f(x_n)) ~ N( (m(x_1), ..., m(x_n)), C ), where C_ij = c(x_i, x_j).
The above are finite-dimensional marginal distributions of the GP.
A salient property of these marginal distributions is that they are consistent: integrating out variables preserves the parametric form of the marginal distributions above.

Gaussian Processes
Visualizing Gaussian Processes.
Take a sequence of input points x_1, x_2, x_3, ... dense in X.
Draw f(x_1), then f(x_2) | f(x_1), then f(x_3) | f(x_1), f(x_2), and so on.
Each conditional distribution is Gaussian since (f(x_1), ..., f(x_n)) is Gaussian.
Demo: GPgenerate
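
The GPgenerate demo itself is not reproduced here; the following NumPy sketch illustrates the same sequential construction, assuming (as an arbitrary choice) a zero mean function and a squared-exponential covariance:

```python
import numpy as np

def sqexp(a, b, ell=1.0):
    """Squared-exponential covariance (an assumed choice of c(x, y))."""
    return np.exp(-0.5 * (a - b) ** 2 / ell ** 2)

rng = np.random.default_rng(1)
xs = np.linspace(-3.0, 3.0, 40)     # a "dense" sequence of inputs
f = np.empty(0)

for i, x in enumerate(xs):
    if i == 0:
        mean, var = 0.0, sqexp(x, x)
    else:
        K = sqexp(xs[:i, None], xs[None, :i]) + 1e-9 * np.eye(i)
        k = sqexp(x, xs[:i])
        sol = np.linalg.solve(K, k)
        mean = sol @ f                       # E[f(x) | f(x_1), ..., f(x_{i-1})]
        var = sqexp(x, x) - sol @ k          # Var[f(x) | f(x_1), ..., f(x_{i-1})]
    f = np.append(f, mean + np.sqrt(max(var, 0.0)) * rng.standard_normal())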

Dirichlet Processes
Start with Dirichlet distributions.
A Dirichlet distribution is a distribution over the K-dimensional probability simplex:
Δ_K = { (π_1, ..., π_K) : π_k ≥ 0, Σ_k π_k = 1 }
We say (π_1, ..., π_K) is Dirichlet distributed with parameters (α_1, ..., α_K),
(π_1, ..., π_K) ~ Dirichlet(α_1, ..., α_K),
if p(π_1, ..., π_K) = ( Γ(Σ_k α_k) / Π_k Γ(α_k) ) Π_{k=1}^K π_k^(α_k − 1).

Dirichlet Processes
Examples of Dirichlet distributions. (Figure.)

Dirichlet Processes
Agglomerative property of Dirichlet distributions.
Combining entries of probability vectors preserves the Dirichlet property, for example:
(π_1, ..., π_K) ~ Dirichlet(α_1, ..., α_K)
⇒ (π_1 + π_2, π_3, ..., π_K) ~ Dirichlet(α_1 + α_2, α_3, ..., α_K)
Generally, if (I_1, ..., I_j) is a partition of {1, ..., K}:
( Σ_{i∈I_1} π_i, ..., Σ_{i∈I_j} π_i ) ~ Dirichlet( Σ_{i∈I_1} α_i, ..., Σ_{i∈I_j} α_i )
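
A quick Monte Carlo sanity check of the agglomerative property (an illustrative sketch with arbitrarily chosen parameters): merging the first two coordinates of Dirichlet(1, 2, 3, 4) samples should match direct samples from Dirichlet(3, 3, 4) in distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = np.array([1.0, 2.0, 3.0, 4.0])
S = 200_000

pi = rng.dirichlet(alpha, size=S)                    # samples of (pi_1, ..., pi_4)
merged = np.column_stack([pi[:, 0] + pi[:, 1], pi[:, 2], pi[:, 3]])
direct = rng.dirichlet(np.array([alpha[0] + alpha[1], alpha[2], alpha[3]]), size=S)

# Means and variances of the two sample sets should agree up to Monte Carlo error.
print(merged.mean(axis=0), direct.mean(axis=0))
print(merged.var(axis=0), direct.var(axis=0))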

Dirichlet Processes
Decimative property of Dirichlet distributions.
The converse of the agglomerative property is also true; for example, if
(π_1, ..., π_K) ~ Dirichlet(α_1, ..., α_K)
(τ_1, τ_2) ~ Dirichlet(α_1 β_1, α_1 β_2) with β_1 + β_2 = 1,
then (π_1 τ_1, π_1 τ_2, π_2, ..., π_K) ~ Dirichlet(α_1 β_1, α_1 β_2, α_2, ..., α_K).

Dirichlet Processes
Visualizing Dirichlet Processes
A Dirichlet process (DP) is an infinitely decimated Dirichlet distribution:
1 ~ Dirichlet(α)
(π_1, π_2) ~ Dirichlet(α/2, α/2), with π_1 + π_2 = 1
(π_11, π_12, π_21, π_22) ~ Dirichlet(α/4, α/4, α/4, α/4), with π_i1 + π_i2 = π_i
...
Each decimation step involves drawing from a Beta distribution (a Dirichlet with 2 components) and multiplying into the relevant entry.
Demo: DPgenerate
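
This decimation procedure is easy to simulate; the sketch below (not the course's DPgenerate demo) approximates a DP draw over [0, 1] by repeatedly splitting every bin with Beta draws of the appropriate parameters:

```python
import numpy as np

def dp_decimation(alpha, levels, rng):
    """Approximate a DP draw over [0, 1] by repeated decimation.

    After `levels` steps the unit interval is split into 2**levels equal bins,
    with bin masses jointly Dirichlet(alpha/2**levels, ..., alpha/2**levels).
    """
    masses = np.array([1.0])
    for level in range(1, levels + 1):
        a = alpha / 2 ** level
        splits = rng.beta(a, a, size=masses.size)      # Beta = 2-component Dirichlet
        masses = np.column_stack([masses * splits,
                                  masses * (1 - splits)]).ravel()
    return masses

rng = np.random.default_rng(3)
G = dp_decimation(alpha=5.0, levels=10, rng=rng)       # 1024 bins
print(G.sum(), np.sort(G)[-5:])                        # total mass 1, a few dominant atoms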

Dirichlet Processes
A Proper but Non-Constructive Definition
A probability measure is a function from subsets of a space X to [0, 1] satisfying certain properties.
A Dirichlet process (DP) is a distribution over probability measures. Denote G ~ DP if G is a DP-distributed random probability measure.
For any finite partition A_1 ∪ ... ∪ A_K = X, we require (G(A_1), ..., G(A_K)) to be Dirichlet distributed.
(Figure: a partition of X into cells A_1, ..., A_6.)

Dirichlet Processes
Parameters of the Dirichlet Process
A DP has two parameters:
Base distribution H, which is like the mean of the DP.
Strength parameter α, which is like an inverse variance of the DP.
We write G ~ DP(α, H) if, for any partition (A_1, ..., A_K) of X:
(G(A_1), ..., G(A_K)) ~ Dirichlet(αH(A_1), ..., αH(A_K))

Dirichlet Processes
Parameters of the Dirichlet Process
A DP has two parameters:
Base distribution H, which is like the mean of the DP.
Strength parameter α, which is like an inverse variance of the DP.
We write G ~ DP(α, H). The first two cumulants of the DP are:
Expectation: E[G(A)] = H(A)
Variance: V[G(A)] = H(A)(1 − H(A)) / (α + 1)
where A is any measurable subset of X.
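
These two moments follow directly from the finite-dimensional Dirichlet marginals; a short derivation (not spelled out on the slide):

```latex
% Take the two-cell partition (A, X \ A). Then
% (G(A), G(X \ A)) ~ Dirichlet(alpha H(A), alpha (1 - H(A))),
% so G(A) ~ Beta(alpha H(A), alpha (1 - H(A))), and the Beta moments give
\[
  \mathbb{E}[G(A)] = \frac{\alpha H(A)}{\alpha H(A) + \alpha(1 - H(A))} = H(A),
\qquad
  \mathbb{V}[G(A)] = \frac{\alpha H(A)\,\alpha(1 - H(A))}{\alpha^2(\alpha + 1)}
                   = \frac{H(A)(1 - H(A))}{\alpha + 1}.
\]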

Dirichlet Processes
Existence of Dirichlet Processes
A probability measure is a function from subsets of a space X to [0, 1] satisfying certain properties. A DP is a distribution over probability measures such that marginals on finite partitions are Dirichlet distributed. How do we know that such an object exists?
Kolmogorov Consistency Theorem [Ferguson 1973].
de Finetti's Theorem: Blackwell-MacQueen urn scheme, Chinese restaurant process [Blackwell and MacQueen 1973, Aldous 1985].
Stick-breaking Construction [Sethuraman 1994].
Gamma Process [Ferguson 1973].

Outline
1 Applications
2 Dirichlet Processes
3 Representations of Dirichlet Processes
4 Modelling Data with Dirichlet Processes
5 Practical Course

Dirichlet Processes
Representations of Dirichlet Processes
Suppose G ~ DP(α, H). G is a (random) probability measure over X. We can treat it as a distribution over X.
Let θ_1, ..., θ_n ~ G be random variables with distribution G.
We saw in the demo that draws from a Dirichlet process seem to be discrete distributions. If so, then
G = Σ_{k=1}^∞ π_k δ_{θ*_k}
and there is positive probability that the θ_i's take on the same value θ*_k for some k, i.e. the θ_i's cluster together.
In this section we are concerned with representations of Dirichlet processes based upon both the clustering property and the sum of point masses.

Posterior Dirichlet Processes
Sampling from a Dirichlet Process
Suppose G is Dirichlet process distributed: G ~ DP(α, H).
G is a (random) probability measure over X. We can treat it as a distribution over X. Let θ | G ~ G be a random variable with distribution G.
We are interested in:
p(θ) = ∫ p(θ | G) p(G) dG
p(G | θ) = p(θ | G) p(G) / p(θ)

Posterior Dirichlet Processes
Conjugacy between the Dirichlet Distribution and the Multinomial
Consider:
(π_1, ..., π_K) ~ Dirichlet(α_1, ..., α_K)
z | (π_1, ..., π_K) ~ Discrete(π_1, ..., π_K)
z is a multinomial variate, taking on value k ∈ {1, ..., K} with probability π_k.
Then:
z ~ Discrete( α_1 / Σ_i α_i, ..., α_K / Σ_i α_i )
(π_1, ..., π_K) | z ~ Dirichlet(α_1 + δ_1(z), ..., α_K + δ_K(z))
where δ_k(z) = 1 if z takes on value k, and 0 otherwise. The converse is also true.

Posterior Dirichlet Processes
Fix a partition (A_1, ..., A_K) of X. Then
(G(A_1), ..., G(A_K)) ~ Dirichlet(αH(A_1), ..., αH(A_K))
P(θ ∈ A_i | G) = G(A_i)
Using Dirichlet-multinomial conjugacy,
P(θ ∈ A_i) = H(A_i)
(G(A_1), ..., G(A_K)) | θ ~ Dirichlet(αH(A_1) + δ_θ(A_1), ..., αH(A_K) + δ_θ(A_K))
The above is true for every finite partition of X. In particular, taking a really fine partition,
p(θ) dθ = H(dθ)
Also, the posterior G | θ is again a Dirichlet process:
G | θ ~ DP( α + 1, (αH + δ_θ) / (α + 1) )

Posterior Dirichlet Processes
G ~ DP(α, H), θ | G ~ G
is equivalent to
θ ~ H, G | θ ~ DP( α + 1, (αH + δ_θ) / (α + 1) )

Blackwell-MacQueen Urn Scheme
First sample: θ_1 | G ~ G, G ~ DP(α, H)
⇒ θ_1 ~ H, G | θ_1 ~ DP(α + 1, (αH + δ_{θ_1}) / (α + 1))
Second sample: θ_2 | θ_1, G ~ G, G | θ_1 ~ DP(α + 1, (αH + δ_{θ_1}) / (α + 1))
⇒ θ_2 | θ_1 ~ (αH + δ_{θ_1}) / (α + 1), G | θ_1, θ_2 ~ DP(α + 2, (αH + δ_{θ_1} + δ_{θ_2}) / (α + 2))
nth sample: θ_n | θ_{1:n−1}, G ~ G, G | θ_{1:n−1} ~ DP(α + n − 1, (αH + Σ_{i=1}^{n−1} δ_{θ_i}) / (α + n − 1))
⇒ θ_n | θ_{1:n−1} ~ (αH + Σ_{i=1}^{n−1} δ_{θ_i}) / (α + n − 1), G | θ_{1:n} ~ DP(α + n, (αH + Σ_{i=1}^{n} δ_{θ_i}) / (α + n))

Blackwell-MacQueen Urn Scheme
The Blackwell-MacQueen urn scheme produces a sequence θ_1, θ_2, ... with the following conditionals:
θ_n | θ_{1:n−1} ~ (αH + Σ_{i=1}^{n−1} δ_{θ_i}) / (α + n − 1)
Picking balls of different colors from an urn:
Start with no balls in the urn.
With probability proportional to α, draw θ_n ~ H, and add a ball of that color into the urn.
With probability proportional to n − 1, pick a ball at random from the urn, record θ_n to be its color, return the ball into the urn and place a second ball of the same color into the urn.
The Blackwell-MacQueen urn scheme is like a "representer" for the DP: a finite projection of an infinite object.
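
A minimal NumPy sketch of this urn scheme, assuming (purely for illustration) that the base distribution H is a standard normal:

```python
import numpy as np

def blackwell_macqueen(n, alpha, rng, base=lambda rng: rng.standard_normal()):
    """Draw theta_1, ..., theta_n from the Blackwell-MacQueen urn scheme.

    `base` samples from H; a standard normal is used here only for illustration.
    """
    thetas = []
    for i in range(n):                                # i = number of balls already in the urn
        if rng.random() < alpha / (alpha + i):
            thetas.append(base(rng))                  # new draw from H
        else:
            thetas.append(thetas[rng.integers(i)])    # copy the color of a previous ball
    return np.array(thetas)

rng = np.random.default_rng(4)
theta = blackwell_macqueen(1000, alpha=2.0, rng=rng)
print("distinct values among 1000 draws:", np.unique(theta).size)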

Exchangeability and de Finetti's Theorem
Starting with a DP, we constructed the Blackwell-MacQueen urn scheme. The reverse is possible using de Finetti's Theorem.
Since the θ_i are iid given G, their joint distribution is invariant to permutations; thus θ_1, θ_2, ... are exchangeable.
Thus a distribution over measures must exist making them iid. This is the DP.

Chinese Restaurant Process
Draw θ_1, ..., θ_n from a Blackwell-MacQueen urn scheme. They take on K ≤ n distinct values, say θ*_1, ..., θ*_K.
This defines a partition of 1, ..., n into K clusters, such that if i is in cluster k, then θ_i = θ*_k.
Random draws θ_1, ..., θ_n from a Blackwell-MacQueen urn scheme thus induce a random partition of 1, ..., n.
The induced distribution over partitions is a Chinese restaurant process (CRP).

Chinese Restaurant Process
Generating from the CRP:
The first customer sits at the first table.
Customer n sits at:
Table k with probability n_k / (α + n − 1), where n_k is the number of customers already at table k.
A new table K + 1 with probability α / (α + n − 1).
Customers correspond to integers, tables to clusters.
The CRP exhibits the clustering property of the DP.

Chinese Restaurant Process
To get back from the CRP to the Blackwell-MacQueen urn scheme, simply draw θ*_k ~ H for k = 1, ..., K, then for i = 1, ..., n set θ_i = θ*_{z_i}, where z_i is the table that customer i sat at.
The CRP teases apart the clustering property of the DP from the base distribution.

Stick-breaking Construction
Returning to the posterior process:
G ~ DP(α, H), θ | G ~ G, which is equivalent to θ ~ H, G | θ ~ DP(α + 1, (αH + δ_θ) / (α + 1))
Consider the partition ({θ}, X \ {θ}) of X. We have:
(G({θ}), G(X \ {θ})) ~ Dirichlet( (α + 1) (αH + δ_θ)/(α + 1) ({θ}), (α + 1) (αH + δ_θ)/(α + 1) (X \ {θ}) ) = Dirichlet(1, α)
So G has a point mass located at θ:
G = β δ_θ + (1 − β) G', with β ~ Beta(1, α)
and G' is the (renormalized) probability measure with the point mass removed. What is G'?

Stick-breaking Construction
Currently, we have:
G ~ DP(α, H), θ | G ~ G
⇒ G | θ ~ DP(α + 1, (αH + δ_θ) / (α + 1)), G = β δ_θ + (1 − β) G', θ ~ H, β ~ Beta(1, α)
Consider a further partition ({θ}, A_1, ..., A_K) of X:
(G({θ}), G(A_1), ..., G(A_K)) = (β, (1 − β) G'(A_1), ..., (1 − β) G'(A_K)) ~ Dirichlet(1, αH(A_1), ..., αH(A_K))
The agglomerative/decimative property of the Dirichlet implies:
(G'(A_1), ..., G'(A_K)) ~ Dirichlet(αH(A_1), ..., αH(A_K))
G' ~ DP(α, H)

Stick-breaking Construction
We have G ~ DP(α, H), and
G = β_1 δ_{θ*_1} + (1 − β_1) G_1
  = β_1 δ_{θ*_1} + (1 − β_1)(β_2 δ_{θ*_2} + (1 − β_2) G_2)
  = ...
  = Σ_{k=1}^∞ π_k δ_{θ*_k}
where π_k = β_k Π_{i=1}^{k−1} (1 − β_i), β_k ~ Beta(1, α), θ*_k ~ H.
This is the stick-breaking construction. Demo: SBgenerate
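
The SBgenerate demo itself is not reproduced here; the following NumPy sketch draws the weights and atoms of a truncated stick-breaking representation, with H taken (arbitrarily) to be a standard normal:

```python
import numpy as np

def stick_breaking(alpha, rng, truncation=1000):
    """Truncated stick-breaking draw G = sum_k pi_k delta_{theta*_k}, with H = N(0, 1)."""
    beta = rng.beta(1.0, alpha, size=truncation)
    # pi_k = beta_k * prod_{i<k} (1 - beta_i)
    pi = beta * np.concatenate([[1.0], np.cumprod(1.0 - beta)[:-1]])
    theta = rng.standard_normal(truncation)            # theta*_k ~ H (illustrative H)
    return pi, theta

rng = np.random.default_rng(5)
pi, theta = stick_breaking(alpha=5.0, rng=rng)
print("total mass kept by the truncation:", pi.sum())   # close to 1 for a long enough stick
print("five largest atoms:", np.sort(pi)[-5:][::-1])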

Stick-breaking Construction
Starting with a DP, we showed that draws from the DP look like a sum of point masses, with masses drawn from a stick-breaking construction. The steps are limited by assumptions of regularity on X and smoothness on H.
[Sethuraman 1994] started with the stick-breaking construction, and showed that draws are indeed DP distributed, under very general conditions.

Dirichlet Processes
Representations of Dirichlet Processes
Posterior Dirichlet process: G ~ DP(α, H), θ | G ~ G, equivalently θ ~ H, G | θ ~ DP(α + 1, (αH + δ_θ) / (α + 1))
Blackwell-MacQueen urn scheme: θ_n | θ_{1:n−1} ~ (αH + Σ_{i=1}^{n−1} δ_{θ_i}) / (α + n − 1)
Chinese restaurant process: p(customer n sat at table k | past) = n_k / (n − 1 + α) if table k is occupied, α / (n − 1 + α) if table k is new.
Stick-breaking construction: π_k = β_k Π_{i=1}^{k−1} (1 − β_i), β_k ~ Beta(1, α), θ*_k ~ H, G = Σ_{k=1}^∞ π_k δ_{θ*_k}

Outline
1 Applications
2 Dirichlet Processes
3 Representations of Dirichlet Processes
4 Modelling Data with Dirichlet Processes
5 Practical Course

Density Estimation
Recall our approach to density estimation with Dirichlet processes:
G ~ DP(α, H), x_i ~ G
The above does not work. Why? Problem: G is a discrete distribution; in particular it has no density!
Solution: convolve the DP with a smooth distribution:
G ~ DP(α, H)
F_x(·) = ∫ F(· | θ) dG(θ)
x_i ~ F_x
Equivalently, with G = Σ_{k=1}^∞ π_k δ_{θ*_k}:
F_x(·) = Σ_{k=1}^∞ π_k F(· | θ*_k)
x_i ~ F_x
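
To make this concrete, here is a small NumPy sketch that draws one random density F_x from this prior using a truncated stick-breaking representation; the choices of H as N(0, 5²) over component means and a fixed Gaussian kernel width σ = 1 are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, truncation = 2.0, 2000

# Truncated stick-breaking weights and atoms theta*_k ~ H.
beta = rng.beta(1.0, alpha, size=truncation)
pi = beta * np.concatenate([[1.0], np.cumprod(1.0 - beta)[:-1]])
mu = rng.normal(0.0, 5.0, size=truncation)   # H over component means (assumed N(0, 5^2))
sigma = 1.0                                   # fixed kernel width, for simplicity

# Random density F_x(x) = sum_k pi_k Normal(x; mu_k, sigma^2), evaluated on a grid.
grid = np.linspace(-15, 15, 400)
density = (pi[:, None] *
           np.exp(-0.5 * ((grid[None, :] - mu[:, None]) / sigma) ** 2) /
           (sigma * np.sqrt(2 * np.pi))).sum(axis=0)
print("integrates to roughly 1:", np.trapz(density, grid))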

Density Estimation
(Figure: density draws.)
F(· | µ, Σ) is Gaussian with mean µ and covariance Σ. H(µ, Σ) is a Gaussian-inverse-Wishart conjugate prior.

Clustering
Recall our approach to density estimation:
G = Σ_{k=1}^∞ π_k δ_{θ*_k} ~ DP(α, H)
F_x(·) = Σ_{k=1}^∞ π_k F(· | θ*_k)
x_i ~ F_x
The above model is equivalent to:
z_i ~ Discrete(π)
θ_i = θ*_{z_i}
x_i | z_i ~ F(· | θ_i) = F(· | θ*_{z_i})
This is simply a mixture model with an infinite number of components. It is called a DP mixture model.

Clustering
DP mixture models are used in a variety of clustering applications, where the number of clusters is not known a priori.
They are also used in applications in which we believe the number of clusters grows without bound as the amount of data grows.
DPs have also found uses in applications beyond clustering, where the number of latent objects is not known or unbounded:
Nonparametric probabilistic context-free grammars.
Visual scene analysis.
Infinite hidden Markov models/trees.
Haplotype inference.
...
In many such applications it is important to be able to model the same set of objects in different contexts. This corresponds to the problem of grouped clustering and can be tackled using hierarchical Dirichlet processes. [Teh et al. 2006]

Grouped Clustering
(Figure.)

Hierarchical Dirichlet Processes
Hierarchical Dirichlet process:
G_0 | γ, H ~ DP(γ, H)
G_j | α, G_0 ~ DP(α, G_0) for j = 1, ..., J
θ_ji | G_j ~ G_j for i = 1, ..., n
(Graphical model: γ and H at the top, a shared G_0, and G_j, θ_ji inside plates over j = 1, ..., J and i = 1, ..., n.)

Hierarchical Dirichlet Processes
G_0 | γ, H ~ DP(γ, H):
G_0 = Σ_{k=1}^∞ β_k δ_{θ*_k}, with β | γ ~ Stick(γ), θ*_k ~ H
G_j | α, G_0 ~ DP(α, G_0):
G_j = Σ_{k=1}^∞ π_jk δ_{θ*_k}, with π_j | α, β ~ DP(α, β)
(Figure: draws of G_0, G_1, G_2 sharing the same atoms.)
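
A rough sense of the sharing can be had from the following NumPy sketch; it uses a finite truncation at K atoms, under which π_j | α, β is approximated by a Dirichlet(αβ_1, ..., αβ_K) draw. The truncation level, the values of γ and α, and the choice H = N(0, 1) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma_, alpha, K, J = 5.0, 3.0, 50, 4       # truncation level K, J groups

# Top level: beta ~ Stick(gamma), shared atoms theta*_k ~ H (H assumed N(0, 1)).
v = rng.beta(1.0, gamma_, size=K)
beta = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
theta = rng.standard_normal(K)

# Group level: under the truncation, pi_j | alpha, beta is approximately
# Dirichlet(alpha * beta_1, ..., alpha * beta_K).
pi = np.vstack([rng.dirichlet(alpha * beta) for _ in range(J)])

# All groups reuse the same atoms theta*_k, but with their own weights pi_j.
print("top 3 atoms per group:", np.argsort(-pi, axis=1)[:, :3])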

Summary
The Dirichlet process is just a glorified Dirichlet distribution. Draws from a DP are probability measures consisting of a weighted sum of point masses.
Many representations: Blackwell-MacQueen urn scheme, Chinese restaurant process, stick-breaking construction.
DP mixture models are mixture models with a countably infinite number of components.
I have not delved into: applications; generalizations, extensions, other nonparametric processes; inference (MCMC sampling, variational approximation).
Also see the tutorial material from Ghahramani, Jordan and Tresp.

Bayesian Nonparametrics
Parametric models can only capture a bounded amount of information from data, since they have bounded complexity. Real data is often complex, and the parametric assumption is often wrong.
Nonparametric models allow relaxation of the parametric assumption, bringing significant flexibility to our models of the world.
Nonparametric models can also often lead to model selection/averaging behaviours without the cost of actually doing model selection/averaging.
Nonparametric models are gaining popularity, spurred by growth in computational resources and inference algorithms.
In addition to DPs, HDPs and their generalizations, other nonparametric models include Indian buffet processes, beta processes, tree processes...
[Tutorials at the Workshop on Bayesian Nonparametric Regression, Isaac Newton Institute, Cambridge, July 2007]

Outline
1 Applications
2 Dirichlet Processes
3 Representations of Dirichlet Processes
4 Modelling Data with Dirichlet Processes
5 Practical Course

Exploring the Dirichlet Process
Before using DPs, it is important to understand their properties, so that we understand what prior assumptions we are imposing on our models.
In this practical course we shall work towards implementing a DP mixture model to cluster NIPS papers, thus the relevant properties are the clustering properties of the DP.
Consider the Chinese restaurant process representation of DPs:
The first customer sits at the first table.
Customer n sits at:
Table k with probability n_k / (α + n − 1), where n_k is the number of customers at table k.
A new table K + 1 with probability α / (α + n − 1).
How does the number of clusters K scale as a function of α and of n (on average)?
How does the number n_k of customers sitting around table k depend on k and n (on average)?
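
One quick way to answer the first question empirically (a sketch, not part of the original practical materials) is to simulate the CRP and count tables; the expected number of tables is E[K] = Σ_{i=1}^n α/(α + i − 1), which grows roughly like α log n:

```python
import numpy as np

def crp_table_counts(n, alpha, rng):
    """Simulate a Chinese restaurant process; return the table sizes n_k."""
    counts = []
    for i in range(n):                        # customer i+1 arrives; i already seated
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()                  # n_k / (alpha + i) or alpha / (alpha + i)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                  # open a new table
        else:
            counts[k] += 1                    # join existing table k
    return np.array(counts)

rng = np.random.default_rng(8)
for n in (100, 1000, 10000):
    K = np.mean([len(crp_table_counts(n, alpha=1.0, rng=rng)) for _ in range(20)])
    print(n, K, np.log(n))                    # K grows roughly like alpha * log n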

Exploring the Pitman-Yor Process
Sometimes the assumptions embedded in using DPs to model data are inappropriate. The Pitman-Yor process is a generalization of the DP that often has more appropriate properties.
It has two parameters: d and α, with 0 ≤ d < 1 and α > −d. When d = 0 the Pitman-Yor process reduces to a DP.
It also has a Chinese restaurant process representation:
The first customer sits at the first table.
Customer n sits at:
Table k with probability (n_k − d) / (α + n − 1), where n_k is the number of customers at table k.
A new table K + 1 with probability (α + Kd) / (α + n − 1).
How does K scale as a function of n, α and d (on average)?
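
The same experiment can be run for the Pitman-Yor CRP (again an illustrative sketch, with arbitrary parameter values); for d > 0 the number of tables grows like a power of n rather than logarithmically:

```python
import numpy as np

def pitman_yor_K(n, alpha, d, rng):
    """Number of tables after n customers in the Pitman-Yor CRP."""
    counts = []
    for i in range(n):
        # Seating weights: (n_k - d) for each table, (alpha + K d) for a new one.
        weights = np.array([c - d for c in counts] + [alpha + d * len(counts)])
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    return len(counts)

rng = np.random.default_rng(9)
for d in (0.0, 0.5, 0.9):
    Ks = [pitman_yor_K(2000, alpha=1.0, d=d, rng=rng) for _ in range(10)]
    print(d, np.mean(Ks))        # d = 0 recovers the DP; larger d gives many more tables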

Dirichlet Process Mixture Models
We model a data set x_1, ..., x_n using the following model:
G ~ DP(α, H)
θ_i | G ~ G
x_i | θ_i ~ F(· | θ_i) for i = 1, ..., n
Each θ_i is a latent parameter modelling x_i, while G is the unknown distribution over parameters, modelled using a DP.
This is the basic DP mixture model. Implement a DP mixture model.

Dirichlet Process Mixture Models
Infinite Limit of Finite Mixture Models
Different representations lead to different inference algorithms for DP mixture models. The most common are based on the Chinese restaurant process and on the stick-breaking construction.
Here we shall work with the Chinese restaurant process representation, which, incidentally, can also be derived as the infinite limit of finite mixture models.
A finite mixture model is defined as follows:
θ*_k ~ H for k = 1, ..., K
π ~ Dirichlet(α/K, ..., α/K)
z_i | π ~ Discrete(π) for i = 1, ..., n
x_i | θ*_{z_i} ~ F(· | θ*_{z_i})
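
As a starting point for the practical (which is in MATLAB; this sketch uses Python/NumPy instead), here is a minimal collapsed Gibbs sampler for a DP mixture based on the CRP representation. It assumes, purely for illustration, one-dimensional data, F(· | θ) = N(θ, σ²) with σ known, and a conjugate base measure H = N(μ0, τ0²), so that θ can be integrated out in closed form; an implementation for the NIPS-papers data would likely use a multinomial/Dirichlet model instead:

```python
import numpy as np
from math import log, pi

rng = np.random.default_rng(10)

# --- Illustrative model assumptions: 1-D data, F(. | theta) = N(theta, sigma^2)
# with sigma known, and conjugate base measure H = N(mu0, tau0^2) over theta.
sigma, mu0, tau0, alpha = 1.0, 0.0, 10.0, 1.0

# Toy data: three well-separated Gaussian bumps.
x = np.concatenate([rng.normal(-6, 1, 50), rng.normal(0, 1, 50), rng.normal(7, 1, 50)])
n = x.size

def log_pred(xi, s, m):
    """log p(x_i | cluster with sufficient stats (sum s, count m)), theta integrated out."""
    post_var = 1.0 / (1.0 / tau0**2 + m / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + s / sigma**2)
    v = post_var + sigma**2
    return -0.5 * log(2 * pi * v) - 0.5 * (xi - post_mean) ** 2 / v

# State: cluster assignment per point plus per-cluster sufficient statistics.
z = np.zeros(n, dtype=int)
sums, counts = [x.sum()], [n]

for sweep in range(100):
    for i in range(n):
        k = z[i]
        sums[k] -= x[i]; counts[k] -= 1
        if counts[k] == 0:                     # drop the now-empty cluster, relabel
            sums.pop(k); counts.pop(k)
            z[z > k] -= 1
        # CRP prior times predictive likelihood, for existing tables and a new one.
        logp = [log(counts[k]) + log_pred(x[i], sums[k], counts[k])
                for k in range(len(counts))]
        logp.append(log(alpha) + log_pred(x[i], 0.0, 0))
        p = np.exp(np.array(logp) - max(logp)); p /= p.sum()
        k = rng.choice(len(p), p=p)
        if k == len(counts):
            sums.append(0.0); counts.append(0)
        z[i] = k
        sums[k] += x[i]; counts[k] += 1

print("cluster sizes after sampling:", sorted(counts, reverse=True))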


More information

Hierarchical Dirichlet Processes

Hierarchical Dirichlet Processes Hierarchical Dirichlet Processes Yee Whye Teh, Michael I. Jordan, Matthew J. Beal and David M. Blei Computer Science Div., Dept. of Statistics Dept. of Computer Science University of California at Berkeley

More information

Probabilistic Graphical Models

Probabilistic Graphical Models School of Computer Science Probabilistic Graphical Models Infinite Feature Models: The Indian Buffet Process Eric Xing Lecture 21, April 2, 214 Acknowledgement: slides first drafted by Sinead Williamson

More information

Nonparmeteric Bayes & Gaussian Processes. Baback Moghaddam Machine Learning Group

Nonparmeteric Bayes & Gaussian Processes. Baback Moghaddam Machine Learning Group Nonparmeteric Bayes & Gaussian Processes Baback Moghaddam baback@jpl.nasa.gov Machine Learning Group Outline Bayesian Inference Hierarchical Models Model Selection Parametric vs. Nonparametric Gaussian

More information

Lecture 16-17: Bayesian Nonparametrics I. STAT 6474 Instructor: Hongxiao Zhu

Lecture 16-17: Bayesian Nonparametrics I. STAT 6474 Instructor: Hongxiao Zhu Lecture 16-17: Bayesian Nonparametrics I STAT 6474 Instructor: Hongxiao Zhu Plan for today Why Bayesian Nonparametrics? Dirichlet Distribution and Dirichlet Processes. 2 Parameter and Patterns Reference:

More information

Haupthseminar: Machine Learning. Chinese Restaurant Process, Indian Buffet Process

Haupthseminar: Machine Learning. Chinese Restaurant Process, Indian Buffet Process Haupthseminar: Machine Learning Chinese Restaurant Process, Indian Buffet Process Agenda Motivation Chinese Restaurant Process- CRP Dirichlet Process Interlude on CRP Infinite and CRP mixture model Estimation

More information

Applied Nonparametric Bayes

Applied Nonparametric Bayes Applied Nonparametric Bayes Michael I. Jordan Department of Electrical Engineering and Computer Science Department of Statistics University of California, Berkeley http://www.cs.berkeley.edu/ jordan Acknowledgments:

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

Hierarchical Bayesian Languge Model Based on Pitman-Yor Processes. Yee Whye Teh

Hierarchical Bayesian Languge Model Based on Pitman-Yor Processes. Yee Whye Teh Hierarchical Bayesian Languge Model Based on Pitman-Yor Processes Yee Whye Teh Probabilistic model of language n-gram model Utility i-1 P(word i word i-n+1 ) Typically, trigram model (n=3) e.g., speech,

More information

Unsupervised Learning

Unsupervised Learning Unsupervised Learning Bayesian Model Comparison Zoubin Ghahramani zoubin@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit, and MSc in Intelligent Systems, Dept Computer Science University College

More information

CSCI 5822 Probabilistic Model of Human and Machine Learning. Mike Mozer University of Colorado

CSCI 5822 Probabilistic Model of Human and Machine Learning. Mike Mozer University of Colorado CSCI 5822 Probabilistic Model of Human and Machine Learning Mike Mozer University of Colorado Topics Language modeling Hierarchical processes Pitman-Yor processes Based on work of Teh (2006), A hierarchical

More information

Foundations of Nonparametric Bayesian Methods

Foundations of Nonparametric Bayesian Methods 1 / 27 Foundations of Nonparametric Bayesian Methods Part II: Models on the Simplex Peter Orbanz http://mlg.eng.cam.ac.uk/porbanz/npb-tutorial.html 2 / 27 Tutorial Overview Part I: Basics Part II: Models

More information

Hierarchical Models, Nested Models and Completely Random Measures

Hierarchical Models, Nested Models and Completely Random Measures See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/238729763 Hierarchical Models, Nested Models and Completely Random Measures Article March 2012

More information

Bayesian non-parametric model to longitudinally predict churn

Bayesian non-parametric model to longitudinally predict churn Bayesian non-parametric model to longitudinally predict churn Bruno Scarpa Università di Padova Conference of European Statistics Stakeholders Methodologists, Producers and Users of European Statistics

More information

Bayesian Hidden Markov Models and Extensions

Bayesian Hidden Markov Models and Extensions Bayesian Hidden Markov Models and Extensions Zoubin Ghahramani Department of Engineering University of Cambridge joint work with Matt Beal, Jurgen van Gael, Yunus Saatci, Tom Stepleton, Yee Whye Teh Modeling

More information

Probabilistic & Unsupervised Learning

Probabilistic & Unsupervised Learning Probabilistic & Unsupervised Learning Gaussian Processes Maneesh Sahani maneesh@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit, and MSc ML/CSML, Dept Computer Science University College London

More information

Infinite Latent Feature Models and the Indian Buffet Process

Infinite Latent Feature Models and the Indian Buffet Process Infinite Latent Feature Models and the Indian Buffet Process Thomas L. Griffiths Cognitive and Linguistic Sciences Brown University, Providence RI 292 tom griffiths@brown.edu Zoubin Ghahramani Gatsby Computational

More information

A Simple Proof of the Stick-Breaking Construction of the Dirichlet Process

A Simple Proof of the Stick-Breaking Construction of the Dirichlet Process A Simple Proof of the Stick-Breaking Construction of the Dirichlet Process John Paisley Department of Computer Science Princeton University, Princeton, NJ jpaisley@princeton.edu Abstract We give a simple

More information

Bayesian nonparametric latent feature models

Bayesian nonparametric latent feature models Bayesian nonparametric latent feature models Indian Buffet process, beta process, and related models François Caron Department of Statistics, Oxford Applied Bayesian Statistics Summer School Como, Italy

More information

Bayesian Machine Learning

Bayesian Machine Learning Bayesian Machine Learning Andrew Gordon Wilson ORIE 6741 Lecture 2: Bayesian Basics https://people.orie.cornell.edu/andrew/orie6741 Cornell University August 25, 2016 1 / 17 Canonical Machine Learning

More information

Clustering using Mixture Models

Clustering using Mixture Models Clustering using Mixture Models The full posterior of the Gaussian Mixture Model is p(x, Z, µ,, ) =p(x Z, µ, )p(z )p( )p(µ, ) data likelihood (Gaussian) correspondence prob. (Multinomial) mixture prior

More information

Hierarchical Dirichlet Processes with Random Effects

Hierarchical Dirichlet Processes with Random Effects Hierarchical Dirichlet Processes with Random Effects Seyoung Kim Department of Computer Science University of California, Irvine Irvine, CA 92697-34 sykim@ics.uci.edu Padhraic Smyth Department of Computer

More information

Latent Dirichlet Alloca/on

Latent Dirichlet Alloca/on Latent Dirichlet Alloca/on Blei, Ng and Jordan ( 2002 ) Presented by Deepak Santhanam What is Latent Dirichlet Alloca/on? Genera/ve Model for collec/ons of discrete data Data generated by parameters which

More information

Probabilistic modeling of NLP

Probabilistic modeling of NLP Structured Bayesian Nonparametric Models with Variational Inference ACL Tutorial Prague, Czech Republic June 24, 2007 Percy Liang and Dan Klein Probabilistic modeling of NLP Document clustering Topic modeling

More information

Nonparametric Bayesian Models for Sparse Matrices and Covariances

Nonparametric Bayesian Models for Sparse Matrices and Covariances Nonparametric Bayesian Models for Sparse Matrices and Covariances Zoubin Ghahramani Department of Engineering University of Cambridge, UK zoubin@eng.cam.ac.uk http://learning.eng.cam.ac.uk/zoubin/ Bayes

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning MCMC and Non-Parametric Bayes Mark Schmidt University of British Columbia Winter 2016 Admin I went through project proposals: Some of you got a message on Piazza. No news is

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 2: PROBABILITY DISTRIBUTIONS

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 2: PROBABILITY DISTRIBUTIONS PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 2: PROBABILITY DISTRIBUTIONS Parametric Distributions Basic building blocks: Need to determine given Representation: or? Recall Curve Fitting Binary Variables

More information

CSci 8980: Advanced Topics in Graphical Models Gaussian Processes

CSci 8980: Advanced Topics in Graphical Models Gaussian Processes CSci 8980: Advanced Topics in Graphical Models Gaussian Processes Instructor: Arindam Banerjee November 15, 2007 Gaussian Processes Outline Gaussian Processes Outline Parametric Bayesian Regression Gaussian

More information

An Overview of Nonparametric Bayesian Models and Applications to Natural Language Processing

An Overview of Nonparametric Bayesian Models and Applications to Natural Language Processing An Overview of Nonparametric Bayesian Models and Applications to Natural Language Processing Narges Sharif-Razavian and Andreas Zollmann School of Computer Science Carnegie Mellon University Pittsburgh,

More information

Spatial Normalized Gamma Process

Spatial Normalized Gamma Process Spatial Normalized Gamma Process Vinayak Rao Yee Whye Teh Presented at NIPS 2009 Discussion and Slides by Eric Wang June 23, 2010 Outline Introduction Motivation The Gamma Process Spatial Normalized Gamma

More information

Tree-Based Inference for Dirichlet Process Mixtures

Tree-Based Inference for Dirichlet Process Mixtures Yang Xu Machine Learning Department School of Computer Science Carnegie Mellon University Pittsburgh, USA Katherine A. Heller Department of Engineering University of Cambridge Cambridge, UK Zoubin Ghahramani

More information

Bayesian Structure Modeling. SPFLODD December 1, 2011

Bayesian Structure Modeling. SPFLODD December 1, 2011 Bayesian Structure Modeling SPFLODD December 1, 2011 Outline Defining Bayesian Parametric Bayesian models Latent Dirichlet allocabon (Blei et al., 2003) Bayesian HMM (Goldwater and Griffiths, 2007) A limle

More information

Dirichlet Enhanced Latent Semantic Analysis

Dirichlet Enhanced Latent Semantic Analysis Dirichlet Enhanced Latent Semantic Analysis Kai Yu Siemens Corporate Technology D-81730 Munich, Germany Kai.Yu@siemens.com Shipeng Yu Institute for Computer Science University of Munich D-80538 Munich,

More information

Tutorial on Gaussian Processes and the Gaussian Process Latent Variable Model

Tutorial on Gaussian Processes and the Gaussian Process Latent Variable Model Tutorial on Gaussian Processes and the Gaussian Process Latent Variable Model (& discussion on the GPLVM tech. report by Prof. N. Lawrence, 06) Andreas Damianou Department of Neuro- and Computer Science,

More information

Lecture 6: Graphical Models: Learning

Lecture 6: Graphical Models: Learning Lecture 6: Graphical Models: Learning 4F13: Machine Learning Zoubin Ghahramani and Carl Edward Rasmussen Department of Engineering, University of Cambridge February 3rd, 2010 Ghahramani & Rasmussen (CUED)

More information

Nonparametric Probabilistic Modelling

Nonparametric Probabilistic Modelling Nonparametric Probabilistic Modelling Zoubin Ghahramani Department of Engineering University of Cambridge, UK zoubin@eng.cam.ac.uk http://learning.eng.cam.ac.uk/zoubin/ Signal processing and inference

More information

arxiv: v2 [stat.ml] 4 Aug 2011

arxiv: v2 [stat.ml] 4 Aug 2011 A Tutorial on Bayesian Nonparametric Models Samuel J. Gershman 1 and David M. Blei 2 1 Department of Psychology and Neuroscience Institute, Princeton University 2 Department of Computer Science, Princeton

More information

Learning Bayesian network : Given structure and completely observed data

Learning Bayesian network : Given structure and completely observed data Learning Bayesian network : Given structure and completely observed data Probabilistic Graphical Models Sharif University of Technology Spring 2017 Soleymani Learning problem Target: true distribution

More information

Hybrid Dirichlet processes for functional data

Hybrid Dirichlet processes for functional data Hybrid Dirichlet processes for functional data Sonia Petrone Università Bocconi, Milano Joint work with Michele Guindani - U.T. MD Anderson Cancer Center, Houston and Alan Gelfand - Duke University, USA

More information

Infinite-State Markov-switching for Dynamic. Volatility Models : Web Appendix

Infinite-State Markov-switching for Dynamic. Volatility Models : Web Appendix Infinite-State Markov-switching for Dynamic Volatility Models : Web Appendix Arnaud Dufays 1 Centre de Recherche en Economie et Statistique March 19, 2014 1 Comparison of the two MS-GARCH approximations

More information

Generative Clustering, Topic Modeling, & Bayesian Inference

Generative Clustering, Topic Modeling, & Bayesian Inference Generative Clustering, Topic Modeling, & Bayesian Inference INFO-4604, Applied Machine Learning University of Colorado Boulder December 12-14, 2017 Prof. Michael Paul Unsupervised Naïve Bayes Last week

More information

Lecture: Gaussian Process Regression. STAT 6474 Instructor: Hongxiao Zhu

Lecture: Gaussian Process Regression. STAT 6474 Instructor: Hongxiao Zhu Lecture: Gaussian Process Regression STAT 6474 Instructor: Hongxiao Zhu Motivation Reference: Marc Deisenroth s tutorial on Robot Learning. 2 Fast Learning for Autonomous Robots with Gaussian Processes

More information

Hierarchical Bayesian Nonparametric Models with Applications

Hierarchical Bayesian Nonparametric Models with Applications Hierarchical Bayesian Nonparametric Models with Applications Yee Whye Teh Gatsby Computational Neuroscience Unit University College London 17 Queen Square London WC1N 3AR, United Kingdom Michael I. Jordan

More information

STA414/2104. Lecture 11: Gaussian Processes. Department of Statistics

STA414/2104. Lecture 11: Gaussian Processes. Department of Statistics STA414/2104 Lecture 11: Gaussian Processes Department of Statistics www.utstat.utoronto.ca Delivered by Mark Ebden with thanks to Russ Salakhutdinov Outline Gaussian Processes Exam review Course evaluations

More information

Advanced Machine Learning

Advanced Machine Learning Advanced Machine Learning Nonparametric Bayesian Models --Learning/Reasoning in Open Possible Worlds Eric Xing Lecture 7, August 4, 2009 Reading: Eric Xing Eric Xing @ CMU, 2006-2009 Clustering Eric Xing

More information

Bayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework

Bayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework HT5: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Maximum Likelihood Principle A generative model for

More information

INFINITE MIXTURES OF MULTIVARIATE GAUSSIAN PROCESSES

INFINITE MIXTURES OF MULTIVARIATE GAUSSIAN PROCESSES INFINITE MIXTURES OF MULTIVARIATE GAUSSIAN PROCESSES SHILIANG SUN Department of Computer Science and Technology, East China Normal University 500 Dongchuan Road, Shanghai 20024, China E-MAIL: slsun@cs.ecnu.edu.cn,

More information

Nonparametric Bayesian Methods - Lecture I

Nonparametric Bayesian Methods - Lecture I Nonparametric Bayesian Methods - Lecture I Harry van Zanten Korteweg-de Vries Institute for Mathematics CRiSM Masterclass, April 4-6, 2016 Overview of the lectures I Intro to nonparametric Bayesian statistics

More information

COS513 LECTURE 8 STATISTICAL CONCEPTS

COS513 LECTURE 8 STATISTICAL CONCEPTS COS513 LECTURE 8 STATISTICAL CONCEPTS NIKOLAI SLAVOV AND ANKUR PARIKH 1. MAKING MEANINGFUL STATEMENTS FROM JOINT PROBABILITY DISTRIBUTIONS. A graphical model (GM) represents a family of probability distributions

More information

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling 10-708: Probabilistic Graphical Models 10-708, Spring 2014 27 : Distributed Monte Carlo Markov Chain Lecturer: Eric P. Xing Scribes: Pengtao Xie, Khoa Luu In this scribe, we are going to review the Parallel

More information

Nonparameteric Regression:

Nonparameteric Regression: Nonparameteric Regression: Nadaraya-Watson Kernel Regression & Gaussian Process Regression Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro,

More information

Bayesian Nonparametric Learning of Complex Dynamical Phenomena

Bayesian Nonparametric Learning of Complex Dynamical Phenomena Duke University Department of Statistical Science Bayesian Nonparametric Learning of Complex Dynamical Phenomena Emily Fox Joint work with Erik Sudderth (Brown University), Michael Jordan (UC Berkeley),

More information

PMR Learning as Inference

PMR Learning as Inference Outline PMR Learning as Inference Probabilistic Modelling and Reasoning Amos Storkey Modelling 2 The Exponential Family 3 Bayesian Sets School of Informatics, University of Edinburgh Amos Storkey PMR Learning

More information

Lecture 10. Announcement. Mixture Models II. Topics of This Lecture. This Lecture: Advanced Machine Learning. Recap: GMMs as Latent Variable Models

Lecture 10. Announcement. Mixture Models II. Topics of This Lecture. This Lecture: Advanced Machine Learning. Recap: GMMs as Latent Variable Models Advanced Machine Learning Lecture 10 Mixture Models II 30.11.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de/ Announcement Exercise sheet 2 online Sampling Rejection Sampling Importance

More information

Chapter 8 PROBABILISTIC MODELS FOR TEXT MINING. Yizhou Sun Department of Computer Science University of Illinois at Urbana-Champaign

Chapter 8 PROBABILISTIC MODELS FOR TEXT MINING. Yizhou Sun Department of Computer Science University of Illinois at Urbana-Champaign Chapter 8 PROBABILISTIC MODELS FOR TEXT MINING Yizhou Sun Department of Computer Science University of Illinois at Urbana-Champaign sun22@illinois.edu Hongbo Deng Department of Computer Science University

More information

Variational Bayesian Dirichlet-Multinomial Allocation for Exponential Family Mixtures

Variational Bayesian Dirichlet-Multinomial Allocation for Exponential Family Mixtures 17th Europ. Conf. on Machine Learning, Berlin, Germany, 2006. Variational Bayesian Dirichlet-Multinomial Allocation for Exponential Family Mixtures Shipeng Yu 1,2, Kai Yu 2, Volker Tresp 2, and Hans-Peter

More information

Applied Bayesian Nonparametrics 3. Infinite Hidden Markov Models

Applied Bayesian Nonparametrics 3. Infinite Hidden Markov Models Applied Bayesian Nonparametrics 3. Infinite Hidden Markov Models Tutorial at CVPR 2012 Erik Sudderth Brown University Work by E. Fox, E. Sudderth, M. Jordan, & A. Willsky AOAS 2011: A Sticky HDP-HMM with

More information

Collapsed Variational Dirichlet Process Mixture Models

Collapsed Variational Dirichlet Process Mixture Models Collapsed Variational Dirichlet Process Mixture Models Kenichi Kurihara Dept. of Computer Science Tokyo Institute of Technology, Japan kurihara@mi.cs.titech.ac.jp Max Welling Dept. of Computer Science

More information

13: Variational inference II

13: Variational inference II 10-708: Probabilistic Graphical Models, Spring 2015 13: Variational inference II Lecturer: Eric P. Xing Scribes: Ronghuo Zheng, Zhiting Hu, Yuntian Deng 1 Introduction We started to talk about variational

More information

Machine Learning Summer School

Machine Learning Summer School Machine Learning Summer School Lecture 3: Learning parameters and structure Zoubin Ghahramani zoubin@eng.cam.ac.uk http://learning.eng.cam.ac.uk/zoubin/ Department of Engineering University of Cambridge,

More information

Nonparametric Bayesian Methods: Models, Algorithms, and Applications (Day 5)

Nonparametric Bayesian Methods: Models, Algorithms, and Applications (Day 5) Nonparametric Bayesian Methods: Models, Algorithms, and Applications (Day 5) Tamara Broderick ITT Career Development Assistant Professor Electrical Engineering & Computer Science MIT Bayes Foundations

More information

Nonparametric Factor Analysis with Beta Process Priors

Nonparametric Factor Analysis with Beta Process Priors Nonparametric Factor Analysis with Beta Process Priors John Paisley Lawrence Carin Department of Electrical & Computer Engineering Duke University, Durham, NC 7708 jwp4@ee.duke.edu lcarin@ee.duke.edu Abstract

More information

Shared Segmentation of Natural Scenes. Dependent Pitman-Yor Processes

Shared Segmentation of Natural Scenes. Dependent Pitman-Yor Processes Shared Segmentation of Natural Scenes using Dependent Pitman-Yor Processes Erik Sudderth & Michael Jordan University of California, Berkeley Parsing Visual Scenes sky skyscraper sky dome buildings trees

More information

Nonparametric Bayesian Methods (Gaussian Processes)

Nonparametric Bayesian Methods (Gaussian Processes) [70240413 Statistical Machine Learning, Spring, 2015] Nonparametric Bayesian Methods (Gaussian Processes) Jun Zhu dcszj@mail.tsinghua.edu.cn http://bigml.cs.tsinghua.edu.cn/~jun State Key Lab of Intelligent

More information

Lecturer: David Blei Lecture #3 Scribes: Jordan Boyd-Graber and Francisco Pereira October 1, 2007

Lecturer: David Blei Lecture #3 Scribes: Jordan Boyd-Graber and Francisco Pereira October 1, 2007 COS 597C: Bayesian Nonparametrics Lecturer: David Blei Lecture # Scribes: Jordan Boyd-Graber and Francisco Pereira October, 7 Gibbs Sampling with a DP First, let s recapitulate the model that we re using.

More information

A Process over all Stationary Covariance Kernels

A Process over all Stationary Covariance Kernels A Process over all Stationary Covariance Kernels Andrew Gordon Wilson June 9, 0 Abstract I define a process over all stationary covariance kernels. I show how one might be able to perform inference that

More information

Flexible Regression Modeling using Bayesian Nonparametric Mixtures

Flexible Regression Modeling using Bayesian Nonparametric Mixtures Flexible Regression Modeling using Bayesian Nonparametric Mixtures Athanasios Kottas Department of Applied Mathematics and Statistics University of California, Santa Cruz Department of Statistics Brigham

More information

arxiv: v1 [stat.ml] 20 Nov 2012

arxiv: v1 [stat.ml] 20 Nov 2012 A survey of non-exchangeable priors for Bayesian nonparametric models arxiv:1211.4798v1 [stat.ml] 20 Nov 2012 Nicholas J. Foti 1 and Sinead Williamson 2 1 Department of Computer Science, Dartmouth College

More information

Distance dependent Chinese restaurant processes

Distance dependent Chinese restaurant processes David M. Blei Department of Computer Science, Princeton University 35 Olden St., Princeton, NJ 08540 Peter Frazier Department of Operations Research and Information Engineering, Cornell University 232

More information