On the posterior structure of NRMI

Igor Prünster, University of Turin, Collegio Carlo Alberto and ICER
Joint work with L.F. James and A. Lijoi
Isaac Newton Institute, BNR Programme, 8th August 2007

Outline
1. CRM and NRMI: completely random measures (CRM), NRMIs, relation to other random probability measures.
2. Posterior structure: conjugacy, posterior characterization, predictive distributions, generalized Pólya urn scheme, the two-parameter Poisson-Dirichlet process.
3. Hierarchical mixture models: the NRMI mixture model and its posterior distribution.
4. Some concluding remarks.

Completely random measures

DEFINITION (Kingman, 1967). µ is a completely random measure (CRM) on (X, 𝒳) if
(i) µ(∅) = 0;
(ii) for any collection of disjoint sets B_1, B_2, ... in 𝒳, the random variables µ(B_1), µ(B_2), ... are mutually independent and µ(∪_{j≥1} B_j) = Σ_{j≥1} µ(B_j).

Let G_ν = {g : ∫_X |g(x)| µ(dx) < ∞}. Then µ is characterized by its Laplace functional
E[exp(−∫_X g(x) µ(dx))] = exp{ −∫_{R+×X} [1 − e^{−v g(x)}] ν(dv, dx) }
for any g ∈ G_ν. In the following, denote by ψ(λ) the Laplace exponent ∫_{R+×X} [1 − e^{−λv}] ν(dv, dx).

⟹ µ is identified by the intensity ν (which is the intensity of the underlying Poisson random measure).

Let α be a non-atomic and σ-finite measure on X. According to the decomposition of ν we distinguish two classes of CRM:
(a) if ν(dv, dx) = ρ(dv) α(dx), we say that µ is homogeneous;
(b) if ν(dv, dx) = ρ(dv | x) α(dx), we say that µ is non-homogeneous.

Necessary assumptions for the normalization to be well defined:
(A) µ is almost surely finite ⟺ ∫_{R+×X} [1 − e^{−λv}] ν(dv, dx) < ∞ for every λ > 0, which, if µ is a homogeneous CRM, amounts to α being a finite measure;
(B) µ is almost surely strictly positive ⟺ ν(R+ × X) = ∞, i.e. infinite activity of µ.

Normalized random measures with independent increments (NRMIs)

DEFINITION. Let µ be a CRM on (X, 𝒳) satisfying (A) and (B). Then the random probability measure on (X, 𝒳) given by
P(·) = µ(·) / µ(X)
is well defined and is termed a normalized random measure with independent increments (NRMI).

An NRMI is uniquely characterized by the intensity ν of the corresponding CRM µ: according to the structure of ν we distinguish homogeneous and non-homogeneous NRMIs.

Special cases of NRMI

1. Dirichlet process. Let µ be a gamma CRM with α a finite measure on X and
ν(dv, dx) = v^{−1} e^{−v} dv α(dx)
⟹ the NRMI is a Dirichlet process with parameter measure α.

2. Normalized generalized gamma (GG) process. Let µ be a GG CRM (Brix, 99) with α a finite measure on X and
ν(ds, dx) = [σ / Γ(1 − σ)] s^{−1−σ} e^{−τs} ds α(dx)
⟹ the NRMI is a normalized GG process with parameters (σ, τ) and base measure α.

3. Normalized extended gamma process. Let µ be an extended gamma CRM (Dykstra & Laud, 81) with
ν(dv, dx) = v^{−1} e^{−b(x)v} dv α(dx),
with b a strictly positive function and α such that µ(X) < ∞ a.s.
⟹ the NRMI is a normalized extended gamma process with parameters α and b.
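
As a quick check of conditions (A) and (B), the Laplace exponents of the first two examples are available in closed form; the following display is a small worked computation added here for illustration, using the intensities above.

\[
\text{gamma CRM: } \psi(\lambda) = \int_{\mathbb{R}^+ \times X} (1 - e^{-\lambda v})\, v^{-1} e^{-v}\, dv\, \alpha(dx) = \alpha(X)\, \log(1+\lambda),
\]
\[
\text{GG CRM: } \psi(\lambda) = \alpha(X)\, \frac{\sigma}{\Gamma(1-\sigma)} \int_0^{\infty} (1 - e^{-\lambda s})\, s^{-1-\sigma} e^{-\tau s}\, ds = \alpha(X)\, \big[(\lambda+\tau)^{\sigma} - \tau^{\sigma}\big].
\]

Both are finite for every λ > 0 whenever α(X) < ∞, which is condition (A), while ν(R+ × X) = ∞ in both cases because the jump densities are not integrable at the origin, which is condition (B).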

Relation to other random probability measures

Homogeneous NRMIs are members of the following families of random probability measures.

(i) Species sampling models (Pitman, 96), defined as
P(·) = Σ_{i≥1} P_i δ_{X_i}(·) + (1 − Σ_{i≥1} P_i) H(·),
where the 0 < P_i < 1 are random weights such that Σ_{i≥1} P_i ≤ 1, independent of the locations X_i, which are i.i.d. from some non-atomic distribution H.
Problem: concrete assignment of the random weights P_i; stick-breaking procedure (Ishwaran and James, 2001).
Remark: a non-homogeneous NRMI is not a species sampling model, since weights and locations are not independent.

(ii) Poisson-Kingman models (Pitman, 03): more tractable than general species sampling models, but it is still difficult to derive expressions for posterior quantities.

Characterization of the Dirichlet process

(X_n)_{n≥1} is a sequence of exchangeable observations with values in X governed by an NRMI. A sample X^{(n)} = (X_1, ..., X_n) will contain:
X_1*, ..., X_k*, the k distinct observations in X^{(n)};
n_j > 0, the number of observations equal to X_j* (j = 1, ..., k).

Let 𝒫 be the set of all NRMIs and let P ∈ 𝒫. The posterior distribution of P, given X^{(n)}, is still in 𝒫 if and only if P is a Dirichlet process.

⟹ CONJUGACY is a distinctive feature of the Dirichlet process.

Nonetheless, conditionally on a latent variable U and the data X^{(n)}, the (posterior) NRMI P | X^{(n)}, U is still an NRMI.

The latent variable U

U is not an auxiliary variable: it has a precise meaning, summarizing the normalization procedure and the distribution of µ(X). Set
τ_m(u | x) = ∫_{R+} s^m e^{−us} ρ_x(ds)   for any m ≥ 1 and x ∈ X.

U_0 is a positive random variable with density
f_0(u) ∝ e^{−ψ(u)} ∫_X τ_1(u | x) α(dx).

U_n is a positive random variable whose density, conditional on the data X^{(n)}, is, for any n ≥ 1,
f(u | X^{(n)}) ∝ u^{n−1} e^{−ψ(u)} ∏_{j=1}^k τ_{n_j}(u | X_j*).

Remark: the distribution of (U_n | X^{(n)}) is a mixture of gamma distributions, with mixing measure the posterior total mass (µ(X) | X^{(n)}):
f_{U_n | X^{(n)}}(u) = ∫_{(0,+∞)} [y^n / Γ(n)] u^{n−1} e^{−yu} Q(dy | X^{(n)}),
where Q(· | X^{(n)}) denotes the posterior distribution of µ(X).
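
As a concrete illustration (added here, not on the original slides), these quantities are explicit for the gamma CRM underlying the Dirichlet process, where ρ_x(ds) = s^{−1} e^{−s} ds and a = α(X):

\[
\tau_m(u \mid x) = \int_0^{\infty} s^m e^{-us}\, s^{-1} e^{-s}\, ds = \frac{\Gamma(m)}{(1+u)^m},
\qquad
\psi(u) = a \log(1+u),
\]
\[
f(u \mid X^{(n)}) \propto u^{n-1} e^{-a\log(1+u)} \prod_{j=1}^{k} \frac{\Gamma(n_j)}{(1+u)^{n_j}} \propto \frac{u^{n-1}}{(1+u)^{a+n}},
\]

i.e. U_n/(1+U_n) | X^{(n)} ∼ Beta(n, a). This agrees with the mixture representation above, since for the gamma CRM the total mass µ(X) is independent of P, so that Q(· | X^{(n)}) = Gamma(a, 1).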

The posterior distribution of the CRM µ

The posterior distribution of µ, given X^{(n)}, is a mixture with respect to f(u | X^{(n)}). Given U_n = u and X^{(n)},
µ =^d µ_u + Σ_{i=1}^k J_i^{(u)} δ_{X_i*},
where
(i) the jump J_i^{(u)} at X_i* has density f_i(s) ds ∝ s^{n_i} e^{−us} ρ_{X_i*}(ds);
(ii) µ_u is a CRM with intensity ν^{(u)}(ds, dx) = e^{−us} ρ_x(ds) α(dx);
(iii) µ_u and the J_i^{(u)} (i = 1, ..., k) are independent.
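
Continuing the gamma-CRM illustration (again an added worked example), the posterior ingredients become explicit:

\[
f_i(s)\, ds \propto s^{n_i} e^{-us}\, s^{-1} e^{-s}\, ds = s^{n_i - 1} e^{-(u+1)s}\, ds,
\quad\text{i.e. } J_i^{(u)} \sim \mathrm{Gamma}(n_i,\, u+1),
\qquad
\nu^{(u)}(ds, dx) = s^{-1} e^{-(u+1)s}\, ds\, \alpha(dx),
\]

so µ_u is again a gamma CRM, only rescaled by the factor u + 1. Since this scale cancels under normalization, the posterior NRMI does not depend on u and one recovers the usual Dirichlet posterior with updated parameter measure α + Σ_{i=1}^k n_i δ_{X_i*}.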

The posterior distribution of the NRMI P

It now follows easily that, given U_n and X^{(n)}, the (posterior) distribution of P is again an NRMI:
P | U_n, X^{(n)} =^d [µ_u + Σ_{i=1}^k J_i^{(u)} δ_{X_i*}] / [µ_u(X) + Σ_{i=1}^k J_i^{(u)}]
               =^d w · µ_u/µ_u(X) + (1 − w) Σ_{i=1}^k [J_i^{(u)} / Σ_{r=1}^k J_r^{(u)}] δ_{X_i*},
with w = µ_u(X) [µ_u(X) + Σ_{i=1}^k J_i^{(u)}]^{−1}.

The posterior distribution of the normalized GG process

Let P be a normalized GG process. Then, given U_n and X^{(n)}, the (posterior) distribution of µ can be represented as
µ_u + Σ_{i=1}^k J_i^{(u)} δ_{X_i*},
where
(i) µ_u is a GG CRM with intensity measure ν^{(u)}(ds, dx) = [σ / Γ(1 − σ)] s^{−1−σ} e^{−(u+1)s} ds α(dx);
(ii) the fixed points of discontinuity coincide with the distinct observations X_i*, with jumps J_i^{(u)} gamma distributed with shape n_i − σ and rate u + 1, for i = 1, ..., k;
(iii) µ_u and the J_i^{(u)} (i = 1, ..., k) are independent.

Moreover, the distribution of U_n, conditional on X^{(n)}, is
f(u | X^{(n)}) ∝ u^{n−1} e^{−α(X)(1+u)^σ} (u + 1)^{−(n−kσ)}.
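
This representation is straightforward to simulate from. Below is a minimal Python sketch (mine, not part of the talk) that draws U_n from f(u | X^{(n)}) by inverting a discretized CDF and then samples the fixed-location jumps; here a stands for α(X), sigma for σ, and the cluster sizes n_1, ..., n_k are assumed given.

import numpy as np

def sample_U_gg(n_counts, sigma, a, rng, grid_size=20000, u_max=None):
    """Draw U_n from f(u|X^(n)) proportional to u^(n-1) (1+u)^(k*sigma-n) exp(-a (1+u)^sigma)."""
    n, k = sum(n_counts), len(n_counts)
    if u_max is None:
        u_max = 50.0 * (n + 1)                      # crude upper end for the grid
    u = np.linspace(1e-8, u_max, grid_size)
    log_f = (n - 1) * np.log(u) + (k * sigma - n) * np.log1p(u) - a * (1.0 + u) ** sigma
    w = np.exp(log_f - log_f.max())
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(), cdf, u)         # inverse-CDF sampling on the grid

def sample_gg_posterior_jumps(n_counts, sigma, a, rng):
    """Return (U_n, fixed jumps): J_i ~ Gamma(shape = n_i - sigma, rate = U_n + 1)."""
    u = sample_U_gg(n_counts, sigma, a, rng)
    jumps = rng.gamma(shape=np.array(n_counts) - sigma, scale=1.0 / (u + 1.0))
    return u, jumps

rng = np.random.default_rng(0)
u, jumps = sample_gg_posterior_jumps([5, 3, 2], sigma=0.5, a=1.0, rng=rng)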

Predictive distributions

The (predictive) distribution of X_{n+1}, given X^{(n)}, coincides with
P[X_{n+1} ∈ dx_{n+1} | X_1, ..., X_n] = w^{(n)} α(dx_{n+1}) + (1/n) Σ_{j=1}^k w_j^{(n)} δ_{X_j*}(dx_{n+1}),
where
w^{(n)} = (1/n) ∫_0^{+∞} u τ_1(u | x_{n+1}) f(u | X^{(n)}) du,
w_j^{(n)} = ∫_0^{+∞} u [τ_{n_j+1}(u | X_j*) / τ_{n_j}(u | X_j*)] f(u | X^{(n)}) du.

For the homogeneous case one obtains the predictive distributions of Pitman (2003).
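
As a consistency check (a worked example added here), specialize these weights to the Dirichlet process, for which τ_m(u) = Γ(m)/(1+u)^m and f(u | X^{(n)}) ∝ u^{n−1}(1+u)^{−(a+n)} with a = α(X). Beta integrals give

\[
w^{(n)} = \frac{1}{n}\int_0^{\infty} \frac{u}{1+u}\, f(u \mid X^{(n)})\, du = \frac{1}{n}\cdot\frac{n}{a+n} = \frac{1}{a+n},
\qquad
\frac{1}{n}\, w^{(n)}_j = \frac{n_j}{n}\int_0^{\infty} \frac{u}{1+u}\, f(u \mid X^{(n)})\, du = \frac{n_j}{a+n},
\]

so the predictive distribution reduces to α(dx_{n+1})/(a+n) + Σ_{j=1}^k [n_j/(a+n)] δ_{X_j*}(dx_{n+1}), i.e. the Blackwell-MacQueen urn of the Dirichlet process.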

Sampling from the marginal distribution of the X_i's

Note that, conditionally on U_n = u, the predictive distribution is
P[X_{n+1} ∈ dx_{n+1} | X^{(n)}, U_n = u] ∝ τ_1(u | x_{n+1}) α(dx_{n+1}) + Σ_{j=1}^k [τ_{n_j+1}(u | X_j*) / τ_{n_j}(u | X_j*)] δ_{X_j*}(dx_{n+1}).

The first term can be written as κ_1(u) m(dx_{n+1} | u), where κ_1(u) = ∫_X τ_1(u | x) α(dx) and m(dx | u) ∝ τ_1(u | x) α(dx): a new value is drawn from m(· | u) with probability proportional to κ_1(u). From this one can implement an analogue of the Pólya urn scheme in order to draw a sample X^{(n)} from P. For any i ≥ 2 set
m(dx_i | X^{(i−1)}, u) = P[X_i ∈ dx_i | X^{(i−1)}, U_{i−1} = u].

Generalization of a Pólya urn scheme

1) Sample U_0 from f_0(u).
2) Sample X_1 from m(dx | U_0).
3) At step i: sample U_{i−1} from f(u | X^{(i−1)}), generate ξ_i from m(dξ | U_{i−1}) and set
   X_i = ξ_i          with probability proportional to κ_1(U_{i−1}),
   X_i = X*_{j,i−1}   with probability proportional to τ_{n_{j,i−1}+1}(U_{i−1} | X*_{j,i−1}) / τ_{n_{j,i−1}}(U_{i−1} | X*_{j,i−1}),
where X*_{j,i−1} is the j-th distinct value among X_1, ..., X_{i−1} and n_{j,i−1} = card{X_s : X_s = X*_{j,i−1}, s = 1, ..., i−1}.
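
A compact way to see the scheme in action (my illustration, not from the talk) is the gamma-CRM/Dirichlet special case, where every ingredient is available in closed form: κ_1(u) = a/(1+u), τ_{n_j+1}(u)/τ_{n_j}(u) = n_j/(1+u), and U_{i−1} can be sampled exactly since U/(1+U) | X^{(i−1)} ∼ Beta(i−1, a). The common factor 1/(1+u) cancels in the allocation probabilities, so the draw reduces to the Blackwell-MacQueen urn, which makes the output easy to verify; the base distribution m(· | u) is taken standard normal for concreteness.

import numpy as np
from collections import Counter

def polya_urn_gamma_crm(n, a, rng, base_sampler=None):
    """Generalized Polya urn of the slides, specialized to the gamma CRM:
    kappa_1(u) = a/(1+u), tau_{m+1}(u)/tau_m(u) = m/(1+u), f(u|X^(i-1)) sampled via a Beta draw."""
    if base_sampler is None:
        base_sampler = rng.standard_normal          # plays the role of m(.|u), here the base measure
    X = []
    for i in range(1, n + 1):
        b = rng.beta(max(i - 1, 1), a)              # U/(1+U) ~ Beta(i-1, a); Beta(1, a) corresponds to f_0
        u = b / (1.0 - b)                           # U_{i-1}; it cancels below but mirrors the general scheme
        counts = Counter(X)
        labels = list(counts)
        probs = np.array([a] + [counts[x] for x in labels], dtype=float) / (1.0 + u)
        probs /= probs.sum()
        j = rng.choice(len(probs), p=probs)
        X.append(base_sampler() if j == 0 else labels[j - 1])
    return X

rng = np.random.default_rng(1)
sample = polya_urn_gamma_crm(100, a=2.0, rng=rng)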

Sampling the posterior random measure

Recall that, given U_n = u and X^{(n)}, µ =^d µ_u + Σ_{i=1}^k J_i^{(u)} δ_{X_i*}.

Algorithm:
(1) Sample U_n from f(u | X^{(n)}).
(2) Sample J_i^{(U_n)} from the density f_i(s) ds ∝ s^{n_i} e^{−U_n s} ρ_{X_i*}(ds).
(3) Simulate a realization of the completely random measure µ^{(U_n)} with intensity measure ν^{(U_n)}(ds, dx) = e^{−U_n s} ρ_x(ds) α(dx) via the Ferguson and Klass algorithm.
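
Step (3) is the only non-standard ingredient. The sketch below (an added illustration, written for the homogeneous posterior GG intensity a·σ/Γ(1−σ) s^{−1−σ} e^{−(u+1)s} ds as an example) implements the Ferguson and Klass idea: the decreasingly ordered jumps solve N(J_i) = ξ_i, where N(v) is the expected number of jumps of size larger than v and ξ_1 < ξ_2 < ... are the arrival times of a unit-rate Poisson process; locations are i.i.d. from the normalized base measure, here replaced by a standard normal.

import numpy as np
from math import gamma
from scipy import integrate, optimize

def ferguson_klass_gg(a, sigma, u, n_jumps, rng, base_sampler=None):
    """Largest n_jumps jumps and their locations for the CRM with intensity
    a * sigma/Gamma(1-sigma) * s^(-1-sigma) * exp(-(u+1)*s) ds."""
    if base_sampler is None:
        base_sampler = rng.standard_normal          # stand-in for the normalized base measure
    c = a * sigma / gamma(1.0 - sigma)
    def tail(v):                                    # N(v): expected number of jumps of size > v
        val, _ = integrate.quad(lambda s: c * s ** (-1.0 - sigma) * np.exp(-(u + 1.0) * s), v, np.inf)
        return val
    xi = np.cumsum(rng.exponential(size=n_jumps))   # arrival times of a unit-rate Poisson process
    jumps = np.empty(n_jumps)
    hi = 1e3                                        # upper bracket; the first jump lies below this
    for i, x in enumerate(xi):
        jumps[i] = optimize.brentq(lambda v: tail(v) - x, 1e-8, hi)   # N is decreasing, so the bracket works
        hi = jumps[i]                               # subsequent jumps are smaller
    locations = np.array([base_sampler() for _ in range(n_jumps)])
    return jumps, locations

rng = np.random.default_rng(2)
J, loc = ferguson_klass_gg(a=1.0, sigma=0.5, u=2.0, n_jumps=25, rng=rng)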

The two-parameter Poisson-Dirichlet process

The PD(σ, θ) process can be represented (Pitman, 96) as a species sampling model Σ_{i≥1} p_i δ_{X_i} with stick-breaking weights
p_i = V_i ∏_{j=1}^{i−1} (1 − V_j),   V_i independent ∼ Beta(1 − σ, θ + iσ),   X_i i.i.d. ∼ H.

Using this representation, Pitman (96) shows that
P | X^{(n)} =^d (1 − Σ_{i=1}^k p_i) P^{(k)} + Σ_{j=1}^k p_j δ_{X_j*},
where P^{(k)} ∼ PD(σ, θ + kσ) and (p_1, ..., p_k) ∼ Dir(n_1 − σ, ..., n_k − σ, θ + kσ).

The PD(σ, θ) process is also representable as a normalized measure P(·) = φ(·)/φ(X), but φ does not have independent increments (Pitman and Yor, 97). Indeed, the Laplace functional of φ is of the form
E[exp(−∫ f(x) φ(dx))] = [1/Γ(θ)] ∫_0^∞ u^{θ−1} exp(−∫_X (u + f(x))^σ P_0(dx)) du.
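
A truncated version of this stick-breaking representation takes only a few lines; the sketch below is an added illustration, with a fixed truncation level and a standard normal base distribution standing in for H.

import numpy as np

def stick_breaking_pd(sigma, theta, truncation, rng, base_sampler=None):
    """Truncated stick-breaking draw from PD(sigma, theta):
    p_i = V_i * prod_{j<i} (1 - V_j), with V_i ~ Beta(1 - sigma, theta + i*sigma)."""
    if base_sampler is None:
        base_sampler = rng.standard_normal
    i = np.arange(1, truncation + 1)
    V = rng.beta(1.0 - sigma, theta + i * sigma)
    p = V * np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))
    atoms = np.array([base_sampler() for _ in range(truncation)])
    return p, atoms                                  # the weights sum to 1 - prod(1 - V)

rng = np.random.default_rng(3)
weights, atoms = stick_breaking_pd(sigma=0.5, theta=1.0, truncation=1000, rng=rng)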

One can identify a latent variable U_n such that U_n | X^{(n)} has density
f(u | X^{(n)}) = [σ / Γ(k + θ/σ)] u^{θ+kσ−1} e^{−u^σ}.
Then, given U_n and X^{(n)}, the (posterior) distribution of φ coincides with the distribution of the random measure
µ_u + Σ_{i=1}^k J_i^{(u)} δ_{X_i*},
where µ_u is a GG CRM with intensity
ν^{(u)}(ds) = [σ / Γ(1 − σ)] s^{−1−σ} e^{−us} ds,
and the jumps J_i^{(u)} are gamma distributed with shape n_i − σ and rate u. Finally, the jumps J_i^{(u)} (i = 1, ..., k) and µ_u are, conditional on U_n, independent.
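
A small added remark: this density is a power transform of a gamma law, so the latent variable is trivial to simulate,

\[
U_n^{\sigma} \mid X^{(n)} \sim \mathrm{Gamma}\Big(k + \frac{\theta}{\sigma},\, 1\Big),
\qquad\text{i.e.}\qquad
U_n \stackrel{d}{=} G^{1/\sigma}, \quad G \sim \mathrm{Gamma}\Big(k + \frac{\theta}{\sigma},\, 1\Big),
\]

as a change of variables in f(u | X^{(n)}) shows.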

Hierarchical mixture models

Y_i | X_i  ind~  f(· | X_i)
X_i | P    iid~  P
P          ~     NRMI

Equivalently, Y^{(n)} = (Y_1, ..., Y_n) are exchangeable draws from the random density
f̃(·) = ∫_X f(· | x) P(dx).
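
To fix ideas, here is a minimal generative sketch of this hierarchy (an added illustration): a Gaussian kernel f(· | x) = N(x, 1) and P a Dirichlet process, the simplest NRMI, with the latent X_i drawn through its marginal urn rather than by sampling P itself.

import numpy as np

def simulate_dirichlet_mixture(n, a, rng):
    """Generate Y^(n) from the hierarchy with kernel f(.|x) = N(x, 1) and P a Dirichlet
    process with total mass a and base measure N(0, 4), using the marginal urn for the X_i."""
    X = []
    for i in range(n):
        if rng.uniform() < a / (a + i):             # new latent value from the base measure
            X.append(rng.normal(0.0, 2.0))
        else:                                       # tie with one of the previous latent values
            X.append(X[rng.integers(len(X))])
    X = np.array(X)
    Y = rng.normal(loc=X, scale=1.0)                # Y_i | X_i ~ f(.|X_i)
    return Y, X

rng = np.random.default_rng(4)
Y, X = simulate_dirichlet_mixture(200, a=1.0, rng=rng)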

The posterior distribution of the mixture model

The posterior density of f̃, given the observations Y^{(n)}, is
∫_X f(· | x) P(dx | Y^{(n)}),
where P(· | Y^{(n)}) is the (posterior) random probability measure whose distribution is obtained as the mixture
P(dx | Y^{(n)}) =^d ∫ P(dx | X^{(n)}) P(dX^{(n)} | Y^{(n)}),
with
P(· | X^{(n)}) the (posterior) distribution of the NRMI P, given X^{(n)};
P(dX^{(n)} | Y^{(n)}) determined via Bayes theorem as
[∏_{i=1}^n f(Y_i | X_i)] m(dX^{(n)}) / ∫ [∏_{i=1}^n f(Y_i | X_i)] m(dX^{(n)}),
where m(dX^{(n)}) is the marginal distribution of the latent variables.

Remark: in any mixture model, the crucial point is the determination of a tractable expression for the posterior P(· | X^{(n)}): once this is available, the derivation of a simulation algorithm along the lines of Escobar and West (1995) is straightforward.
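
For instance (a sketch added here, in the spirit of Escobar and West, not spelled out on the slide), combining the kernel with the conditional predictive distribution of the earlier slides yields the full conditional used in a marginal Gibbs sampler,

\[
P[X_i \in dx \mid X^{(-i)}, U = u, Y_i]
\;\propto\;
f(Y_i \mid x)\, \tau_1(u \mid x)\, \alpha(dx)
\;+\;
\sum_{j} \frac{\tau_{n_j^{-i}+1}(u \mid X_j^*)}{\tau_{n_j^{-i}}(u \mid X_j^*)}\, f(Y_i \mid X_j^*)\, \delta_{X_j^*}(dx),
\]

where X^{(−i)} collects the other latent variables, the sum runs over their distinct values X_j* with multiplicities n_j^{−i}, and each sweep also resamples u from f(u | X^{(n)}).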

Some concluding remarks

Question 1: is it preferable to specify a GG prior as mixing measure (which includes the Dirichlet process as a special case), or to stick with the Dirichlet process and enrich it with hyperpriors? What about parsimony in model specification?

Question 2: do we need applied statistical motivations for the introduction of new classes of priors? E.g. the beta process (Hjort, 90) was introduced for survival analysis, but turned out to be also the de Finetti measure of the Indian buffet process. Random probability measures are objects of interest in their own right, well beyond what we may think: e.g. the distribution of a mean functional of the two-parameter PD process is relevant for the study of phylogenetic trees.

The mixture model is not the only use one can make of discrete nonparametric priors: if the data come from a discrete distribution, then it is reasonable to model the data directly with a discrete nonparametric prior (see Ramsés' talk). That is a simpler context, and there one gets a real feeling of the limitations of the Dirichlet process: prediction is not monotone in the number of observed species.
