Hybrid Dirichlet processes for functional data

Hybrid Dirichlet processes for functional data
Sonia Petrone, Università Bocconi, Milano
Joint work with Michele Guindani (U.T. MD Anderson Cancer Center, Houston) and Alan Gelfand (Duke University, USA)
Cambridge, August 2007

Outline
- The problem: inference for functional data with individual heterogeneity
- Mixtures of Gaussian processes: finite mixtures ($DP_k$), functional Dirichlet process mixtures
- Problem: as many clusters as the sample size... a global random partition
- Hybrid DP processes: local random effects and clustering, dependent local random partitions
- Examples: simulated data, brain images

The problem
Functional data: in principle, data are realizations of a random curve (stochastic process) $Y = (Y(x), x \in D)$. We observe replicates of the curve at coordinates $(x_1, \ldots, x_m)$:
$Y_i = (Y_i(x_1), \ldots, Y_i(x_m)), \quad i = 1, \ldots, n.$
Problem: Bayesian inference for functional data in the presence of random effects and heterogeneity.

Examples
- time series: $Y_i(x)$, $x \geq 0$ ($x$ = time)
- regression curves: $Y_i(x)$, $x \in D \subseteq R^p$ ($x$ = covariates)... Dunson's tutorial, Cox's talk, ...
- spatial data: $Y_i(x)$, $x \in D \subseteq R^2$ ($x$ = spatial coordinate)
Motivating example: brain MRI images for patient $i$, $i = 1, \ldots, n$; $Y_i(x)$ = level of gray at coordinate $x$.

Figure: Sections of MRI brain images showing the effect of Alzheimer's disease on the amygdalar-hippocampal complex.

The basic model
The basic model assumes that the curves are realizations of a stochastic process $Y_i = \{Y_i(x), x \in D\}$, $D \subseteq R^d$. In particular, conditionally on the unknown parameters, $Y_i$ is a Gaussian process:
$Y_i(x) = \theta(x) + \epsilon_i, \quad i = 1, \ldots, n, \; x \in D$
or
$Y_i(x) = z(x)' \beta + \theta(x) + \epsilon_i, \quad i = 1, \ldots, n,$
where $z(x)' \beta$ is the mean regression term, $\theta(x)$ captures the functional behavior (forgotten covariates, ...), and $\epsilon_i$ is an i.i.d. pure noise term, $\epsilon_i \sim N(0, \tau^2)$.

Gaussian process
$Y_i = (Y_i(x), x \in D)$, $i = 1, \ldots, n$. Assume:
$Y_i \mid \theta, \tau \stackrel{i.i.d.}{\sim} GP(\theta, \tau(\cdot, \cdot))$, a Gaussian process with mean function $\theta = (\theta(x), x \in D)$ and simple covariance function $\tau(x, x) = \tau^2$, $\tau(x, x') = 0$ for $x \neq x'$;
$\theta \sim G_0 = GP(\theta_0, \sigma(\cdot, \cdot))$, a Gaussian process with covariance function expressing the functional dependence, e.g. $\sigma(x, x') = \sigma^2 \exp(-\phi \|x - x'\|)$.
(Other approaches: $\theta(x)$ modeled by basis-function expansions with random coefficients.)
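To make the base measure concrete, here is a minimal sketch (our own, not from the slides) of one draw of a mean curve $\theta$ from $G_0 = GP(\theta_0, \sigma(\cdot,\cdot))$ with the exponential covariance above, evaluated on a one-dimensional grid; the function name and argument choices are illustrative assumptions.

```python
import numpy as np

def gp_base_sample(grid, theta0, sigma2, phi, rng=None):
    """One draw of theta = (theta(x), x in grid) from G0 = GP(theta0, sigma(.,.))
    with sigma(x, x') = sigma2 * exp(-phi * |x - x'|).  Illustrative sketch."""
    rng = np.random.default_rng(rng)
    grid = np.asarray(grid, dtype=float)
    cov = sigma2 * np.exp(-phi * np.abs(grid[:, None] - grid[None, :]))
    mean = np.full(grid.shape, theta0)
    return rng.multivariate_normal(mean, cov)
```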

Mixtures of Gaussian processes
For modeling random effects, use a hierarchical model: $Y_i(x) = \theta_i(x) + \epsilon_i$, $x \in D$, that is
$Y_i \mid \theta_i \stackrel{indep}{\sim} GP(\theta_i, \tau)$
$\theta_i \mid G \stackrel{i.i.d.}{\sim} G$
$G \sim \pi$, a random probability measure on $R^D$.
We have a mixture of Gaussian processes:
$Y_i \mid G \stackrel{i.i.d.}{\sim} \int GP(\theta, \tau) \, dG(\theta).$
The finite-dimensional distributions are mixtures of multivariate normals.

Data are observed at coordinates $(x_1, \ldots, x_m)$: $Y_i = (Y_i(x_1), \ldots, Y_i(x_m))$, and let $\theta_i = (\theta_i(x_1), \ldots, \theta_i(x_m))$. Then
$Y_i \mid \theta_i \stackrel{indep}{\sim} N_m(\theta_i, \tau^2 I_m)$
$\theta_i \mid G_{x_1,\ldots,x_m} \stackrel{i.i.d.}{\sim} G_{x_1,\ldots,x_m}$
$G_{x_1,\ldots,x_m} \sim \pi_{x_1,\ldots,x_m},$
so that
$Y_i \mid G_{x_1,\ldots,x_m} \stackrel{i.i.d.}{\sim} \int N_m(\theta, \tau^2 I_m) \, dG_{x_1,\ldots,x_m}(\theta).$
Dependent priors on the mixing distributions $G_{x_1,\ldots,x_m}$, consistently for any $m$ and $(x_1, \ldots, x_m)$:
* $\pi$: prior on $G$ (probability measure on $R^D$), the functional prior;
** $\pi_{x_1,\ldots,x_m}$: corresponding prior on the finite-dimensional d.f.'s $G_{x_1,\ldots,x_m}$ (on $R^m$).

Species sampling priors (Pitman, 1996)
$G = \sum_{j=1}^{k} p_j \delta_{\theta^*_j}, \quad k \leq \infty$ (proper s.s.p.),
where $\theta^*_1, \theta^*_2, \ldots$ i.i.d. $\sim G_0$, non-atomic, are the species in the population and $p_j$ is the proportion of species $j$. These priors describe the sampling of new species in an environment and are characterized by the predictive structure: $(\theta_i, i \geq 1)$ is exchangeable with de Finetti measure $G$ iff
$\theta_1 \sim G_0$
$\theta_{n+1} \mid \theta_1, \ldots, \theta_n, D_n = d \;\sim\; \sum_{j=1}^{d} p_j(N_n) \delta_{\theta^*_j} + p_{d+1}(N_n) G_0,$
where $D_n$ = number of distinct values (observed species) among $\theta_1, \ldots, \theta_n$, and $N_n = (N_{1,n}, N_{2,n}, \ldots, N_{D_n,n})$ = numbers of elements of $(\theta_1, \ldots, \theta_n)$ belonging to the first, second, ..., $D_n$-th species.

Random partitions
A partition of the data into clusters or species is described by the configuration of ties. A discrete prior on $G$ implies a probability measure on the random partition. The allocation of the data to the different clusters is determined by the predictive rule for the $\theta_i$'s. Note that, for functional data, the $\theta_i$ are random curves.

Examples: finite mixtures
Functional finite-dimensional DP: $G \sim fDP_k((\alpha_{1,k}, \ldots, \alpha_{k,k}), G_0)$,
$G = \sum_{j=1}^{k} p_j \delta_{\theta^*_j}, \quad k < \infty,$
where $\theta^*_1, \theta^*_2, \ldots$ i.i.d. $\sim G_0$, independent of $(p_1, \ldots, p_k) \sim Dir(\alpha_{1,k}, \ldots, \alpha_{k,k})$ (often, a symmetric Dirichlet $Dir(\alpha/k, \ldots, \alpha/k)$). This is the usual prior for finite mixture models, but note that here the $\theta^*_j$ are random curves, i.i.d. from $G_0$ on $R^D$. Then the
$G_{x_1,\ldots,x_m} = \sum_{j=1}^{k} p_j \delta_{(\theta^*_j(x_1), \ldots, \theta^*_j(x_m))},$
with the $\theta^*_j = (\theta^*_j(x_1), \ldots, \theta^*_j(x_m))$'s i.i.d. $\sim G_{0,x_1,\ldots,x_m}$, are dependent $DP_k$ processes.
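One realization of this finite-dimensional prior is easy to simulate; the sketch below (our illustration) draws symmetric Dirichlet weights and $k$ random curves. Here `g0_sampler(grid, rng)` is an assumed callable returning one curve from $G_0$, e.g. a wrapper around the GP sketch above.

```python
import numpy as np

def sample_fdpk(k, alpha, grid, g0_sampler, rng=None):
    """One realization of a functional DP_k with symmetric Dirichlet weights:
    (p_1,...,p_k) ~ Dir(alpha/k,...,alpha/k) and atoms theta*_j i.i.d. from G0."""
    rng = np.random.default_rng(rng)
    p = rng.dirichlet(np.full(k, alpha / k))                      # mixture weights
    atoms = np.array([g0_sampler(grid, rng) for _ in range(k)])   # k random curves
    return p, atoms
```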

Dirichlet process
$G \sim$ functional Dirichlet process $fDP(\alpha, G_0)$:
$G = \sum_{j=1}^{\infty} p_j \delta_{\theta^*_j},$
with $(p_j, j \geq 1) \sim GEM(\alpha)$ and $\theta^*_j$'s i.i.d. $\sim G_0$. Again, the usual DP, but here the support points are random curves. (The finite-dimensional $G_{x_1,\ldots,x_m}$ are DDP.) Under conditions, $DP_k \Rightarrow DP$.
Theorem (Ishwaran and Zarepour, 2002). If $\max(\alpha_1, \ldots, \alpha_k) \to 0$ and $\sum_{j=1}^{k} \alpha_j \to \alpha$, $0 < \alpha < \infty$, as $k \to \infty$, then $DP_k((\alpha_1, \ldots, \alpha_k), G_0)$ converges weakly to $DP(\alpha G_0)$.
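For the infinite case, the $GEM(\alpha)$ weights can be approximated by truncated stick-breaking; the short sketch below is the standard construction, with the truncation level K being our assumption.

```python
import numpy as np

def gem_weights(alpha, K, rng=None):
    """Stick-breaking weights (p_1,...,p_K) approximating GEM(alpha),
    truncated at K atoms (the last stick absorbs the remaining mass)."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
```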

Mixtures for functional data
$Y_i = (Y_i(x), x \in D)$, $i = 1, 2, \ldots$, random curves on $D \subseteq R^p$, observed on finite sets of coordinates. Heterogeneity among replications is described by a hierarchical model:
$Y_i \mid \theta_i, \tau \stackrel{indep}{\sim} GP(\theta_i, \tau)$
$\theta_i \mid G \stackrel{i.i.d.}{\sim} G$
$G \sim \pi.$
Allocation of the curves into clusters is determined by the species sampling of the individual parameters (curves) $\theta_i$. With a DP prior,
$\theta_1 \sim G_0$
$\theta_{n+1} \mid \theta_1, \ldots, \theta_n \;\sim\; \frac{\alpha}{\alpha + n} G_0 + \sum_{j=1}^{d} \frac{n_j}{\alpha + n} \delta_{\theta^*_j}.$
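This predictive rule can be simulated directly. The sketch below is our illustration of the DP Pólya urn, with `base_sampler` a hypothetical helper returning one draw of a curve from $G_0$; it makes explicit how previously seen species are reused with probability proportional to their counts.

```python
import numpy as np

def dp_polya_urn(n, alpha, base_sampler, rng=None):
    """Draw theta_1,...,theta_n from the DP Polya-urn predictive:
    a new species from G0 with prob alpha/(alpha+i), an old one with prob n_j/(alpha+i)."""
    rng = np.random.default_rng(rng)
    draws, species, counts = [], [], []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            species.append(base_sampler(rng))      # new species theta*_{d+1} ~ G0
            counts.append(1)
            draws.append(species[-1])
        else:
            j = rng.choice(len(species), p=np.array(counts) / i)
            counts[j] += 1
            draws.append(species[j])
    return draws, counts
```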

Random partitions
A discrete prior on $G$ implies a probability measure on the random partition. BUT for functional data, this might produce too many clusters! A new species is envisaged even if the curve differs from the previous ones only at some coordinates...

Toy example: simulated data
[Figure: image plots of the 48 simulated surfaces, panels t = 3, 6, ..., 48.]
A DP prior would suggest as many clusters as the sample size! It can give a good fit, but it somehow misses the clustering purpose of the model.

Dependent local partitions
The $DP_k$ and the DP imply global random partitions. In other words, the $DP_k$ or the DP model a global evolution of the curves. More general notions of species might be natural. In the brain images example, we think of sick or healthy brains, but only some portions of the brain are affected by the disease. We want to allow hybrid species, where portions of the curve might belong to one species and others to a different one. This gives rise to local clustering and dependent local random partitions.

Hybrid species sampling priors
We want to model local clustering, where a curve can have some portions belonging to one group and other parts belonging to a different group. Let $G_0$ be a random probability measure on $R^m$. Consider a sample $\theta^*_1, \ldots, \theta^*_k$ from $G_0$; these are interpreted as pure species. Then define a random d.f. $G$ as
$G = \sum_{j_1=1}^{k} \cdots \sum_{j_m=1}^{k} p(j_1, \ldots, j_m) \, \delta_{(\theta_{1,j_1}, \ldots, \theta_{m,j_m})},$
where $\theta_{l,j}$ denotes the $l$-th component of $\theta^*_j$. $G$ is discrete and selects hybrid species, obtained by mixing the components of the $\theta^*_j$ vectors, with random probabilities $p(j_1, \ldots, j_m)$.

Hybrid Dirichlet processes
Hybrid $DP_k$:
$G = \sum_{j_1=1}^{k} \cdots \sum_{j_m=1}^{k} p(j_1, \ldots, j_m) \, \delta_{(\theta_{1,j_1}, \ldots, \theta_{m,j_m})},$
where $k < \infty$ and $p$ is a random probability function on $\{1, \ldots, k\}^m$ with $p = (p(j_1, \ldots, j_m), \; j_i = 1, \ldots, k) \sim Dirichlet(\alpha q_k)$.
Hybrid DP: $k = \infty$ and $p$ is a random probability measure on $\{1, 2, \ldots\}^m$, $p \sim DP(\alpha q)$.
Functional $hDP_k$ and $hDP$: if $G$ is a random probability measure on $R^D$, define the functional hybrid $DP_k$ or DP analogously.
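The finite hybrid $DP_k$ can be simulated naively by enumerating all $k^m$ label vectors; the sketch below does exactly that, so it is feasible only for small $k$ and $m$, and the base label probabilities `q` must be strictly positive for numpy's Dirichlet sampler. It is an illustration of the definition, not the computational strategy used for the real examples.

```python
import numpy as np

def sample_hybrid_dpk(n, atoms, q, alpha, rng=None):
    """Sample n hybrid species from one draw of a hybrid DP_k.

    atoms: (k, m) array, rows are the pure species theta*_1,...,theta*_k.
    q:     array of shape (k,)*m with the base label probabilities q_k(j_1,...,j_m) > 0.
    p ~ Dirichlet(alpha * q) is the random probability on all k^m label vectors."""
    rng = np.random.default_rng(rng)
    k, m = atoms.shape
    p = rng.dirichlet(alpha * q.reshape(-1))                 # random hybrid-species weights
    idx = rng.choice(k ** m, size=n, p=p)                    # sample label vectors
    labels = np.array(np.unravel_index(idx, (k,) * m)).T     # (n, m) labels (j_1,...,j_m)
    hybrids = atoms[labels, np.arange(m)]                    # coordinate l comes from species j_l
    return hybrids, labels
```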

Functional hybrid DP
A probability measure $G$ on $R^D$ is characterized by the consistent collection of finite-dimensional distributions $\{G_{x_1,\ldots,x_m}\}$. A prior $\pi$ for a random probability measure on $R^D$ can be defined by saying how it selects a collection of finite-dimensional d.f.'s $\{G_{x_1,\ldots,x_m}\}$, consistently. A functional hDP prior selects $\theta^*_1, \theta^*_2, \ldots$ i.i.d. from $G_0$ on $R^D$ (pure species) and a probability measure $p$ on $\{1, 2, \ldots, k\}^D$, $k \leq \infty$, and forms
$G_{x_1,\ldots,x_m} = \sum_{j_1=1}^{k} \cdots \sum_{j_m=1}^{k} p_{x_1,\ldots,x_m}(j_1, \ldots, j_m) \, \delta_{(\theta^*_{j_1}(x_1), \ldots, \theta^*_{j_m}(x_m))}$
for any choice of $(x_1, \ldots, x_m)$. Note that the $\theta^*_j$'s are i.i.d. from $G_{0,x_1,\ldots,x_m}$. Since the $\{p_{x_1,\ldots,x_m}\}$ and $\{G_{0,x_1,\ldots,x_m}\}$ are consistent, one can easily check that the $\{G_{x_1,\ldots,x_m}\}$ are consistent and so define a probability measure $G$ on $R^D$.

Mixture models
For the mixture of Gaussian processes model, or its finite-dimensional version
$Y_i \mid G_{x_1,\ldots,x_m}, \tau \stackrel{i.i.d.}{\sim} \int N_m(\theta, \tau^2 I_m) \, dG_{x_1,\ldots,x_m}(\theta)$
$G_{x_1,\ldots,x_m} \sim \pi_{x_1,\ldots,x_m},$
we can compare a $DP_k$ or DP prior with hybrid DP priors. The different clustering behavior is better understood by reformulating the model in terms of hidden factors.

DP mixtures: constant hidden factor
The DP prior for the mixing distribution implies a constant hidden factor $Z$ with values in $\{1, 2, \ldots\}$, which acts globally on the entire curve, that is, $\theta_i = \theta^*_j$ if $Z_i = j$:
$Y_i \mid Z_i = j, \theta^*_1, \theta^*_2, \ldots, \tau^2 \sim N_m(\theta^*_j, \tau^2 I_m)$
$Z_1, \ldots, Z_n \mid p \stackrel{i.i.d.}{\sim} p$ on $\{1, 2, \ldots\}$, with $(p_1, p_2, \ldots) \sim GEM(\alpha)$
$\theta^*_j \stackrel{i.i.d.}{\sim} G_0.$

The constant latent factor implies a global allocation of the curves. For example, $Z_i = 3$ implies $\theta_i = \theta^*_3$ (in the figure, a yellow surface).

Hybrid DP mixtures: hidden process
The hybrid DP prior implies a hidden process $Z = (Z(x), x \in D)$ with values in $\{1, \ldots, k\}^D$, describing local random effects on each coordinate of the curve. Let $Z_i = (Z_i(x_1), \ldots, Z_i(x_m))$ be the values of the latent process at coordinates $(x_1, \ldots, x_m)$. $Z_i$ is a vector of local labels:
$\theta_i = \theta^*(Z_i) = (\theta^*_{j_1}(x_1), \ldots, \theta^*_{j_m}(x_m)) \quad \text{if } Z_i = (j_1, \ldots, j_m).$
A mixture model with mixing distribution
$G_{x_1,\ldots,x_m} = \sum_{j_1=1}^{k} \cdots \sum_{j_m=1}^{k} p_{x_1,\ldots,x_m}(j_1, \ldots, j_m) \, \delta_{(\theta^*_{j_1}(x_1), \ldots, \theta^*_{j_m}(x_m))}$
implies a vector of labels $Z_i$ with conditional distribution $p_{x_1,\ldots,x_m}$ on $\{1, \ldots, k\}^m$.

The hybrid $DP_k$ mixture can be written in terms of $Z(x)$ as
$Y_i \mid Z_i, \theta^*_1, \ldots, \theta^*_k, \tau^2 \sim N_m(\theta^*(Z_i), \tau^2 I_m),$
where $\theta^*(Z_i) = (\theta^*_{j_1}(x_1), \ldots, \theta^*_{j_m}(x_m))$ if $Z_i = (j_1, \ldots, j_m)$;
$Z_1, \ldots, Z_n \mid p_{x_1,\ldots,x_m} \stackrel{i.i.d.}{\sim} p_{x_1,\ldots,x_m},$
where $p_{x_1,\ldots,x_m}$ is the finite-dimensional probability function of the conditional law $p$ of the process $Z$;
$p_{x_1,\ldots,x_m} \sim Dirichlet(\alpha_k q_{x_1,\ldots,x_m}),$
and the $\theta^*_j$'s are i.i.d. $\sim G_{0,x_1,\ldots,x_m}$.
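In this hidden-label form the model is easy to simulate forward. The sketch below (our illustration) generates noisy curves by picking, at every coordinate, the pure species indicated by the local label vector $Z_i$; `label_sampler` is a hypothetical helper, for instance a discretized-copula sampler as described later.

```python
import numpy as np

def simulate_hybrid_mixture(n, pure_curves, label_sampler, tau, rng=None):
    """Forward simulation of the hidden-process representation:
    Y_i(x_l) = theta*_{Z_i(x_l)}(x_l) + eps_i(x_l),  eps ~ N(0, tau^2).

    pure_curves: (k, m) array of pure species on the grid.
    label_sampler(rng): returns one label vector Z_i in {0,...,k-1}^m (assumed helper)."""
    rng = np.random.default_rng(rng)
    k, m = pure_curves.shape
    Y = np.empty((n, m))
    for i in range(n):
        z = label_sampler(rng)                                        # local labels Z_i
        Y[i] = pure_curves[z, np.arange(m)] + rng.normal(0.0, tau, size=m)
    return Y
```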

The hidden process Z allows local selection

Examples
Let $m = 2$, and fix coordinates $(x_1, x_2)$. Then
$Pr(Z(x_1) = i, Z(x_2) = j) = E(Pr(Z(x_1) = i, Z(x_2) = j \mid p)) = q(i, j), \quad i, j = 1, \ldots, k.$
1. Constant latent factor. If $q(i, j) = 1/k$ when $i = j$ and zero otherwise, the latent process $Z$ is constant. The $hfDP_k$ reduces to an $fDP_k$, which implies a global effect of the latent factor.
2. Independent local effects. If $q(i, j) = 1/k^2$, the latent factor acts independently at each coordinate of the curve.
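For $m = 2$ these two extreme choices of $q$ are just $k \times k$ matrices; the tiny sketch below writes them out (our illustration).

```python
import numpy as np

def q_constant(k):
    """Case 1: q(i, j) = 1/k on the diagonal, 0 elsewhere -> constant latent factor."""
    return np.eye(k) / k

def q_independent(k):
    """Case 2: q(i, j) = 1/k^2 -> labels chosen independently at the two coordinates."""
    return np.full((k, k), 1.0 / k ** 2)
```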

Ex: general dependence, uniform marginals
In finite mixtures, one often uses a symmetric $Dirichlet(\alpha/k, \ldots, \alpha/k)$. In our context, this means that the probability measure $q$ on $\{1, \ldots, k\}^D$ must have uniform marginals.
3. Discrete process with uniform marginals. A general process $Z$ with values in $\{1, \ldots, k\}^D$ and uniform marginals can be obtained by considering the copula of a continuous stochastic process and then discretizing it.

Example: copula specification
Let $H_{x_1,\ldots,x_m}$ be the finite-dimensional distribution of a process with values in $[0, 1]^D$, e.g. the copula of a Gaussian process. Take $m = 2$ here. Partition $[0, 1]^2$ into subsquares
$C_{k,j_1,j_2} = \left( \frac{j_1 - 1}{k}, \frac{j_1}{k} \right] \times \left( \frac{j_2 - 1}{k}, \frac{j_2}{k} \right], \quad j_i = 1, \ldots, k, \; i = 1, 2.$
Then let
$q_{x_1,x_2}(j_1, j_2) = H_{x_1,x_2}(C_{k,j_1,j_2}), \quad j_i = 1, \ldots, k, \; i = 1, 2.$
The marginals of $q_k$ are uniform on $\{1, \ldots, k\}$ and the dependence structure of $H$ is reflected in the joint distribution $q$.
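Simulating label vectors from this construction is direct: draw from the underlying Gaussian, map each coordinate through its marginal cdf to obtain the copula (uniform marginals), and discretize into $k$ bins. The sketch below assumes a Gaussian copula with a given correlation matrix; the function name and arguments are ours.

```python
import numpy as np
from scipy.stats import norm

def copula_labels(n, k, corr, rng=None):
    """n label vectors on {1,...,k}^m with uniform marginals, obtained by
    discretizing the copula of an m-dimensional Gaussian with correlation matrix `corr`."""
    rng = np.random.default_rng(rng)
    m = corr.shape[0]
    z = rng.multivariate_normal(np.zeros(m), corr, size=n)   # latent Gaussian values
    u = norm.cdf(z)                                          # copula: uniform marginals on (0,1)
    return np.minimum(np.floor(u * k).astype(int) + 1, k)    # bin into labels 1,...,k
```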

Some properties: limits of finite mixtures
In some applications, finite mixtures are more appropriate; the number of components might be known or be part of the decision problem (e.g., in multiple testing, one envisages $k = 2$ groups). The common prior in finite mixture models is a $DP_k$. However, the behavior of the model for moderate or large $k$ can be quite different from that shown for small $k$ (Ishwaran and Zarepour, 2002); in fact, it depends on the limiting behavior of the $DP_k$. Under conditions, $DP_k \Rightarrow DP$. What about the limiting behavior of the hybrid $DP_k$?

Weak limit of the $DP_k$
Theorem (Ishwaran and Zarepour, 2002). If $\max(\alpha_1, \ldots, \alpha_k) \to 0$ and $\sum_{j=1}^{k} \alpha_j \to \alpha$, $0 < \alpha < \infty$, as $k \to \infty$, then $DP_k((\alpha_1, \ldots, \alpha_k), G_0)$ converges weakly to $DP(\alpha G_0)$.
The idea of the proof is as follows. Under the stated conditions, the ordered Dirichlet weights converge in distribution (Kingman, 1975):
$(p_{(1)}, \ldots, p_{(k)}) \stackrel{d}{\to} (p_1, p_2, \ldots) \sim$ Poisson-Dirichlet.
Since the support points $\theta^*_j$ are i.i.d. $\sim G_0$,
$G = \sum_{j=1}^{k} p_j \delta_{\theta^*_j} \stackrel{d}{=} \sum_{i=1}^{k} p_{(i)} \delta_{\theta^*_i},$
thus $G$ converges in distribution to $G = \sum_{j=1}^{\infty} p_j \delta_{\theta^*_j}$, which is a $DP(\alpha G_0)$ (Ferguson 1973, Pitman, ...).

Weak limit of the hybrid $DP_k$
We have to take a different approach.
1. $G \sim hDP_k(\alpha_k q_k, G_0)$ is distributed as a mixture of DPs,
$\int DP(\alpha_k Q_k) \, d\mu_k(Q_k),$
where $Q_k$ is a weighted empirical d.f. of the $\theta^*_j$ i.i.d. $\sim G_0$:
$Q_k = Q_k(\theta^*_1, \theta^*_2, \ldots) = \sum_{j_1=1}^{k} \cdots \sum_{j_m=1}^{k} q_k(j_1, \ldots, j_m) \, \delta_{(\theta_{1,j_1}, \ldots, \theta_{m,j_m})}.$
Analogously for the hDP prior.
2. Suppose $\alpha_k \to \alpha$. The limit of the $hDP_k$ depends on the limit of $Q_k$. If $Q_k \stackrel{d}{\to} Q$, a non-random probability measure on $R^m$, then $hDP_k(\alpha_k q_k, G_0) \Rightarrow DP(\alpha Q)$. If $Q$ is random, with probability law $\mu$, then $hDP_k(\alpha_k q_k, G_0) \Rightarrow \int DP(\alpha Q) \, d\mu(Q)$.

Examples
1. $DP_k$. If $q_k(j_1, \ldots, j_m) = q_{j,k}$ when $j_1 = \cdots = j_m = j$, $j = 1, \ldots, k$, and $= 0$ otherwise, the $hDP_k$ reduces to a $DP_k$. $Q_k$ is a weighted empirical d.f. of the $\theta^*_j$'s, $j = 1, \ldots, k$. If $\alpha_k \to \alpha$, $0 < \alpha < \infty$, and $\max(q_{k,1}, \ldots, q_{k,k}) \to 0$, then $Q_k \stackrel{d}{\to} G_0$. Thus, $DP_k \Rightarrow DP(\alpha G_0)$.

Examples
2. Independence case. If
$q_k(j_1, \ldots, j_m) = \frac{1}{k^m}, \quad j_1, \ldots, j_m = 1, \ldots, k,$
then
$Q_k = \sum_{j_1=1}^{k} \cdots \sum_{j_m=1}^{k} \frac{1}{k^m} \, \delta_{(\theta_{1,j_1}, \ldots, \theta_{m,j_m})}$
is the product of the marginal empirical distributions of $\theta^*_1, \ldots, \theta^*_k$, and it converges in distribution to $\prod_{i=1}^{m} G_{0,i}$. Then $hDP_k(\alpha_k q_k, G_0) \Rightarrow DP(\alpha \prod_{i=1}^{m} G_{0,i})$.
If $q_k$ is a mixture of cases 1 and 2, then $hDP_k \Rightarrow DP(\alpha (a G_0 + b \prod_{i=1}^{m} G_{0,i}))$, a DP with a mixture base measure.

4. hDP limit. If the sequence of probability measures $q_k$ converges in total variation to a measure $q$ on $\{1, 2, \ldots\}^m$ as $k \to \infty$, then $Q_k$ converges in total variation to the random probability measure
$Q = \sum_{j_1=1}^{\infty} \cdots \sum_{j_m=1}^{\infty} q(j_1, \ldots, j_m) \, \delta_{(\theta_{1,j_1}, \ldots, \theta_{m,j_m})}.$
Then the $hDP_k(\alpha_k q_k, G_0)$ converges to the mixture of Dirichlet processes $\int DP(\alpha Q) \, d\mu(Q)$, which is an $hDP(\alpha q, G_0)$.

Examples: spatial data
The data are observations of different surfaces on a (regular) grid of coordinates: $Y_i = (Y_i(x_1), \ldots, Y_i(x_m))$, $i = 1, \ldots, n$. A model for spatial data is
$Y_1, \ldots, Y_n \mid \mu, \theta_1, \ldots, \theta_n, \tau^2 \sim \prod_{i=1}^{n} N_m(y_i \mid \mu + \theta_i, \tau^2 I_m)$
$\theta_1, \ldots, \theta_n \mid G_{x_1,\ldots,x_m} \stackrel{i.i.d.}{\sim} G_{x_1,\ldots,x_m}$, where $G$ expresses the spatial dependence,
$G_{x_1,\ldots,x_m} \sim \pi_{x_1,\ldots,x_m}.$

We compare two choices for $\pi_{x_1,\ldots,x_m}$:
(a) a $DP_k((\alpha_1, \ldots, \alpha_k), G_{0,x_1,\ldots,x_m})$, where $G_{0,x_1,\ldots,x_m} = N_m(0, \sigma^2 R_m(\phi))$, the correlation matrix $R_m(\phi)$ having element $(i, j) = \exp(-\phi \|x_i - x_j\|)$;
(b) an $hDP_k(\alpha_k q, G_{0,x_1,\ldots,x_m})$, where $q$ is assigned via the copula construction, with $H$ being the copula of an $N_m(0, \gamma^2 R_m(\phi'))$.
We assign a prior on the hyperparameters $(\mu, \tau^2, \sigma^2, \phi, \phi')$ while we fix $\alpha = 1$ and $\gamma = 1$.

Example: simulated data
The simulated data are a toy version of the brain MRI data. We generated $n = 48$ surfaces on a regular grid of $14 \times 19$ points, as noisy realizations of hybrids of two base processes ($\theta^*_j \sim N_m(\mu_j, \sigma^2 R(\phi))$, $\mu_1 = 60$, $\mu_2 = 120$, $\sigma = 3$ and $\phi = 0.5$). Aim: compare the global clustering obtained by the mixture model with a $DP_k$ prior and the local clustering provided by the hybrid $DP_k$ prior.
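A rough way to generate data of this kind is sketched below. It is our own illustration: the actual hybridization pattern and noise level used for the talk's figures are not specified, so we split the grid into two halves and add N(0, 1) noise purely for illustration.

```python
import numpy as np

def simulate_toy_surfaces(n=48, shape=(14, 19), mu=(60.0, 120.0),
                          sigma=3.0, phi=0.5, tau=1.0, seed=0):
    """Noisy hybrids of two base surfaces theta*_j ~ N_m(mu_j, sigma^2 R(phi))
    on a regular grid; hybridization rule and noise level tau are assumptions."""
    rng = np.random.default_rng(seed)
    rows, cols = shape
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    R = np.exp(-phi * np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1))
    base = np.array([rng.multivariate_normal(np.full(rows * cols, m), sigma ** 2 * R)
                     for m in mu])                            # the two pure surfaces
    left = (coords[:, 1] < cols / 2).astype(int)              # assumed hybrid pattern
    Y = np.empty((n, rows * cols))
    for i in range(n):
        labels = left if i % 2 == 0 else 1 - left             # alternate the two hybrids
        Y[i] = base[labels, np.arange(rows * cols)] + rng.normal(0.0, tau, rows * cols)
    return Y.reshape(n, rows, cols)
```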

Simulated data
[Figure: image plots of the simulated surfaces, panels t = 3, 6, ..., 48.]

Simulated data
To give a good fit, the $DP_k$ and the DP need $k \approx n$. The hybrid $DP_k$ gives a good fit for $k = 2$.
Figure: Image plots of the simulated data (first row) and the posterior predictive means for the $DP_k$ and the $hDP_k$, with $k = 2$ and $k = 10$, for some replicates $i = 3, 13, 23, 33$.

The $hDP_k$ mixture model provides dependent local partitions of individuals at each location $x$. How can the posterior distribution on the random partitions be described efficiently? For a fixed location $x$, through:
- the number $d_n(x)$ of species at location $x$, and the size of the groups;
- the values $\theta^*_j(x)$ which characterize the observed species;
- the composition of the groups.

Posterior for $d_n(x)$
Figure: Posterior distribution of the number of clusters at two locations, $x_{213}$ and $x_{45}$, under an $hDP_k$ ($k = 10$).

Posterior for the group parameters
Figure: Boxplots of the ordered values of $\theta^*(x_{45})$ corresponding to $d(x_{45}) = 4$, for the $hDP_k$ and $DP_k$ ($k = 10$).

Example: brain images.

Finite mixtures (k = 2)
Figure: Image plots of observations and posterior predictive means for the $DP_k$ and $hDP_k$ ($k = 2$) for some individuals ($i = 2, 6, 12, 17$).

Multiple comparison
For multiple comparison, one might consider posterior probability maps (see Müller et al. (2004, 2007) for more formal Bayesian decision rules).
Figure: Posterior probability maps of pixel impairment for two individuals ($t = 2, 17$); we consider a pixel $s$ impaired if $p(\theta_t(s) = \min_k(\theta^*_k(s)) \mid \text{data}) > 0.7$.
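Given MCMC output from the hybrid model, such a map can be computed by checking, draw by draw, whether the species value assigned to individual $t$ at pixel $s$ equals the minimum species value at $s$. The sketch below assumes arrays of posterior draws of the pure-species curves and of the local labels; the names and shapes are ours, not the paper's.

```python
import numpy as np

def impairment_map(theta_star, labels, threshold=0.7):
    """Posterior probability map of pixel impairment.

    theta_star: (S, k, m) posterior draws of the pure-species values theta*_j(x).
    labels:     (S, n, m) integer draws of the local labels Z_i(x) in {0,...,k-1}.
    Pixel (i, x) is flagged when p(theta*_{Z_i(x)}(x) = min_j theta*_j(x) | data) > threshold."""
    assigned = np.take_along_axis(theta_star, labels, axis=1)    # (S, n, m) individual values
    is_min = assigned == theta_star.min(axis=1, keepdims=True)   # equals the minimum species?
    return is_min.mean(axis=0) > threshold                       # (n, m) boolean map
```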

Finite mixtures (k = 10)
Figure: Posterior distribution of the number of clusters at two locations, $x_{122}$ and $x_{220}$, under an $hDP_k$ ($k = 10$).

Finite mixtures (k = 10)
Figure: Boxplots of the ordered values of $\theta^*(x_{220})$ corresponding to $d(x_{220}) = 6$, for the $hDP_k$ and $DP_k$ ($k = 10$).

Summary
We have studied a class of hybrid species priors for Bayesian inference with functional data with local random effects. DP mixtures only describe global random effects, while for multivariate or functional data new issues of dependent local partitions arise.
Notions/applications in other areas:
* Hidden processes (Green and Richardson (2001), Fernandez and Green (2002), ...)
* Population genetics: model partial or local evolution for arrays of genes or functions? (Ruggiero, Spano, ...)
* Multiple testing
* ...
Computations are challenging: Monte Carlo methods are needed for a complex, high-dimensional state space.