Bayesian Mixture Modeling of Significant P Values: A Meta-Analytic Method to Estimate the Degree of Contamination from $H_0$: Supplemental Material

Quentin Frederik Gronau¹, Monique Duizer¹, Marjan Bakker², & Eric-Jan Wagenmakers¹
¹University of Amsterdam
²Tilburg University

The Dirichlet Process Mixture for P Values from $H_1$

In this section we explain how the Dirichlet process can be used to model the distribution of p values under the alternative hypothesis ($H_1$). We exploit the fact that the Dirichlet process can be used as a prior on discrete probability distributions; that is, every draw from a Dirichlet process yields a discrete probability distribution over a countably infinite number of values. Although the Dirichlet process yields discrete probability distributions over a countably infinite number of values, most of these values will have probability mass close to zero. Where the probability mass is located is determined by a base distribution $G_0$; how many values have probability mass close to zero is controlled by a precision parameter $\alpha$. A common application of the Dirichlet process is to use it as a prior on the parameters of an infinite mixture model (e.g., Navarro, Griffiths, Steyvers, & Lee, 2006), where we observe data and assume that some of the observed data points will be similar to each other, without placing an upper bound on the number of groups. Instead, we expect the number of groups to grow as we observe more data, which is exactly the behavior we expect p values from $H_1$ to show.

We assume that every probit-transformed significant p value from $H_1$, i.e., $\Phi^{-1}(p_i^{H_1})$, is drawn from a truncated normal distribution $N(\theta_i)\,T(, -1.64485)$, where the parameter vector $\theta_i$ consists of a mean and a variance component. The normal distribution is truncated at $\Phi^{-1}(.05) = -1.64485$, since we only consider significant p values. The parameter vector $\theta_i$ for every probit-transformed p value is drawn from a discrete probability mass distribution $G(\cdot)$. This probability mass distribution is itself a draw from a Dirichlet process with concentration parameter $\alpha$ and base distribution $G_0$; that is, the probability distribution $G(\cdot)$ is not fixed, but we place a Dirichlet process prior on this distribution. Since every draw from the Dirichlet process prior yields a discrete probability mass distribution $G(\cdot)$, several p values from $H_1$ can have identical parameter vectors $\theta_i$, which naturally leads to a clustering (one cluster consists of all p values that share the same parameter vector).
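To make the probit transformation and the truncation point concrete, the following R lines (a minimal sketch; the vector pvalues is a hypothetical example, not part of the original material) transform significant p values and verify that all transformed values lie below $\Phi^{-1}(.05)$:

    # Probit-transform a few significant (p < .05) p values
    pvalues <- c(0.001, 0.012, 0.034, 0.049)   # hypothetical example data
    qp <- qnorm(pvalues)                       # Phi^{-1}(p)
    truncPoint <- qnorm(0.05)                  # -1.644854, the truncation point
    all(qp < truncPoint)                       # TRUE: significant p values map below it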

The degree of clustering is determined by the precision parameter $\alpha$. The model can be expressed as follows:

\[
\begin{aligned}
\Phi^{-1}(p_i^{H_1}) \mid \theta_i &\sim N(\theta_i)\,T(, -1.64485) \\
\theta_i \mid G &\sim G(\cdot) \\
G \mid G_0, \alpha &\sim \mathrm{DP}(G_0, \alpha).
\end{aligned}
\]

Since $\theta_i$ is a two-dimensional vector, $G_0$ and $G(\cdot)$ are two-dimensional distributions. We assume that $G_0$ can be factorized into independent components for the mean and the (inverse) variance: for the mean, we assume a truncated standard normal distribution $N(0, 1)\,T(-6, 0)$, and for the inverse variance, we use a truncated gamma distribution $\mathrm{Gamma}(0.01, 0.01)\,T(, 1)$. The normal distribution is truncated at zero because right-skewed distributions on the interval $[0, 1]$ map onto normal distributions with mean smaller than zero when the values are probit-transformed. The lower truncation at $-6$ is imposed for practical reasons, because smaller values are very unlikely given the range of plausible observed effect sizes. The truncation of the gamma distribution at one implies that the variance of the probit-transformed p value distributions under $H_1$ cannot be smaller than the variance of the standard normal distribution corresponding to $H_0$. In the next section we explain how the Dirichlet process prior can be implemented in JAGS (Plummer, 2003).

The Dirichlet Process Prior: Stick-Breaking Construction

The Dirichlet process can be represented in different ways (e.g., Blackwell & MacQueen, 1973); we focus on the so-called stick-breaking construction (Sethuraman, 1994), which has the advantage that it allows the Dirichlet process to be implemented in JAGS (Plummer, 2003) and WinBUGS (Spiegelhalter, Thomas, Best, & Lunn, 2003). Examples are provided in Section 6.7 of Congdon (2006), in Ohlssen, Sharples, & Spiegelhalter (2007), in Section 11.8 of Lunn, Jackson, Best, Thomas, & Spiegelhalter (2012), and in Müller, Quintana, Jara, & Hanson (2015).

The stick-breaking construction exploits the fact that sampling from the Dirichlet process prior can be decomposed into two parts: one part corresponds to generating the weights (i.e., the probability mass values of the discrete probability mass distribution), the other to generating the associated locations of the probability mass. The stick-breaking construction is illustrated in Figure 1.

Figure 1. Illustration of the stick-breaking construction of the Dirichlet process prior.

We start by sampling a value from the base distribution $G_0$. To facilitate the explanation, we will first assume that $G_0$ is a one-dimensional distribution, for example a standard normal distribution; later, we explain how to think of the stick-breaking construction when the base distribution is two-dimensional, as in our application.

After sampling a value from the base distribution $G_0$, we need to determine a weight (i.e., the probability mass) for this value. To accomplish this, imagine a stick of length one, which represents the total probability that can be assigned to the countably infinite number of values. We sample a value from the beta distribution $q_i \sim \mathrm{Beta}(1, \alpha)$, break the stick at this value, and set the weight $\pi_1$ (i.e., the weight for the first value from $G_0$) equal to the length of the part that was broken off, i.e., $\pi_1 = q_1$. A more direct visual interpretation is that we take the broken-off part of the stick, orient it vertically, and position it on the value sampled from the base distribution $G_0$.

Subsequently, we sample another value from $G_0$ and generate another $q$. We take the remaining part of the stick, which has length $(1 - q_1)$, break off the proportion $q_2$, and thereby obtain the weight for the second component (i.e., the length of the part that was broken off), which is $\pi_2 = q_2 (1 - q_1)$. Again, this can be interpreted as vertically positioning the broken-off part of the stick on the location of the value sampled from the base distribution $G_0$. Continuing in this manner yields pairs of values and corresponding weights which, when the procedure is repeated infinitely often, constitute a probability mass distribution over a countably infinite number of values. The weight of the $j$-th component is

\[
\pi_j = q_j \prod_{k=1}^{j-1} (1 - q_k).
\]

Since we started with a stick of length one, the lengths of the broken-off parts (i.e., the weights) add up to one, yielding a proper probability mass distribution. The obtained probability mass distribution corresponds to one draw from the Dirichlet process prior; another draw can be obtained by starting again with a stick of length one and repeating the described procedure. Note that this will yield a probability mass distribution that differs from the previous one.
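The following R sketch illustrates one draw from a truncated stick-breaking construction, using for concreteness the two-dimensional base distribution $G_0$ of our model (truncated normal for the mean, truncated gamma for the inverse variance), which is discussed further below. The function names, the choice $\alpha = 1$, and the rejection sampler for the truncated $G_0$ are illustrative assumptions:

    # Draw truncated-distribution samples by simple rejection sampling
    rtruncated <- function(n, rdist, lower = -Inf, upper = Inf) {
      x <- rdist(n)
      while (any(bad <- x < lower | x > upper)) x[bad] <- rdist(sum(bad))
      x
    }

    # One draw from the (truncated) stick-breaking construction of the DP prior
    stickBreak <- function(nMax, alpha) {
      q <- rbeta(nMax - 1, 1, alpha)        # proportions of the remaining stick
      w <- c(q, 1) * cumprod(c(1, 1 - q))   # pi_j = q_j * prod_{k<j} (1 - q_k);
                                            # last weight = remaining stick length
      mu     <- rtruncated(nMax, rnorm, lower = -6, upper = 0)     # mean locations
      lambda <- rtruncated(nMax, function(n) rgamma(n, 0.01, 0.01),
                           upper = 1)                              # inverse variances
      list(weights = w, mu = mu, lambda = lambda)
    }

    draw <- stickBreak(nMax = 15, alpha = 1)
    sum(draw$weights)   # 1: the weights form a proper probability mass distribution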

In our model, we use the Dirichlet process as a prior on the components of an infinite normal mixture model, also known as a Dirichlet process mixture (e.g., Navarro et al., 2006), to model the distribution of p values under the alternative hypothesis ($H_1$). The weights (i.e., the lengths of the different stick parts) correspond to the mixing weights of the different components of the mixture model; the values associated with the weights (i.e., the locations of the vertically oriented stick parts) correspond to the parameters of the different normal distribution components. Since every normal component of the mixture is characterized by a parameter vector consisting of a mean and a variance component, a more accurate way to think of the stick-breaking process for our model is to imagine placing the stick parts on a plane rather than on a line, where one dimension of the plane corresponds to the mean component and the other to the (inverse) variance component. As mentioned in the previous section, we assume that the mean and the (inverse) variance components are independent of each other: for the mean, we assume a truncated standard normal distribution $N(0, 1)\,T(-6, 0)$, and for the inverse variance, we use a truncated gamma distribution $\mathrm{Gamma}(0.01, 0.01)\,T(, 1)$.

The stick-breaking construction illustrates that as the number of components increases, the weights decrease stochastically, with the rate of this decrease determined by the precision parameter $\alpha$. Smaller values of $\alpha$ lead to larger parts being broken off at every step; that is, only a few components of the normal mixture will have substantial mixing weights, so it is likely that only a few components will be used for a given data set (i.e., a small number of clusters). In contrast, larger values of $\alpha$ lead to fewer values with mass close to zero; that is, a larger number of components will have substantial mixing weights, which will likely lead to more components being used for a given data set (i.e., a larger number of clusters). As Antoniak (1974) pointed out, rather than pre-specifying $\alpha$, one can place a hyperprior on $\alpha$, which allows the concentration of the Dirichlet process to be learned from the data. For our model, we used a uniform prior over a plausible range of values, i.e., $\alpha \sim \mathrm{Unif}(0.5, 7)$.

For practical purposes, the stick-breaking process is truncated (note that the maximum possible number of clusters for a given data set is equal to the number of data points). This means that we stop breaking the stick at some point and set the last component equal to the length of the remaining stick, which ensures a proper probability mass distribution.

JAGS Model Code

In order to use the model code below, the following data need to be passed to JAGS: qp, the probit-transformed p values (in R, these can be obtained via qnorm(pvalues)); n, the number of observed p values; and nMaxClusters, the maximal number of clusters for the Dirichlet process mixture representation of $H_1$, which determines where the stick-breaking process is truncated. A natural upper bound is given by the number of observations; in most cases, however, a much smaller number will suffice. We used nMaxClusters between 10 and 20.
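For example, assuming the significant p values are stored in an R vector pvalues and the model code below is saved to a file named dpMixture.txt (both names, like the MCMC settings, are illustrative assumptions rather than part of the original material), the model could be run from R with the rjags package roughly as follows:

    # Sketch of a driver script for the JAGS model below
    library(rjags)

    dataList <- list(
      qp           = qnorm(pvalues),   # probit-transformed significant p values
      n            = length(pvalues),
      nMaxClusters = 15                # truncation point of the stick-breaking process
    )

    jagsModel <- jags.model("dpMixture.txt", data = dataList, n.chains = 3)
    update(jagsModel, n.iter = 2000)   # burn-in
    samples <- coda.samples(jagsModel,
                            variable.names = c("phi", "alpha", "predQP"),
                            n.iter = 6000)
    summary(samples)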

P VALUE CONTAMINATION 6 model { ### Likelihood for (i in 1:n) { ind[i] ~ dbern(phi) # phi is the H0 assignment rate z[i] ~ dcat(proportionsstick[]) # indicator for cluster of H1 # add one so that the indicator for the cluster of H1 # starts with 2 (1 is needed to indicate H0) z1[i] <- z[i] + 1 # if assigned to H0 z2 = 1, else if assigned to H1, # cluster z2 is equal to z1 \in [2,3,..., nmaxclusters+1] z2[i] <- ifelse(ind[i] == 1, 1, z1[i]) qp[i] ~ dnorm(mu[z2[i]], lambda[z2[i]])t(,-1.64485) } ### Posterior predictive predind ~ dbern(phi) predz ~ dcat(proportionsstick[]) predz1 <- predz + 1 predz2 <- ifelse(predind == 1, 1, predz1) predqp ~ dnorm(mu[predz2], lambda[predz2])t(,-1.64485) ### Stick-breaking process (truncated Dirichlet process) # generate proportions to break the remaining stick for (j in 1:(nMaxClusters-1)) { # proportions of the remaining stick parts proportionsremainingstick[j] ~ dbeta(1, alpha) } # Break the stick (translate the proportions to the proportions with regard # to the original stick, length original stick = 1) proportionsstick[1] <- proportionsremainingstick[1] for (j in 2:(nMaxClusters-1)) { # calculate proportions of the original stick proportionsstick[j] <- proportionsremainingstick[j] * prod(1 - proportionsremainingstick[1:(j-1)]) } sumproportionsstick <- sum(proportionsstick[1:(nmaxclusters-1)]) # make sure that proportions sum to 1 proportionsstick[nmaxclusters] <- 1 - sumproportionsstick ### Priors phi ~ dbeta(1,1)

  for (j in 1:nMaxClusters) {
    muH1[j] ~ dnorm(0, 1) T(-6, 0)
    lambdaH1[j] ~ dgamma(0.01, 0.01) T(, 1)
  }
  mu[1] <- 0
  mu[2:(nMaxClusters + 1)] <- muH1
  lambda[1] <- 1
  lambda[2:(nMaxClusters + 1)] <- lambdaH1

  # Prior for the precision of the Dirichlet process; larger values lead to
  # more clusters, smaller values lead to fewer distinct clusters
  alpha ~ dunif(0.5, 7)
}

Simulation Results

In order to test our model, we generated three data sets from independent-samples t-tests, each consisting of 5,000 significant p values. Each t-test p value was obtained by sampling two groups from normal distributions with standard deviations equal to one and with means corresponding to a specified effect size; each group consisted of 250 participants. If a p value was not significant, or if the observed effect was in the opposite direction, it was discarded; two new groups were then sampled and a new p value was obtained. In this manner, we generated one data set with 50% null hypothesis ($H_0$) contamination (i.e., 2,500 of the p values were generated from $H_0$) for each of the effect sizes 0.15, 0.30, and 0.45.
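This data-generating procedure can be sketched in R as follows (the function name and the seed are our own illustrative choices, not the original simulation script):

    # Generate one significant t-test p value with the effect in the expected
    # direction, for effect size d (d = 0 corresponds to H0)
    samplePValue <- function(d, nPerGroup = 250) {
      repeat {
        g1 <- rnorm(nPerGroup, mean = 0, sd = 1)
        g2 <- rnorm(nPerGroup, mean = d, sd = 1)
        test <- t.test(g2, g1, var.equal = TRUE)
        # keep only significant p values with the effect in the expected direction
        if (test$p.value < 0.05 && mean(g2) > mean(g1)) return(test$p.value)
      }
    }

    # 5,000 p values with 50% H0 contamination, here for effect size 0.30
    set.seed(1)
    pvalues <- c(replicate(2500, samplePValue(0)),     # p values from H0
                 replicate(2500, samplePValue(0.30)))  # p values from H1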

For effect sizes 0.30 and 0.45, we ran three chains of 8,000 iterations each, of which the first 2,000 iterations were discarded as burn-in. Since the chains mixed poorly for effect size 0.15, we instead ran three chains of 24,000 iterations, discarded the first 6,000 as burn-in, and retained only every third sample (i.e., a thinning factor of 3). This improved mixing and reduced the autocorrelation, although the chains still do not look optimal. In this way, we obtained 6,000 effective samples for each of the three simulated effect sizes. The chains of the $H_0$ assignment rate $\phi$ for the different effect sizes are displayed in Figure 2.

Figure 2. Chains for the $H_0$ assignment rate $\phi$. The left panel corresponds to effect size 0.15, the middle panel to effect size 0.30, and the right panel to effect size 0.45.

Figure 3 summarizes the results of the simulation study. Each column corresponds to the results for one data set: the first column shows the results for effect size 0.15, the second column for effect size 0.30, and the third column for effect size 0.45. The first row displays the posterior distributions of the $H_0$ assignment rate (i.e., $\phi$): for effect sizes 0.30 and 0.45, the model adequately recovers the true $H_0$ contamination rate of 0.5. For effect size 0.15, the $H_0$ assignment rate appears to be underestimated, as most of the posterior mass lies below 0.5; however, the uncertainty is relatively large and, as mentioned above, the chains do not look optimal. Intuitively, this is to be expected: as the effect size becomes smaller, it becomes more difficult to estimate the $H_0$ assignment rate adequately, since the mixture component corresponding to $H_0$ and the one corresponding to $H_1$ become more similar and hence harder to discriminate. It is arguably undesirable that the $H_0$ assignment rate is not recovered adequately for a small effect size of 0.15; however, if the choice is between under- and overestimating the $H_0$ contamination, we believe it is better to be conservative.

The second row of Figure 3 shows the $H_0$ assignment probabilities for the individual p values. P values originating from $H_0$ are colored red, p values from $H_1$ are colored grey. As the effect size increases, the model discriminates between p values from $H_0$ and $H_1$ more and more accurately.

The third row of Figure 3 displays Q-Q plots that compare the distribution of the observed p values to the distribution of the generated posterior predictive p values; these plots give an impression of how well the model accounts for the observed data patterns. If the two distributions were identical, the Q-Q plots would be straight lines with slope equal to one. The Q-Q plots suggest that the model is able to accurately describe all three data sets.
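Such a Q-Q comparison is straightforward to produce in R; below is a minimal sketch, assuming the observed p values are stored in pvalues and the posterior predictive draws of predQP have been collected in predQPdraws (the object names are hypothetical):

    # Back-transform the posterior predictive probit values to the p value scale
    predP <- pnorm(predQPdraws)   # predQPdraws: MCMC draws of predQP

    # Compare observed and posterior predictive quantiles
    probs <- seq(0.01, 0.99, by = 0.01)
    plot(quantile(pvalues, probs), quantile(predP, probs),
         xlab = "Observed p value quantiles", ylab = "Predicted quantiles")
    abline(0, 1)   # identity line; points near it indicate matching distributions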

Figure 3. Simulation results. The first column corresponds to a set of 5,000 p values generated from independent-samples t-tests with effect size 0.15, the second column to effect size 0.30, and the third column to effect size 0.45. All data sets were generated with a 0.5 $H_0$ contamination rate. The first row depicts the posterior distributions of the $H_0$ assignment rate, the second row the individual $H_0$ assignment probabilities (p values from $H_0$ are colored red, those from $H_1$ are colored grey), and the third row displays Q-Q plots that compare the observed p values to the generated posterior predictive p values.
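The individual $H_0$ assignment probabilities shown in the second row of Figure 3 can be estimated as the posterior means of the indicator variables ind[i]; a minimal sketch, assuming "ind" was added to variable.names when calling coda.samples in the driver script above:

    # Posterior H0 assignment probability for each p value
    draws <- as.matrix(samples)               # stack the MCMC chains
    indCols <- grep("^ind\\[", colnames(draws))
    probH0 <- colMeans(draws[, indCols])      # one probability per p value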

References

Antoniak, C. E. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2, 1152–1174.

Blackwell, D., & MacQueen, J. B. (1973). Ferguson distributions via Pólya urn schemes. The Annals of Statistics, 1, 353–355.

Congdon, P. (2006). Bayesian statistical modelling (2nd ed.). Chichester, UK: Wiley.

Lunn, D., Jackson, C., Best, N., Thomas, A., & Spiegelhalter, D. (2012). The BUGS book: A practical introduction to Bayesian analysis. Boca Raton, FL: Chapman & Hall/CRC.

Müller, P., Quintana, F. A., Jara, A., & Hanson, T. (2015). Bayesian nonparametric data analysis. Cham, Switzerland: Springer.

Navarro, D. J., Griffiths, T. L., Steyvers, M., & Lee, M. D. (2006). Modeling individual differences using Dirichlet processes. Journal of Mathematical Psychology, 50, 101–122.

Ohlssen, D. I., Sharples, L. D., & Spiegelhalter, D. J. (2007). Flexible random-effects models using Bayesian semi-parametric models: Applications to institutional comparisons. Statistics in Medicine, 26, 2088–2112.

Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In K. Hornik, F. Leisch, & A. Zeileis (Eds.), Proceedings of the 3rd International Workshop on Distributed Statistical Computing. Vienna, Austria.

Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica, 4, 639–650.

Spiegelhalter, D. J., Thomas, A., Best, N., & Lunn, D. (2003). WinBUGS version 1.4 user manual. Cambridge, UK: Medical Research Council Biostatistics Unit.