Biol 206/306 Advanced Biostatistics Lab 12 Bayesian Inference


By Philip J. Bergmann

0. Laboratory Objectives
1. Learn what Bayes Theorem and Bayesian Inference are
2. Reinforce the properties of Bayesian Inference using a simple example
3. Learn what the Markov Chain Monte Carlo approach entails
4. Learn to obtain a posterior probability distribution from MCMC
5. Learn to test an hypothesis using MCMC output

1. Bayes Theorem and Bayesian Inference

Bayesian Inference (BI) is a statistical approach founded on Bayes Theorem. Bayes Theorem is a probabilistic formula that allows one to calculate a posterior probability distribution, given a dataset, a model or hypothesis, and a prior probability distribution. Bayes Theorem can be expressed as:

    P(H_A | Data) = P(Data | H_A) x P(H_A) / P(Data)

In Bayes Theorem, the term P(Data | H_A) is actually a familiar quantity: it is the likelihood of your data given a model, something you have obtained from analyses you have done in previous labs. The term P(Data) is a scaling term that is important to making the mathematics work out, and will not be considered further today. The term P(H_A) is the prior probability distribution, often simply referred to as the prior. This is a distribution that characterizes our understanding of the phenomenon being modeled before we look at our data. The prior can be obtained from previous studies published in the literature, a pilot study, or a guess, or it can be uninformative. Each parameter in a model will have a prior. For example, if a parameter of interest can range from zero to one, and we have a pretty good idea that the parameter's value is >0.5, we might use a prior probability distribution that gives high probabilities to values between 0.5 and 1, but low probabilities below 0.5 (for example, a normal distribution with mean 0.8 and variance 0.2, expressed as N(0.8, 0.2)). If we have no idea what parameter value is most likely, we might use a prior that is uniform between zero and one.
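The arithmetic behind Bayes Theorem can be seen with a tiny numeric sketch. The lab's analyses use R and Excel; this illustration is in Python, and all the numbers in it are made up for demonstration only (two hypothetical hypotheses with equal priors):

```python
# Minimal numeric illustration of Bayes Theorem with made-up numbers:
# two competing hypotheses, a prior for each, and the likelihood
# P(Data | H) of the observed data under each hypothesis.

def posterior(prior, likelihood):
    """Return normalized posterior probabilities for each hypothesis."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    p_data = sum(unnorm.values())   # P(Data), the scaling term
    return {h: u / p_data for h, u in unnorm.items()}

prior = {"H_A": 0.5, "H_B": 0.5}        # uninformative (equal) prior
likelihood = {"H_A": 0.8, "H_B": 0.2}   # hypothetical P(Data | H) values

post = posterior(prior, likelihood)
print(post)  # posterior probabilities summing to 1
```

Note that with an equal prior the posterior is driven entirely by the likelihoods; an informative prior would pull the posterior toward whichever hypothesis it favors.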
Finally, the term P(H_A | Data) is the posterior probability distribution, also simply called the posterior. It takes both our data (the likelihood of our data, given the model) and the prior into consideration. BI is attractive because it offers two main advantages over frequentist approaches. First, the posterior represents the probability of an hypothesis/model given the data, and this is what one ultimately would like to estimate, not the probability of the data given the hypothesis. Second, BI allows for the incorporation of prior knowledge into a study. Since science works by building on previous work, a formal, quantitative framework for doing this is attractive. However, when using BI, one must also be careful to interpret the results correctly. Specifically, the posterior represents the probability of our belief in a certain hypothesis, not the relative frequency of events. In addition, parameters (including hypotheses, likelihoods, priors, and posteriors) are viewed as random variables sampled from a distribution, not as fixed values that we are trying to estimate.

Biol 203/306 2012 Page 1

2. A Simple Example Using Bayesian Inference

We will use a simple example to get more comfortable with Bayesian Inference. There is no dataset for this example, but it will show how BI can be used. The general format of this exercise is modified from Shahbaba (2011. Biostatistics with R. Springer: New York). In this example, you are studying how a trematode parasite influences reproduction in female stickleback fish. Since you discovered this new parasite of stickleback, you know nothing of how it influences reproduction of the fish. You collect some very basic data in the lab, counting how many females successfully lay eggs when parasitized and when not. Your results are as follows:

                              Parasitized    Not Parasitized
# of Females that Lay Eggs        16               20
Total # of Females                20               20

You are interested in using the data in the table above to find the posterior probability distribution of a female reproducing if she is parasitized. From the information given, what would you choose the prior probability distribution to be? Simply describing its shape is okay.

Population proportions (the proportion of individuals doing something) can be modeled as a beta distribution. This is convenient because it also makes the calculation of a posterior from some data and a prior computationally easy (note that most models and applications of BI actually require extremely intensive math). The beta distribution ranges from 0 to 1, and has two parameters that define its shape, called shape1 and shape2.
The mean of a beta distribution can be calculated as:

    mean = shape1 / (shape1 + shape2)

You can also plot the shape of this distribution using the beta density function, dbeta, in R:

> plot(seq(0,1,0.01), dbeta(seq(0,1,0.01), shape1, shape2))

In the dbeta function, the first argument is a vector that lists the values at which to calculate the beta density, and the second and third arguments are simply the two shape parameters. Try this for the following values of the parameters. Describe each distribution.

Shape1    Shape2    Description
2         8
1         1
4         4
8         2

Which of these options would you use for the prior probability distribution in the current analysis?

As mentioned above, the posterior can be easily calculated from the prior if you are happy modeling each with a beta distribution. You can calculate the posterior population proportion (the proportion of females reproducing) as follows:

    Posterior = Beta(alpha + y, beta + n - y)

In this equation, Beta(alpha, beta) is the prior probability distribution, y is the number of individuals undergoing the event of interest (i.e., laying eggs), and n is the total number of individuals studied. Calculate the posterior beta distribution. What are its two shape parameters?

For the sequence of beta values given above (in your plots), save the beta density for your prior as object prior1, and for your posterior as post1. Assemble the sequence of beta values, your prior densities, and your posterior densities into a data frame. Plot the prior and posterior on the same plot against beta value and provide the plot here:

What is the mean of your posterior? How does it compare to just calculating the proportion of females that laid eggs in the study? Why are the two values different?

As you may have noticed, using 20 individuals may not result in the most accurate estimates of how the trematode affects female reproduction. You repeat your study with more female sticklebacks. You also do the study in two lakes. Lake A is the same as was used in your pilot study; Lake B is new. This time you use 80 females that you infect and 80 as controls from each lake. Again, all the controls from both lakes lay eggs. Of the 80 infected females, 67 lay eggs in Lake A, and only 36 lay eggs in Lake B. Repeat the analysis for each lake, but use the posterior from your pilot study as the prior for both of these new analyses. Calculate the posteriors, expressing each as Beta(shape1, shape2):
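The conjugate update above is simple enough to check by hand. As a sketch (in Python rather than the lab's R), here is the pilot-study calculation assuming a uniform Beta(1, 1) prior; the prior you actually choose from the table may differ, which will change the numbers:

```python
def beta_mean(a, b):
    # mean of a Beta(a, b) distribution: a / (a + b)
    return a / (a + b)

def beta_update(a, b, y, n):
    # conjugate update: a Beta(a, b) prior plus y successes out of n
    # trials gives a Beta(a + y, b + n - y) posterior
    return a + y, b + n - y

# Pilot data: 16 of 20 parasitized females laid eggs.
# Assuming a uniform Beta(1, 1) prior (an assumption for illustration):
a_post, b_post = beta_update(1, 1, y=16, n=20)
print(a_post, b_post)                        # 17 5
print(round(beta_mean(a_post, b_post), 3))   # 0.773
```

Notice that the posterior mean (about 0.77 here) is not quite the raw proportion 16/20 = 0.80; the prior pulls the estimate slightly toward its own mean, which is the point of the "why are the two values different?" question.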

Also save the density functions for each of your posteriors along your sequence of beta values, as objects posta and postb. Plot your new prior and both posteriors against the beta values, and paste the plot below.

3. The Markov Chain Monte Carlo Approach to BI

As mentioned above, for virtually all applications of BI, the mathematics are very daunting. In fact, it was not until the 1990s that scientists discovered how to get around the mathematical hurdles and estimate a posterior probability distribution. Shortly after, BI began to be applied in a variety of fields. In all of these applications, a Markov Chain Monte Carlo (MCMC) approach is used. An MCMC run starts at a random point in parameter space and follows a directed random walk through parameter space until the likelihood is maximized. Once the MCMC arrives near the maximal likelihood, it wanders in its vicinity, visiting various relatively likely values of the parameters. It visits each parameter value in direct proportion to that value's posterior probability, and so a large sample of these visits produces a posterior probability distribution for each parameter and for the likelihood. This is explained and illustrated in lecture. MCMC analyses often run for millions or billions of iterations, and so can take hours, days, or even months to complete; this is certainly a limitation of the approach. (In the estimation of phylogenetic trees, where any analysis can take a long time, BI actually provides a computational savings because it simultaneously estimates clade support values.) MCMC analyses can be done in R, but due to time limitations, we will not be doing the actual analyses. Instead, we will examine the output of some MCMC analyses that have already been done and test hypotheses of trait evolution. Each of the MCMC analyses that you will use took approximately eight hours to run on a new laptop.
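The random walk described above can be sketched in a few lines. This is a minimal Metropolis sampler in Python (not the software used for the lab's analyses), targeting a toy one-parameter "posterior" so its behavior is easy to check; real analyses like the ones you will examine use far more elaborate models:

```python
import math
import random

def metropolis(log_post, x0, n_iter, step=0.5, seed=1):
    """Minimal Metropolis sampler: a random walk that visits each
    parameter value in proportion to its posterior probability."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        x_new = x + random.gauss(0.0, step)   # propose a nearby value
        lp_new = log_post(x_new)
        # accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy target: a standard normal "posterior" (log density up to a constant).
# Start far from the high-probability region, at x = 5.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, n_iter=20000)
post = chain[2000:]   # discard the early burn-in portion of the chain
print(round(sum(post) / len(post), 2))   # mean of retained samples, near 0
```

The early samples drift from the starting point toward the high-probability region; discarding them is exactly the burn-in step discussed in the next part of the lab.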
The dataset that was analyzed consists of data for eighteen species of phrynosomatine lizards and a phylogenetic tree (Bergmann et al. 2009. Directional evolution of stockiness coevolves with ecology and locomotion in lizards. Evolution 63: 215-227). Three variables were analyzed: PC2, a measure of stockiness of the body; maximal relative horn length, used as a measure of the degree to which a species invests in anti-predator defenses; and relative velocity, the maximal speed a lizard can run in body lengths per second. The goal was to test whether stockier species run less quickly, and whether slower lizards have larger horns (needing to be better defended). We will test for trait co-evolution by using one MCMC analysis that uses a Brownian motion (BM) model of trait evolution for each trait, and one MCMC analysis that uses a BM model plus a parameter for the covariance between the two traits. By comparing these models, we test whether two traits have evolved in a correlated manner. What alternative approach could you use to test these hypotheses?

Download the Excel spreadsheet for this lab and open it. The spreadsheet contains four worksheets, one for each MCMC analysis. Two contain analyses for PC2 and relative velocity, and two contain analyses for relative horn length and relative velocity. One of each pair assumes no correlation between the traits, and one provides an extra parameter for their covariance. Looking at the output, you will see that the first column contains the iteration number of each sampled iteration. The Tree column indicates which phylogenetic tree was used for each iteration. Since we had only one phylogeny, this is always the same. If you had multiple phylogenies that spanned the range of supported phylogenies, you could incorporate phylogenetic uncertainty into the analysis; people sometimes use a posterior sample of phylogenetic trees to do this. The next column is the log-likelihood of the model with the parameter values for that iteration. The HMean column is the harmonic mean of the log-likelihoods for the sampled parameter values up to the current iteration. The Alpha parameter for each trait is the ancestral reconstruction for the base of the tree. The Variance for each trait is the sigma-squared parameter, the rate of evolution. Finally, the Covar column is the covariance between the two traits. If it is zero in all iterations, then the model does not include a covariance parameter and the traits are assumed to follow independent BM models of evolution. How many iterations was each MCMC analysis run for? The large number of iterations makes it necessary to sample from the MCMC run only once in a while, because each sampled iteration takes up one line in the spreadsheet, and sampling every iteration would result in an excessively long spreadsheet. How frequently were the MCMC runs at your disposal sampled? As an MCMC run moves from its random starting position to the likelihood maximum, it is not yet sampling from the posterior probability distribution. That happens only once it has arrived at the posterior.
The part of the MCMC run where it is still traveling toward the optimal region must be discarded from consideration, and is called burn-in. Each parameter value, and the likelihood, will undergo a burn-in period, and the longest burn-in should be used to determine what to discard. Only when the MCMC is stationary can you study its posterior distributions. You can estimate the burn-in period by simply plotting the log-likelihood or a parameter value against iteration number. The harmonic mean of the log-likelihood is provided because it gives the most rigorous estimate of the burn-in period, and it will be used later to compare models. In Excel, plot the log-likelihood, the harmonic mean of the log-likelihood, and each of the parameter values against iteration number. What is the approximate length of the burn-in period in each case? Fill in your findings in the table.

Measure       PC2-rVel no correl   PC2-rVel correl   Horn-rVel no correl   Horn-rVel correl
log(L)
H. Mean (L)
Alpha 1
Alpha 2
Variance 1
Variance 2
Covariance    N/A                                    N/A
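The HMean column described above is a running harmonic mean of the likelihoods. Because likelihoods are tiny numbers, it is computed in log space. A sketch of that calculation (in Python; the log-likelihood values are hypothetical, chosen only to show the mechanics):

```python
import math

def log_harmonic_mean(log_likelihoods):
    """Log of the harmonic mean of likelihoods, computed in log space
    for numerical stability:
        log HM = -( logsumexp(-logL_i) - log n )
    """
    n = len(log_likelihoods)
    neg = [-ll for ll in log_likelihoods]
    m = max(neg)  # shift by the max to avoid overflow in exp()
    lse = m + math.log(sum(math.exp(v - m) for v in neg))
    return -(lse - math.log(n))

# Hypothetical sampled log-likelihoods (post burn-in):
lnL = [-102.1, -101.7, -103.4, -101.9, -102.6]
print(round(log_harmonic_mean(lnL), 3))
```

The harmonic mean is dominated by the smallest likelihoods in the sample, which is one reason it stabilizes only after burn-in has been passed.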

For one of the MCMC analyses, provide a plot of the log-likelihood and its harmonic mean against iteration number. Identify the burn-in period that you would use for that analysis. Using Excel to make your graph is fine; no need to use R.

Now that burn-in is identified for each analysis, you can calculate the mean and standard deviation for the log-likelihood and the parameters. For the mean log-likelihood, you can simply take the harmonic mean from the last iteration of the analysis; calculate its standard deviation in the usual way from the likelihood column. Delete the rows in each dataset that constitute the burn-in period, and then calculate the mean (=AVERAGE()) and standard deviation (=STDEV()) for each quantity. The fill-down capability in Excel will also be useful.

              PC2-rVel no correl   PC2-rVel correl   Horn-rVel no correl   Horn-rVel correl
Measure       Mean      SD         Mean     SD       Mean      SD          Mean      SD
Likelihood
Alpha 1
Alpha 2
Variance 1
Variance 2
Covariance    N/A       N/A                          N/A       N/A

The next step in the analysis is to compare models. As in the labs using a model-based approach, you have fitted two models to each pair of traits with the intent of finding the better model. What are the two models that you fit to each pair of traits?

You could calculate AIC values to compare models, but a Bayesian Inference approach suggests that you should use a Bayesian approach to comparing models as well. A Bayesian Information Criterion (BIC) exists and can be used for this purpose. Another approach is to use Bayes Factors (BF). BFs compare two models, rely on a quantity called the marginal likelihood, and can be interpreted in a similar way to an AIC value (BF < 2 indicates equivocal support for one model over another; BF > 10 indicates very strong support for one model over another). The marginal likelihood is estimated with the harmonic mean of the log-likelihood. Bayes Factors can be calculated as follows:

    BF = 2 x (log L(model 1) - log L(model 2))
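The burn-in removal and summary step above is done in Excel for the lab; as a sketch of the same logic in Python (the sampled values and the burn-in cutoff are invented for illustration):

```python
import statistics

# Hypothetical sampled values of one parameter (say, Variance 1), one
# entry per sampled iteration. The first few values are still drifting
# from the random starting point, i.e., they are burn-in.
samples = [9.0, 5.2, 2.1, 1.02, 0.97, 1.05, 0.99, 1.01, 1.03, 0.98]
burn_in_rows = 3              # chosen by eye from a trace plot

kept = samples[burn_in_rows:]            # delete the burn-in rows
mean = statistics.mean(kept)             # Excel: =AVERAGE(range)
sd = statistics.stdev(kept)              # Excel: =STDEV(range), sample SD
print(round(mean, 3), round(sd, 3))      # 1.007 0.029
```

Including the burn-in rows would inflate both the mean and the standard deviation, which is why they must be deleted first.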

Since the output already provides log-likelihoods, you can calculate the BF simply by taking the difference in the harmonic means of the log-likelihood for the two models and multiplying by two. Do this for both pairs of traits. For each pair of traits, provide the BF, and give your conclusions about which model is preferred and how strongly it is supported.

What are your biological conclusions about the evolution of the traits? For the best model for each pair of traits, which trait is evolving faster (in the tables above, the traits are listed in the order they appear in the output)? Explain your answer.

If the correlated evolution model is better supported for either pair of traits, calculate the evolutionary correlation between the traits so that it is more easily interpretable. A correlation can be calculated from a covariance and the standard deviations of the two traits as:

    r = cov(1,2) / (s1 x s2)
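Both final calculations are one-liners. A sketch (in Python; every number below is hypothetical, standing in for values you will read off your own spreadsheets):

```python
def bayes_factor(lnL_model1, lnL_model2):
    # BF = 2 x (lnL1 - lnL2), where each lnL is the harmonic-mean
    # estimate of the marginal log-likelihood from the model's run
    return 2.0 * (lnL_model1 - lnL_model2)

def correlation(cov, sd1, sd2):
    # evolutionary correlation from a covariance and two standard deviations
    return cov / (sd1 * sd2)

# Hypothetical harmonic-mean log-likelihoods for the two models:
bf = bayes_factor(-100.2, -106.5)
print(round(bf, 1))   # 12.6 -> BF > 10: very strong support for model 1

# Hypothetical covariance and trait standard deviations:
print(round(correlation(0.03, 0.2, 0.25), 2))   # 0.6
```

Note that the sign of the BF depends on which model you call model 1; a large positive value here favors model 1, and the correlation inherits its sign from the covariance.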