Bayesian Phylogenetics


Bayesian Phylogenetics
Paul O. Lewis
Department of Ecology & Evolutionary Biology, University of Connecticut
Woods Hole Molecular Evolution Workshop, July 27, 2006
© 2006 Paul O. Lewis

An Introduction to Bayesian Phylogenetics
I. Bayesian inference in general
II. Markov chain Monte Carlo
III. Bayesian phylogenetics
IV. Prior distributions
V. Bayesian model selection

I. Bayesian inference in general

Joint probabilities

Conditional probabilities

Bayes' rule

Probability of "Dotted"

Bayes' rule (cont.). Pr(D) is the marginal probability of being dotted. To compute it, we marginalize over colors.

Joint probabilities

            B          W
  D      Pr(D,B)    Pr(D,W)
  S      Pr(S,B)    Pr(S,W)

Marginalizing over colors: the marginal probability of being a dotted marble is the row sum Pr(D) = Pr(D,B) + Pr(D,W).

Marginalizing over "dottedness": the marginal probability of being a white marble is the column sum Pr(W) = Pr(D,W) + Pr(S,W).

Bayes' rule (cont.)

Bayes' rule in statistics:

    Pr(θ|D) = Pr(D|θ) Pr(θ) / Σθ Pr(D|θ) Pr(θ)

where Pr(θ|D) is the posterior probability of hypothesis θ, Pr(D|θ) is the likelihood of hypothesis θ, Pr(θ) is the prior probability of hypothesis θ, and the denominator (summing over all hypotheses θ) is the marginal probability of the data.

Simple (albeit silly) paternity example. θ1 and θ2 are assumed to be the only possible fathers; the child has genotype Aa and the mother has genotype aa, so the child must have received allele A from the true father. Note: the data in this case is the child's genotype (Aa).

  Possibilities          θ1      θ2      Row sum
  Genotypes              AA      Aa        ---
  Prior                  1/2     1/2        1
  Likelihood              1      1/2       ---
  Prior × Likelihood     1/2     1/4       3/4
  Posterior              2/3     1/3        1
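The arithmetic in this table can be checked with a few lines of Python (a sketch; the dictionary keys are just labels for the two hypothetical fathers):

```python
# Paternity example: child is Aa, mother is aa, so the true father
# must have contributed allele A. Values mirror the table above.
priors = {"theta1": 0.5, "theta2": 0.5}        # prior Pr(father)
likelihoods = {"theta1": 1.0, "theta2": 0.5}   # Pr(child is Aa | father)

joint = {h: priors[h] * likelihoods[h] for h in priors}
marginal = sum(joint.values())                 # Pr(D) = 3/4
posterior = {h: joint[h] / marginal for h in joint}
print(posterior)  # theta1: 2/3, theta2: 1/3
```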

Bayes' rule in statistics. D refers to the "observables" (i.e. the data). θ refers to one or more "unobservables" (i.e. parameters of the model): a tree model (i.e. tree topology), a substitution model (e.g. JC, F84, GTR, etc.), or a parameter of a substitution model (e.g. a branch length, a base frequency, the transition/transversion rate ratio, etc.).

Bayes' rule: continuous case.

    f(θ|D) = f(D|θ) f(θ) / ∫ f(D|θ) f(θ) dθ

where f(θ|D) is the posterior probability density, f(D|θ) is the likelihood, f(θ) is the prior probability density, and the integral is the marginal probability of the data.

Probabilities and probability densities: the probability density function.

Integration of densities: area under the density curve.

Coin-flipping example. D: 6 heads (out of 10 flips). θ = the true underlying probability of heads on any given flip. If θ = 0.5, the coin is perfectly fair; if θ = 1.0, the coin always comes up heads (e.g. it is a trick coin with heads on both sides). Likelihood: Pr(D|θ) ∝ θ^6 (1-θ)^4.

Density vs. probability. D = data: 6 heads (out of 10 flips). With a uniform prior density, the posterior density is

    f(θ|D) = f(D|θ) f(θ) / ∫ f(D|θ) f(θ) dθ

A posterior probability is obtained by integrating the posterior density over an interval of θ values.

Beta prior gives more flexibility. With a Beta(2,2) prior density, the posterior density is

    f(θ|D) = f(D|θ) f(θ) / ∫ f(D|θ) f(θ) dθ

The posterior probability that θ lies between 0.45 and 0.55 is 0.223.
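The conjugate update behind these two slides can be sketched with the standard library alone: a Beta(2,2) prior combined with 6 heads and 4 tails yields a Beta(8,6) posterior, and interval probabilities follow from the identity relating the regularized incomplete beta function to binomial tail sums. (The 0.45 and 0.55 endpoints come from the slide; the mass printed here is computed exactly and may differ somewhat from the slide's reported figure.)

```python
from math import comb

def beta_cdf(x, a, b):
    """Regularized incomplete beta I_x(a, b) for integer a, b,
    via the identity I_x(a, b) = P(Binomial(a+b-1, x) >= a)."""
    n = a + b - 1
    return sum(comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(a, n + 1))

# Beta(2,2) prior updated with 6 heads, 4 tails -> Beta(8,6) posterior
a_post, b_post = 2 + 6, 2 + 4
post_mean = a_post / (a_post + b_post)
mass = beta_cdf(0.55, a_post, b_post) - beta_cdf(0.45, a_post, b_post)
print(post_mean, mass)
```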

Bayesian Coin Flipper. Windows program; download from: http://hydrodictyon.eeb.uconn.edu/people/plewis/ Excel spreadsheet version also available.

Usually there are many parameters... With parameters θ1, ..., θn, Bayes' rule becomes

    f(θ1,...,θn|D) = f(D|θ1,...,θn) f(θ1,...,θn) / ∫...∫ f(D|θ1,...,θn) f(θ1,...,θn) dθ1...dθn

where the numerator is the likelihood times the prior probability density, the left side is the posterior probability density, and the denominator (a multiple integral) is the marginal probability of the data.

II. Markov chain Monte Carlo

MCMC robot's rules: uphill steps are always accepted; slightly downhill steps are usually accepted; drastic "off the cliff" downhill steps are almost never accepted. With these rules, it is easy to see that the robot tends to stay near the tops of hills.

(Actual) MCMC robot rules. Uphill steps are always accepted because R > 1: currently at 1.0 m, proposed 2.3 m, R = 2.3/1.0 = 2.3. Slightly downhill steps are usually accepted because R is near 1: currently at 6.20 m, proposed 5.58 m, R = 5.58/6.20 = 0.90. Drastic "off the cliff" downhill steps are almost never accepted because R is near 0: currently at 6.20 m, proposed 0.31 m, R = 0.31/6.20 = 0.05. The robot takes a step if it draws a random number (uniform on 0.0 to 1.0) that is less than or equal to R.
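These rules are exactly a one-dimensional Metropolis sampler. A minimal sketch, assuming a single normal "hill" as the unnormalized target (not the landscape pictured on the slide):

```python
import math
import random

def metropolis(density, start, step, n, rng):
    """Metropolis 'robot': propose a uniform step, compute
    R = density(proposed) / density(current), and accept whenever a
    uniform(0,1) draw is <= R, so uphill moves (R > 1) always pass."""
    x, path = start, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        r = density(y) / density(x)
        if rng.random() <= r:
            x = y
        path.append(x)
    return path

# Toy landscape (an assumption for illustration): unnormalized
# normal hill centred at 3.
hill = lambda x: math.exp(-0.5 * (x - 3.0) ** 2)
rng = random.Random(1)
path = metropolis(hill, start=0.0, step=1.0, n=20_000, rng=rng)
kept = path[2_000:]  # discard burn-in
print(sum(kept) / len(kept))  # settles near the hilltop at 3.0
```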

A Thing Of Beauty. When calculating the ratio R of posterior densities, the marginal probability of the data cancels.

Target vs. proposal distributions. The target distribution is the posterior distribution of interest. The proposal distribution is used to decide where to go next; you have much flexibility here, and the choice affects only the efficiency of the MCMC algorithm.

MCRobot. Windows program; download from: http://hydrodictyon.eeb.uconn.edu/people/plewis/

The Hastings ratio. If the robot has a greater tendency to propose steps to the right than to the left when choosing its next step, then the acceptance ratio must counteract this tendency. Suppose the probability of proposing a step to the right is 2/3 (making the probability of going left 1/3). In this case, the Hastings ratio would decrease the chance of accepting moves to the right by half, and double the chance of accepting moves to the left.

Metropolis-coupled Markov chain Monte Carlo (MCMCMC, or MC3). MC3 involves running several chains simultaneously. The "cold" chain is the one that counts; the rest are "heated" chains. A chain is heated by raising its densities to a power less than 1.0 (values closer to 0.0 are warmer). Geyer, C. J. 1991. Markov chain Monte Carlo maximum likelihood for dependent data. Pages 156-163 in Computing Science and Statistics (E. Keramidas, ed.).

Heated chains act as scouts for the cold chain.

III. Bayesian phylogenetics

So, what's all this got to do with phylogenetics? Imagine drawing tree topologies randomly from a bin in which the number of copies of any given topology is proportional to the (marginal) posterior probability of that topology. Approximating the posterior of any particular attribute of tree topologies (e.g. the existence of the group AC in this case) is then simply a matter of counting.
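Counting clade frequencies in a tree sample is a one-liner; the topology strings and counts below are hypothetical:

```python
# Approximate clade support by counting, as described on the slide:
# the posterior probability of a clade is the fraction of sampled
# trees containing it. Hypothetical sample of 100 topology strings.
sampled_trees = (["((A,C),B)"] * 85 + ["((A,B),C)"] * 10 + ["((B,C),A)"] * 5)

ac_support = sum("(A,C)" in t for t in sampled_trees) / len(sampled_trees)
print(ac_support)  # 0.85
```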

Moving through treespace: the Larget-Simon move. Larget, B., and D. L. Simon. 1999. Markov chain Monte Carlo algorithms for the Bayesian analysis of phylogenetic trees. Molecular Biology and Evolution 16:750-759. See also: Holder et al. 2005. Syst. Biol. 54:961-965.

Moving through parameter space. Using κ (the ratio of the transition rate to the transversion rate) as an example of a model parameter: the proposal distribution is uniform from κ-δ to κ+δ. The step size of the MCMC robot is defined by δ; a larger δ means that the robot will attempt to make larger jumps on average.
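A sketch of this sliding-window proposal; the reflection at zero (to keep κ positive) is a common convention, not something the slide specifies:

```python
import random

def propose_kappa(kappa, delta, rng):
    """Sliding-window proposal for a positive model parameter such as
    kappa: draw uniformly from (kappa - delta, kappa + delta),
    reflecting at zero so the proposed value stays positive."""
    new = kappa + rng.uniform(-delta, delta)
    if new < 0:
        new = -new  # reflect back into the valid range
    return new

rng = random.Random(0)
draws = [propose_kappa(4.0, 0.5, rng) for _ in range(10_000)]
print(min(draws), max(draws))  # all proposals lie within (3.5, 4.5)
```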

Putting it all together. Start with a random tree and arbitrary initial values for branch lengths and model parameters. Each generation consists of one of these (chosen at random): propose a new tree (e.g. Larget-Simon move) and either accept or reject the move; or propose (and either accept or reject) a new model parameter value. Every k generations, save the tree topology, branch lengths and all model parameters (i.e. sample the chain). After n generations, summarize the sample using histograms, means, credible intervals, etc.

Posteriors of model parameters. Histogram created from a sample of 1000 κ values: mean = 3.234; 95% credible interval = (2.907, 3.604). From: Lewis, L., and Flechtner, V. 2002. Taxon 51:443-451.

IV. Prior distributions

Common prior distributions. For topologies: a discrete Uniform distribution. For proportions: a Beta(a,b) distribution (flat when a=b=1; peaked at 0.5 when a=b and both are greater than 1). For base frequencies: a Dirichlet(a,b,c,d) distribution (flat when a=b=c=d=1; all base frequencies close to 0.25 if a=b=c=d and large, e.g. 300). For GTR model relative rates: a Dirichlet(a,b,c,d,e,f) distribution.
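Flat versus informative Dirichlet priors can be explored with standard-library draws (a Dirichlet sample is a set of independent Gamma draws normalized to sum to 1):

```python
import random

def dirichlet(alphas, rng):
    """Draw from a Dirichlet by normalizing independent Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

rng = random.Random(42)

# Flat prior on base frequencies vs. an informative one, as on the slide
flat = dirichlet([1, 1, 1, 1], rng)
informative = dirichlet([300, 300, 300, 300], rng)
print(flat)          # can land anywhere on the simplex
print(informative)   # each frequency close to 0.25
```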

4-parameter Dirichlet(a,b,c,d). Flat prior: a = b = c = d = 1. Informative prior: a = b = c = d = 300. (Thanks to Mark Holder for pointing out to me that a tetrahedron could be used for plotting a 4-dimensional Dirichlet.)

Common priors (cont.). For other model parameters and branch lengths: a Gamma(a,b) distribution. Exponential(λ) equals Gamma(1, 1/λ). The mean of Gamma(a,b) is ab; e.g. the mean of an Exponential(10) distribution is 0.1. The variance of Gamma(a,b) is ab²; e.g. the variance of an Exponential(10) distribution is 0.01. Note: in many papers the Gamma distribution is defined such that the second (rate) parameter is the inverse of the scale b used in this slide! In that case the mean and variance would be a/b and a/b², respectively.
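The shape/scale convention above can be checked by simulation; Python's `random.gammavariate(alpha, beta)` takes shape and scale, so Gamma(1, 0.1) is an Exponential(10) draw:

```python
import random
import statistics

# Check the slide numerically: with shape a and scale b, Gamma(a,b)
# has mean a*b and variance a*b^2, so Exponential(10) = Gamma(1, 0.1)
# should have mean 0.1 and variance 0.01.
rng = random.Random(3)
a, b = 1.0, 0.1
draws = [rng.gammavariate(a, b) for _ in range(200_000)]
print(statistics.mean(draws), statistics.variance(draws))  # ~0.1, ~0.01
```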

Priors for model parameters with no upper bound: Exponential(2) = Gamma(1, 1/2); Exponential(0.1) = Gamma(1, 10); Gamma(2,1); Uniform(0,2). See chapter 18 in Felsenstein, J. 2004. Inferring Phylogenies. Sinauer, before using.

Learning about priors. Suppose you want to assume an Exponential distribution with mean 0.1 for the shape parameter of the discrete gamma distribution of among-site rate heterogeneity. You use the command help prset in MrBayes (version 3.1.1) to find out how to do this, and this is what MrBayes says:

  Shapepr -- This parameter specifies the prior for the gamma shape
             parameter for among-site rate variation. The options are:
             prset shapepr = uniform(<number>,<number>)
             prset shapepr = exponential(<number>)
             prset shapepr = fixed(<number>)

You type prset shapepr=exponential(10.0); but is the mean of the prior going to be 10 or 0.1? There is a way to find out...

Running on empty. Running the analysis with no data makes the posterior equal the prior, so sampling the chain reveals the prior actually being used. Here the prior actually turns out to be Exponential(10), truncated below 0.05! Note: the truncation does not occur in later versions (e.g. 3.1.2) of MrBayes.

  #NEXUS
  begin data;
    dimensions ntax=4 nchar=1;
    format datatype=dna missing=?;
    matrix
      taxon1 ?
      taxon2 ?
      taxon3 ?
      taxon4 ?
    ;
  end;
  begin mrbayes;
    set autoclose=yes;
    lset rates=gamma;
    prset shapepr=exponential(10.0);
    mcmcp nruns=1 nchains=1 printfreq=1000;
    mcmc ngen=10000000 samplefreq=1000;
  end;

You can use the program Tracer to show the estimated density: http://evolve.zoo.ox.ac.uk/software.html?id=tracer

V. Bayesian model selection

The choice of prior distributions can potentially turn a good model bad! LRT, AIC and BIC all say this is a great model because it is able to attain such a high maximum likelihood score, but the prior never allows the parameter out of this box, so in actuality the model performs very poorly.

Marginal probabilities of models. The marginal probability of the data (the denominator in Bayes' rule) is a weighted average of the likelihood, where the weights are provided by the prior distribution. Often left out is the fact that we are also conditioning on M, the model used. Pr(D|M1) is comparable to Pr(D|M2), and thus the marginal probability of the data can be used to compare the average fit of different models as long as the data D is the same.

Bayes Factor: 1-param. model (average likelihood).

Bayes Factor: 2-param. model (average likelihood).

The Bayes Factor is the ratio of marginal model likelihoods. 1-parameter model M0: (1/2)L0. 2-parameter model M1: (1/4)L1. The Bayes factor favors M0 unless L1 is at least twice as large as L0. All other things being equal, more complex models are penalized for their extra dimensions. Recent work on Bayes factors with respect to phylogenetics: Huelsenbeck, Larget & Alfaro. 2004. MBE 21:1123-1133. Lartillot & Philippe. 2006. Syst. Biol. 55(2):195-207.
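A concrete version of marginal-likelihood averaging, using assumed coin-flip data rather than the slide's example: a fixed fair-coin model M0 against a one-parameter model M1 with a uniform prior on θ. The simpler model wins here because M1 spreads its prior over many poorly fitting values of θ:

```python
from math import comb

heads, flips = 6, 10  # assumed data for illustration

# M0: fair coin, no free parameters; its "average" likelihood is
# just the likelihood at theta = 0.5.
m0 = comb(flips, heads) * 0.5 ** flips

# M1: theta unknown with a uniform prior; the marginal likelihood is
# the prior-weighted average of the likelihood (midpoint integration).
n_grid = 100_000
m1 = sum(
    comb(flips, heads) * t**heads * (1 - t) ** (flips - heads)
    for t in ((i + 0.5) / n_grid for i in range(n_grid))
) / n_grid

print(m0, m1, m0 / m1)  # Bayes factor ~2.3 in favor of the simpler model
```

The exact value of m1 is 1/11 (a Beta integral), so the grid average can be checked against it.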

Marginal likelihood of a model. Sequence length = 1000 sites; true branch length = 0.15; true kappa = 4.0. K80 model (the entire 2-d space) vs. JC69 model (just this 1-d line): K80 wins.

Marginal likelihood of a model. Sequence length = 1000 sites; true branch length = 0.15; true kappa = 1.0. K80 model (the entire 2-d space) vs. JC69 model (just this 1-d line): JC69 wins.

Bayesian Information Criterion (BIC). Example: 6 heads/10 flips, Beta(2,2) prior. The blue curve is the unnormalized posterior; the area under this curve equals the marginal probability of the data (the desired quantity). The pink curve is a normal distribution scaled to match the blue curve as closely as possible; the area under the pink curve is easy to calculate and is a good approximation to the desired quantity (smaller by about 0.5%). BIC assumes the prior is a normal distribution with variance equivalent to the amount of information in a single observation.
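The normal approximation on this slide is essentially a Laplace approximation. A sketch for the same 6 heads/10 flips, Beta(2,2) example (the exact scaling used to draw the slide's pink curve is not reproduced here, so the percent error differs):

```python
import math

# Unnormalized posterior for 6 heads / 10 flips with a Beta(2,2) prior:
# g(t) = C(10,6) * t^6 (1-t)^4 * t(1-t)/B(2,2) = 1260 * t^7 * (1-t)^5
c = 1260.0
g = lambda t: c * t**7 * (1 - t) ** 5

# Exact area under g (the marginal probability of the data): 1260 * B(8,6)
exact = c * math.exp(math.lgamma(8) + math.lgamma(6) - math.lgamma(14))

# Laplace approximation: fit a normal at the mode of log g
mode = 7 / 12
curvature = 7 / mode**2 + 5 / (1 - mode) ** 2   # -(d2/dt2) log g at the mode
laplace = g(mode) * math.sqrt(2 * math.pi / curvature)

print(exact, laplace)  # the two areas agree to within a few percent
```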

Goodness-of-fit ain't everything. A quintic regression, y = -1.5972x^5 + 23.167x^4 - 126.18x^3 + 319.17x^2 - 369.22x + 155.67, fits the plotted points better than the simple linear regression y = 0.6611x, yet the linear model describes the underlying trend far better. (Thanks to Dave Swofford for introducing me to this excellent example.)

Posterior predictive simulation. Parameter values, tree topologies, and branch lengths sampled during MCMC are used to simulate posterior predictive data sets.

Minimum posterior predictive loss approach. Perform MCMC on the original dataset y. During MCMC, generate posterior predictive datasets ỹ. Find an "average" dataset a that is as close as possible to both y and the ỹ. Gm measures the distance between a and y; Pm measures the expected distance between a and ỹ. The goal is to minimize the overall measure Dm = Gm + Pm. Gelfand, A. E., and S. Ghosh. 1998. Model choice: a minimum posterior predictive loss approach. Biometrika 85:1-11.

More than one way to be bad: poor goodness-of-fit (Gm large, Pm small) versus high predictive variance (Gm small, Pm large).

Regression example revisited. Quintic model: Gm = 1.2, Pm = 91.2, Dm = 92.4. Linear model: Gm = 5.3, Pm = 24.5, Dm = 29.8. The linear model wins despite its poorer fit.

Models differ only in prior specification (smaller Dm is better). 100 four-taxon data sets were simulated under JC69 (branch length distribution exponential with mean 0.02) and analyzed with the 9 different branch-length prior distributions shown: exponential priors with means 0.00002, 0.0002, 0.002, 0.02, 0.2, 2, 20, and 200, plus a hyperprior. Dm, Pm and Gm were computed for each. Note: none of the standard model testing approaches (AIC, LRT, BIC) work here because these models differ only in their prior specification.

The End. Many thanks to NSF (CIPRES project) for my current funding, and to UConn and the National Evolutionary Synthesis Center (NESCENT, Durham, NC) for funding my sabbatical this year.

The following slides were not shown in the lecture, but they are relevant to the content and are included to provide a more complete record of the main points.

Markov chain Monte Carlo (MCMC). The histogram is an approximation to the posterior distribution obtained using a Markov chain Monte Carlo simulation. For coin flipping, we can compute posterior probabilities exactly (we can even do the necessary integration in Excel!). For complex problems, we must settle for a good approximation.

Pure random walk. Proposal scheme: random direction; gamma-distributed step length (mean 45 pixels, s.d. 40 pixels); reflection at edges. Target distribution: equal mixture of 3 bivariate normal hills (inner contours: 50%; outer contours: 95%). In this case the robot is accepting every step; 5000 steps shown.

Burn-in. The robot is now following the rules and thus quickly finds one of the three hills (100 steps taken from the starting point). Note that the first few steps are not at all representative of the distribution.

Target distribution approximation. How good is the MCMC approximation after 5000 steps? 51.2% of points are inside the inner contours (cf. 50% actual); 93.6% of points are inside the outer contours (cf. 95% actual). The approximation gets better the longer the chain is allowed to run.

Just how long is a long run? What would you conclude about the target distribution had you stopped the robot at this point (1000 steps taken)? The way to avoid this mistake is to perform several runs, each one beginning from a different randomly chosen starting point. Results differ among runs? Probably none of them was run long enough!

Cold vs. heated landscapes. Cold landscape: note peaks separated by deep valleys. Heated landscape: note shallow (easy to cross) valleys.

Trace plots. We started off at a very low point; burn-in is over right about here. A white-noise appearance is a good sign.

Slow mixing. The chain is spending long periods of time stuck in one place. This indicates the step size is too large: most proposed steps would take the robot off the cliff.