Problems with Penalised Maximum Likelihood and Jeffreys Priors to Account for Separation in Large Datasets with Rare Events


Liam F. McGrath

September 15, 2015

Abstract

When separation is a problem in binary dependent variable models, many researchers use Firth's penalised maximum likelihood in order to obtain finite estimates (Firth, 1993; Zorn, 2005). In this letter I show that this approach can lead to inferences in the opposite direction to that of the separation when the number of observations is sufficiently large and both the dependent and independent variables are rare events. As large datasets with rare events often occur in political science, such as dyadic data measuring war and treaty formation, a lack of awareness of this problem can lead to inferential issues. Simulated data and an empirical illustration show that independent prior distributions centred at zero, for example the Cauchy prior suggested by Gelman et al. (2008), do not suffer from the problems associated with Firth's penalised maximum likelihood and are thus preferred for point estimates in all cases.

Thanks to Carlisle Rainey, Jonah Gabry, and Ben Goodrich for comments and suggestions. Postdoctoral Researcher at the Centre for Comparative and International Studies (CIS) and Institute for Environmental Decisions (IED), ETH Zürich. Contact: liam.mcgrath@ir.gess.ethz.ch

1 Introduction

The existence of separation in binary choice models, where an independent variable perfectly predicts a binary dependent variable, is a recurring problem within political science. The default response to this problem suggested by Zorn (2005) is the use of penalised maximum likelihood, developed by Firth (1993) and equivalent to the use of a Jeffreys prior (Jeffreys, 1946). In this letter I demonstrate that the reliability of this approach to separation depends upon features of the data analysed. The penalised maximum likelihood approach leads to point estimates in the opposite direction to that of the separation when the number of observations is sufficiently large and the binary dependent and independent variables responsible for separation are rare events. Dyadic data on countries over time is an example of such a large dataset, with a number of observations that is usually in the tens to hundreds of thousands. Furthermore, in these cases researchers often focus on rare events both as dependent and independent variables, such as interstate war, possession of nuclear weapons, and the signing of preferential trade agreements. The problem arises because the Jeffreys prior implies non-independent prior densities for the parameters, which can place high prior density on parameter values opposite to the direction of the separation in the case of rare events.

To demonstrate these problems I use simulated data and analyse the replication of research on the effect of nuclear weapons upon war by Bell and Miller (2013). In contrast to the Jeffreys prior/Firth's penalised maximum likelihood, which can often lead to inferences in the opposite direction to that of the separation, independent priors such as the Cauchy prior suggested by Gelman et al. (2008) ensure that the point estimate remains in the correct direction for all specifications. Researchers should therefore be mindful of the potential of Firth's method to lead to incorrect inferences, and if a default estimator is required they should instead use the Cauchy prior approach of Gelman et al.

2 The Problem with Penalised Maximum Likelihood and the Jeffreys Prior

Suppose that we are faced with negative quasi-complete separation. That is, there exist no observations in the 2 × 2 table displayed in table 1 such that x = 1 and y = 1.

Table 1: The Observed Data

                x = 0        x = 1
    y = 0       n_1          n_2
    y = 1       n_3          0

In this single covariate case, Firth's Jeffreys prior approach is equivalent to adding 0.5 to each cell in table 1 (Zorn, 2005), resulting in table 2.

Table 2: The Result of Using the Jeffreys Prior

                x = 0          x = 1
    y = 0       n_1 + 0.5      n_2 + 0.5
    y = 1       n_3 + 0.5      0.5

This solution becomes problematic when it does not maintain the feature that there are relatively more observations with y = 1 when x = 0 than when x = 1. For inferences about the effect of x on y to have the correct sign, it must hold that:

    (n_3 + 0.5) / (n_1 + n_3 + 1)  >  0.5 / (n_2 + 1)        (1)

From this we can see that rare events in large datasets can violate this inequality. When y is a rare event, n_3 is small. If x is also a rare event, then n_1 will be very large whilst n_2 remains small. This matters because, if n_2 is small, the addition of 0.5 to all cells leads the relative frequency of y = 1 when x = 1 to be larger than when x = 0, in spite of the data suggesting the opposite. The proportion for x = 1 is strongly affected because n_2 is small, whilst the proportion for x = 0 is mostly unaffected because n_1 is so large.
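To make inequality (1) concrete, the following sketch (base R only, with hypothetical cell counts) computes the two adjusted relative frequencies and checks whether the estimated effect keeps the correct sign after the 0.5 adjustment.

```r
# Hypothetical cell counts following table 1: both y and x are rare events,
# and there are no observations with x = 1 and y = 1 (negative quasi-complete separation).
n1 <- 500000  # x = 0, y = 0 (very large)
n2 <- 10      # x = 1, y = 0 (rare independent variable)
n3 <- 100     # x = 0, y = 1 (rare dependent variable)

# Relative frequencies of y = 1 after adding 0.5 to every cell (Firth / Jeffreys prior).
p_x0 <- (n3 + 0.5) / (n1 + n3 + 1)
p_x1 <- 0.5 / (n2 + 1)

# Inequality (1): the point estimate only has the correct (negative) sign if p_x0 > p_x1.
c(p_x0 = p_x0, p_x1 = p_x1, correct_sign = p_x0 > p_x1)
```

With these counts the check fails: the adjusted relative frequency of y = 1 is far higher in the x = 1 column, so the estimated coefficient would be positive despite the negative separation.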

As an illustration, consider the case of war between two nuclear powers. Table 3 displays the appropriate frequencies using data from Rauchhaus (2009).[1] The relationship between these two variables shows quasi-complete separation: there are no observations where both countries had nuclear weapons and went to war.

Table 3: Two Nuke Dyads and War, data from Rauchhaus (2009)

                Not Both Nuclear    Both Nuclear
    No War      454,…               …
    War         62                  0

Following the notation of table 1, the relevant relative frequencies are p_10 = n_3/(n_1 + n_3), a very small positive proportion (62 wars among the several hundred thousand dyad-years in which the two states are not both nuclear), and p_11 = 0/n_2 = 0. Suppose we were to use the Firth penalised maximum likelihood estimator to examine this bivariate relationship. As noted previously, this is akin to adding 0.5 to each cell. The new relative frequencies become p_10 = (n_3 + 0.5)/(n_1 + n_3 + 1), which is essentially unchanged, and p_11 = 0.5/(n_2 + 1) > 0. This approach to dealing with separation thus results in the relative frequency of war in nuclear dyads being six times larger than in dyads where both states do not simultaneously have nuclear weapons. The prior information supplied in the form of the Jeffreys prior therefore leads to the opposite conclusion about the relationship between nuclear weapons and interstate war.

More concretely, whilst the Jeffreys prior is non-informative with regard to the baseline probabilities, it is ultimately informative about the parameters. This is because the Jeffreys prior implies prior distributions for the constant term and for the parameter on x that are not independent. The Jeffreys prior gives more (less) prior mass to large positive coefficients for x than to equally large negative coefficients when evaluated at a large negative (positive) value of the constant term. This ensures that the joint prior distribution remains uninformative with regard to the baseline probability. It becomes problematic because y is a rare event: the likelihood is high at large negative values of the constant, exactly where the Jeffreys prior assigns more mass to large positive values of the coefficient on the variable that leads to separation. The prior therefore pushes the estimate of the coefficient on x towards being positive, even though this is in the direction opposite to that of the separation.

Figure 1 illustrates this for a case in which n_1 is large whilst n_2 and n_3 are small. From the first panel we can see that the Jeffreys prior gives more mass to positive values of the coefficient on x, relative to equally large negative values, when the constant is negative. As the likelihood has high density at the point where the constant takes a large negative value, the Jeffreys prior pulls the posterior towards positive values even though there is negative separation. Thus the uninformative nature of the Jeffreys prior leads to inferences opposite to the direction of the separation.

[1] This is the original data, in which the Kargil War is not considered a war. For problems related to this coding see Bell and Miller (2013).

Figure 1: (a) The joint prior density for the Jeffreys prior, overlaid with contours of the prior. (b) The prior density for the Jeffreys prior, overlaid with the contours of the likelihood. The point indicates the resulting estimate from the penalised maximum likelihood.

This problem is resolved if researchers instead use independent prior distributions centred at zero. The Cauchy prior suggested by Gelman et al. (2008) is one such prior.[2] They advocate using independent Cauchy distributions as priors for the parameters, with a location of zero and a scale of 2.5 for independent variables and 10 for the constant. This retains the attractive property that the posterior point estimate of the coefficient for the variable that leads to separation is never in a direction opposite to the separation.[3]

Figure 2 illustrates the use of independent Cauchy distributions. As can be seen from the prior, equally large positive and negative values for the coefficient on x have equal prior density, independent of the value of the constant.

[2] Whilst suggested by Gelman et al. (2008) as a default prior, other research suggests either assessing the sensitivity of inferences to prior choices (Rainey, 2014) or using a different prior distribution as the default (Ghosh, Li, and Mitra, 2015). For the specific problem outlined here, what matters is that the default prior distributions are placed on the parameters of interest, are independent, and are centred at zero, rather than being placed on the baseline probability.

[3] The most extreme case would be as n_1 → ∞, whilst n_2 and n_3 remain fixed and n_4 = 0. In this case the likelihood provides increasingly little information, so the posterior is approximately the prior, which is centred at zero.
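A minimal sketch of the two estimators on a hypothetical dataset with negative quasi-complete separation and rare events is given below, using the logistf and arm packages that the paper names later; the cell counts are illustrative assumptions rather than values from the paper.

```r
library(logistf)  # Firth's penalised maximum likelihood / Jeffreys prior
library(arm)      # bayesglm: independent Cauchy priors (Gelman et al. 2008)

# Hypothetical cell counts: x and y are both rare, and y = 1 never occurs when x = 1.
n1 <- 50000; n2 <- 10; n3 <- 100
dat <- data.frame(
  y = c(rep(0, n1), rep(0, n2), rep(1, n3)),
  x = c(rep(0, n1), rep(1, n2), rep(0, n3))
)

# Firth / Jeffreys prior: the point estimate can come out positive,
# opposite to the direction of the separation.
fit_firth <- logistf(y ~ x, data = dat)
fit_firth$coefficients["x"]

# Independent Cauchy priors: location 0, scale 2.5 for coefficients and 10 for
# the constant (the arm defaults); the point estimate stays in the direction
# of the separation.
fit_cauchy <- bayesglm(y ~ x, family = binomial(link = "logit"), data = dat,
                       prior.scale = 2.5, prior.scale.for.intercept = 10)
coef(fit_cauchy)["x"]
```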

Figure 2: (a) The joint prior density for the Cauchy priors suggested by Gelman et al. (2008), overlaid with contours of the prior. (b) The joint prior density for the Cauchy priors overlaid with the contours of the likelihood. The point indicates the resulting posterior mode.

As a result, at the point where the likelihood has high density for the constant, the posterior remains in the direction of the separation, as movements towards positive values of the coefficient on x lead to lower prior density. Thus the use of an independent prior density centred at zero ensures that inferences in terms of point estimates remain in the direction of the separation, unlike those based on the Jeffreys prior/penalised maximum likelihood.

To further illustrate this problem I estimate Firth's penalised maximum likelihood estimator on hypothetical data.[4] Following the definition of cell frequencies in table 1, I create datasets from all possible combinations of eleven increasingly large values of n_1 and three small values each of n_2 and n_3.

[4] For all estimations of this form in the paper I use the function logistf from the logistf package in R (Heinze et al., 2013).
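A sketch of how such a grid of hypothetical datasets can be constructed and both estimators applied to each cell of the grid is shown below; the grid values are illustrative assumptions, not the exact values used for figure 3.

```r
library(logistf)
library(arm)

# Illustrative grid: n1 large, n2 and n3 small, no observations with x = 1 and y = 1.
grid <- expand.grid(n1 = c(1000, 10000, 50000),
                    n2 = c(5, 10, 20),
                    n3 = c(5, 10, 20))

results <- do.call(rbind, lapply(seq_len(nrow(grid)), function(i) {
  g <- grid[i, ]
  dat <- data.frame(
    y = c(rep(0, g$n1), rep(0, g$n2), rep(1, g$n3)),
    x = c(rep(0, g$n1), rep(1, g$n2), rep(0, g$n3))
  )
  data.frame(g,
             beta_firth  = unname(logistf(y ~ x, data = dat)$coefficients["x"]),
             beta_cauchy = unname(coef(bayesglm(y ~ x, family = binomial,
                                                data = dat))["x"]))
}))

# beta_firth can turn positive as n1 grows; beta_cauchy keeps the sign of the separation.
results
```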

Figure 3: The estimated effects using Firth's penalised maximum likelihood and Gelman et al.'s Cauchy prior when x and y are rare events and there is negative quasi-complete separation. The x-axis denotes the number of observations where x = 0 and y = 0, the columns indicate the number of observations where x = 1 and y = 0, and the lines denote the number of observations where x = 0 and y = 1.

I also estimate models using independent prior distributions, with the Cauchy prior distribution for β (location zero and scale 2.5) suggested by Gelman et al. (2008).[5] Figure 3 displays the results of estimating these two models on the simulated data. We can see that when the observations with y = 1, x = 0 and with y = 0, x = 1 are rare relative to those with y = 0, x = 0, the penalised maximum likelihood estimator can lead to coefficients in the opposite (positive) direction to that of the separation. In contrast, using the Cauchy prior ensures that the coefficient on x remains in the direction of the separation, although the estimate asymptotically approaches zero.[6]

[5] For all estimations of this form in the paper I use the function bayesglm from the arm package in R (Gelman and Su, 2015). This function uses an approximate Expectation-Maximisation algorithm to find the posterior mode and variance. For other posterior summaries, such as the posterior mean or median, researchers should write the model in software for full Bayesian inference such as MCMCpack (Martin, Quinn, and Park, 2011) or Stan (Stan Development Team, 2014). The drawback in doing so for the rare events in large datasets case discussed here is that this is much slower than the EM algorithm used in the bayesglm function.

[6] Setting n_1 = 1 × 10^7, whilst n_2 and n_3 remain small, yields a very small negative coefficient for the Cauchy prior and a positive coefficient for the Firth penalised maximum likelihood estimator. Thus even in a case that is unlikely to occur in applied statistics, the Cauchy prior still retains the property that the coefficient for x is in the same direction as the separation.

3 Illustration: The Effect of Nuclear Weapons upon Interstate War

Given the above results, I use the replication of Rauchhaus (2009) by Bell and Miller (2013) to illustrate the problem with actual data. Bell and Miller (2013) correctly note that the estimations of Rauchhaus (2009) suffer from separation, and so instead estimate models using Firth's penalised maximum likelihood. Table 4 shows the results of models estimating the bivariate relationship (models 1 and 2), their main estimations (models 3 and 4), and models that do not include the major power variable (models 5 and 6).[7]

Table 4: Replication of Bell and Miller (2013)

                      (1) Firth        (2) Cauchy       (3) Firth        (4) Cauchy       (5) Firth        (6) Cauchy
Constant              [-9.15, -8.65]   [-9.15, -8.65]   [-6.7, -1.51]    [-6.4, -1.7]     [-8.2, -3.54]    [-7.94, -3.52]
twonukedyad           [-3.33, 3.44]    [-4.12, 3.36]    [-5.64, 1.76]    [-4.15, 1.89]    [-3.54, 3.41]    [-4.9, 3.25]
onenukedyad                                             [.17, 1.66]      [.21, 1.62]      [1.38, 2.61]     [1.34, 2.56]
logcapabilityratio                                      [-.9, -.4]       [-.85, -.38]     [-.61, -.18]     [-.59, -.16]
Ally                                                    [-1.17, .25]     [-1.1, .25]      [-1.9, .29]      [-1.4, .32]
SmlDemocracy                                            [-.14, -.1]      [-.13, -.1]      [-.13, ]         [-.12, ]
SmlDependence                                           [ , -26.93]      [-21.73, -2.63]  [ , 14.26]       [ , 23.69]
logdistance                                             [-.95, -.42]     [-.94, -.43]     [-.68, -.16]     [-.69, -.18]
Contiguity                                              [2.16, 3.71]     [2.17, 3.64]     [2.73, 4.31]     [2.71, 4.26]
NIGOs                                                   [-.5, -.1]       [-.5, -.1]       [-.6, -.2]       [-.6, -.2]
MajorPower                                              [1.54, 3.14]     [1.5, 3.1]
n

[7] To ensure consistency of observations across the different models, I first listwise delete observations based upon the model with the largest number of covariates (models 3 and 4) before estimating the subsequent models.
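A sketch of how the Cauchy-prior specifications in table 4 can be estimated, and of how much of the approximate posterior for the separating variable lies in the opposite direction, is given below; the data frame dyads and the dependent variable name war are assumptions about the Bell and Miller (2013) replication data, and the normal approximation uses the posterior mode and standard error returned by bayesglm.

```r
library(arm)

# `dyads` is assumed to be the Bell and Miller (2013) replication data, with the
# covariates shown in table 4 and a binary indicator `war` as the dependent variable.
m4 <- bayesglm(war ~ twonukedyad + onenukedyad + logcapabilityratio + Ally +
                 SmlDemocracy + SmlDependence + logdistance + Contiguity +
                 NIGOs + MajorPower,
               family = binomial(link = "logit"), data = dyads)

# Posterior mode and standard error for the coefficient subject to separation.
b  <- coef(m4)["twonukedyad"]
se <- summary(m4)$coefficients["twonukedyad", "Std. Error"]

# Share of the approximate (normal) posterior lying above zero, i.e. in the
# direction opposite to the observed separation.
pnorm(0, mean = b, sd = se, lower.tail = FALSE)
```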

The results show that whilst the coefficient is in the same direction as the separation in the full model (model 3), the bivariate model (model 1) and the model without the major power variable (model 5) lead to the opposite inference. Whilst this is not a problem in this specific application, it is entirely possible for researchers to face a situation where no such covariate exists to ensure the coefficient is in the right direction.[8] Therefore researchers should not rely on Firth's penalised maximum likelihood as a default solution to separation in binary choice models. Rather, they should use independent prior distributions centred at zero, such as the Cauchy prior suggested by Gelman et al. (2008).[9]

It should also be noted that, despite the coefficient remaining in the same direction as the separation when using the Cauchy prior, the distribution of the estimated coefficient has substantial mass in the opposite direction. In models 2, 4, and 6, approximately 42%, 23%, and 41% of the distribution lies above zero, the opposite direction to the observed separation. Thus whilst independent prior distributions are preferred as a default with regard to point estimates, further attention to tailoring the prior to the specific research question and quantity of interest is also desirable, as noted by Rainey (2014).

4 Conclusion

In this letter I have shown that the commonly suggested penalised maximum likelihood approach to separation (Zorn, 2005) can lead to point estimates opposite to the direction of separation, depending upon features of the data analysed. This occurs when confronted with separation in datasets with a large number of observations where the dependent and independent variables of interest are rare events. Therefore Firth's penalised maximum likelihood/Jeffreys prior is not a suitable default choice for dealing with separation. Instead, independent prior distributions centred at zero, such as the Cauchy prior suggested by Gelman et al. (2008), do not suffer from this problem and are preferred as a default.

[8] In fact this problem was discovered in a statistical analysis which led to an inference opposite to the direction of separation. Importantly, there did not exist an independent variable that changed this incorrect inference.

[9] This is easily implemented in R using the bayesglm function within the arm package.

References

Bell, Mark S., and Nicholas L. Miller. 2013. "Questioning the Effect of Nuclear Weapons on Conflict." Journal of Conflict Resolution.

Firth, David. 1993. "Bias Reduction of Maximum Likelihood Estimates." Biometrika 80(1): 27-38.

Gelman, Andrew, Aleks Jakulin, Maria Grazia Pittau, and Yu-Sung Su. 2008. "A Weakly Informative Default Prior Distribution for Logistic and Other Regression Models." The Annals of Applied Statistics 2(4): 1360-1383.

Gelman, Andrew, and Yu-Sung Su. 2015. arm: Data Analysis Using Regression and Multilevel/Hierarchical Models. R package.

Ghosh, J., Y. Li, and R. Mitra. 2015. "On the Use of Cauchy Prior Distributions for Bayesian Logistic Regression." ArXiv e-prints (July).

Heinze, Georg, Meinhard Ploner, Daniela Dunkler, and Harry Southworth. 2013. logistf: Firth's Bias Reduced Logistic Regression. R package.

Jeffreys, Harold. 1946. "An Invariant Form for the Prior Probability in Estimation Problems." Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 186(1007): 453-461.

Martin, Andrew D., Kevin M. Quinn, and Jong Hee Park. 2011. "MCMCpack: Markov Chain Monte Carlo in R." Journal of Statistical Software 42(9): 22.

Rainey, Carlisle. 2014. "Dealing with Separation in Logistic Regression Models." Working paper.

Rauchhaus, Robert. 2009. "Evaluating the Nuclear Peace Hypothesis: A Quantitative Approach." Journal of Conflict Resolution 53(2): 258-277.

Stan Development Team. 2014. RStan: The R Interface to Stan.

Zorn, Christopher. 2005. "A Solution to Separation in Binary Response Models." Political Analysis 13(2): 157-163.
