PIER HLM Course July 30, 2011 Howard Seltman. Discussion Guide for Bayes and BUGS
1. Classical statistics is based on parameters as fixed unknown values.
   a. The standard approach is to try to discover, e.g., whether some parameter is equal to zero or not. The steps are:
      i. First, posit a probability model (and corresponding assumptions) and null and alternative hypotheses.
      ii. Next, choose a test statistic that tends to differ based on whether the null or alternative hypothesis is correct.
      iii. Next, calculate the distribution of the test statistic over repeated theoretical repetitions of the experiment (the sampling distribution).
      iv. Finally, make a (practical) decision about retaining or rejecting the null hypothesis by comparing the observed statistic to its null sampling distribution (or inverting the test to obtain a confidence interval).
      v. This may be supplemented by power analysis.
      vi. Correct classical interpretation must be limited to something like this: If my assumptions are true, then, over the long run (my lifetime of experiments), 100α% of the times when I happen to run experiments for which H0 is true I will, on average, falsely reject H0, and 100(1-α)% of the time I will correctly retain H0. Also, 100β% of the times when I happen to run experiments for which H0 is false to the specific degree used in calculating power (1-β), I will, on average, falsely retain H0, and 100(1-β)% of the time I will correctly reject H0. If the true effect size is larger than that used in my power calculation I will make fewer type 2 errors, and for smaller effect sizes I will make more type 2 errors.
      vii. Illustration: In a career with 40 null experiments (tested at α = 0.05) and 100 experiments with 80% power for the smallest interesting effect size (optimistic!!), on average we expect to see 2 false positives, 38 true negatives, 80 true positives, and 20 false negatives. From this we can calculate that PV+ = 80/(80+2) = 97.6% and PV- = 38/(38+20) = 65.5%. Every different lifetime experiment description gives a different pair of predictive values. Beyond that, heed the warning: due to dumb luck, results may vary. (The R sketch below repeats this arithmetic.)
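To make the bookkeeping in the illustration concrete, here is a small R sketch; the counts, α = 0.05, and 80% power are taken from the illustration above:

      # Career-long illustration: 40 experiments where H0 is true,
      # 100 where H0 is false; alpha = 0.05, power = 0.80
      n.null <- 40
      n.alt  <- 100
      alpha  <- 0.05
      power  <- 0.80
      false.pos <- n.null * alpha        # expected false positives: 2
      true.neg  <- n.null * (1 - alpha)  # expected true negatives: 38
      true.pos  <- n.alt * power         # expected true positives: 80
      false.neg <- n.alt * (1 - power)   # expected false negatives: 20
      PVpos <- true.pos / (true.pos + false.pos)  # 0.976
      PVneg <- true.neg / (true.neg + false.neg)  # 0.655
      c("PV+" = PVpos, "PV-" = PVneg)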
2. Bayesian statistics is based on parameters having a probability distribution.
   a. Example: We want to know the population difference between the math scores of students taught with methods A and B, say θ = μA - μB.
   b. In classical statistics these parameters are fixed unknowns, and 1) we either want to make a decision on whether or not they are equal (and that decision is either true or false; we never know for sure), or 2) we want to construct, say, a 95% confidence interval [L, U] for which, over our lifetime, when our assumptions are met, on average only about 5% of our 95% CIs will not contain the true value. The CI is random, and the parameter is fixed.
   c. In Bayesian statistics, we start with a prior distribution for θ which reflects our beliefs about θ before we run the experiment. This can be based on earlier experiments and/or can be subjective, based on looser information, say all of our reading about similar methods. In the well-justified subjective approach we may elicit a prior from experts. Also, different experts (or non-experts) may validly hold different prior beliefs, which are operationalized as different prior distributions. If little pertinent prior information is available, it is appropriate to express our uncertainty as a weak (dispersed) prior distribution, e.g., θ ~ N(0, s.d. = 100 points). An alternative is a non-informative prior, which is an objective off-the-shelf distribution, e.g., all values of θ are equally likely before running the experiment. These non-informative priors are often improper, i.e., not a valid probability distribution, which causes problems only in some situations (such as multilevel models). Conjugate priors simplify the analysis by matching the form of the likelihood. Generally, it is a good idea to perform a sensitivity analysis to investigate how sensitive vs. robust the findings are to the chosen specification of the prior distribution (see the sketch following item d).
   d. The goal of a Bayesian analysis is to use the experimental results to create a posterior distribution for the quantities of interest (e.g., θ above) that expresses what we should (in the technical sense) believe about θ now that the experiment is complete.
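As a concrete illustration of a prior sensitivity analysis, here is a minimal R sketch (not from the handout; it uses the conjugate beta-binomial model, where a Beta(a, b) prior on a success probability combined with y successes in n trials gives a Beta(a + y, b + n - y) posterior) comparing the findings under three priors:

      # Prior sensitivity sketch: beta-binomial conjugate model (illustrative data)
      y <- 12; n <- 20
      priors <- list(flat = c(1, 1), weak = c(2, 2), skeptical = c(10, 10))
      for (nm in names(priors)) {
        a <- priors[[nm]][1]; b <- priors[[nm]][2]
        post.mean <- (a + y) / (a + b + n)               # posterior mean
        ci <- qbeta(c(0.025, 0.975), a + y, b + n - y)   # 95% credible interval
        cat(sprintf("%-9s posterior mean = %.3f, 95%% CrI = (%.3f, %.3f)\n",
                    nm, post.mean, ci[1], ci[2]))
      }

If all the intervals lead to the same practical conclusion, the analysis is robust to the prior; if not, the choice of prior matters and should be reported and defended.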
   e. One tool of Bayesian statistics is Bayes' theorem, which can be expressed as:
         P(θ|Y) = P(Y|θ) P(θ) / Σ_θ' P(Y|θ') P(θ')
      where the summation (or integral) is over all possible values of θ. Here θ is the parameter of interest, Y is the data, P(θ) is the prior distribution of the parameter, P(Y|θ) is the likelihood, i.e., it expresses how likely the experimental outcome is for any given value of the parameter according to the model, and P(θ|Y) is the posterior distribution of the parameter.
      i. Example: Three kinds of coins are manufactured with long-run heads probabilities of θ1 = 0.3, θ2 = 0.5, and θ3 = 0.7. Assume we know that fair coins are manufactured at four times the rate of the other two. We will flip the coin 5 times and count the number of heads (Y). Let θ be the true heads chance of the single coin we have. The possible values of θ are {0.3, 0.5, 0.7} with corresponding prior probabilities P(θ) = {1/6, 2/3, 1/6} or {0.167, 0.667, 0.167}. Using the binomial distribution we know that:
         P(Y|θ) = (5 choose Y) θ^Y (1-θ)^(5-Y)
      so that P(Y=2|θ) for θ = 0.3, 0.5, or 0.7 is 10 × 0.3^2 × 0.7^3 = 0.3087, 10 × 0.5^2 × 0.5^3 = 0.3125, or 10 × 0.7^2 × 0.3^3 = 0.1323, respectively. If we observe Y=2, then the denominator sum in Bayes' formula is (1/6)(0.3087) + (2/3)(0.3125) + (1/6)(0.1323) = 0.2818, and the posterior probabilities are:
         P(θ=0.3 | Y=2) = (1/6)(0.3087) / 0.2818 = 0.18
         P(θ=0.5 | Y=2) = (2/3)(0.3125) / 0.2818 = 0.74
         P(θ=0.7 | Y=2) = (1/6)(0.1323) / 0.2818 = 0.08
      (This calculation is repeated in R after item f.)
   f. The posterior distributions are the main conclusions of a Bayesian analysis. Unlike classical analysis, because parameters have distributions you can directly and correctly state that, e.g., the probability that θ is between 5.0 and 10.0 is 95%, or that the probability that θ is greater than zero is 56%. Derived posterior probabilities of quantities that combine parameters are easy. Probabilistic model comparison is easy through Bayes factors. Models can also be combined through Bayesian model averaging.
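The posterior calculation in the coin example (e.i above) takes a few lines of R, a direct transcription of the worked example using dbinom() for the binomial likelihood:

      # Coin example: three possible heads probabilities with unequal prior weight
      theta <- c(0.3, 0.5, 0.7)
      prior <- c(1/6, 2/3, 1/6)
      like  <- dbinom(2, size = 5, prob = theta)   # P(Y=2|theta): 0.3087 0.3125 0.1323
      post  <- prior * like / sum(prior * like)    # Bayes' theorem
      round(post, 2)                               # 0.18 0.74 0.08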
   g. Although direct calculation of posterior distributions is sometimes possible, in practice for most complex problems a sample from the posterior distribution is generated, and all practical results can be obtained from this posterior sample. The main computational tools for generating a posterior sample are the Gibbs sampler and MCMC (Markov chain Monte Carlo), often in combination. Briefly, the Gibbs sampler breaks up large problems into smaller, more manageable pieces, and MCMC is a general-purpose way to generate a value from a particular posterior distribution more-or-less directly from the likelihood and prior distribution, without computing the normalizing denominator of Bayes' theorem (see the sketch following item h). Although there are many practical difficulties that often require knowledge and experience to solve, the use of MCMC allows very complex problems to be analyzed, including many situations where the classical sampling distributions are intractable.
   h. The main tool for constructing models and putting them into the Bayesian calculation apparatus is the directed acyclic graph (DAG), which pictorially represents the data and parameters and their relationships, particularly conditional independence.
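To give a flavor of how MCMC produces a posterior sample, here is a minimal random-walk Metropolis sketch in R (an illustration with made-up data, not the handout's example). It samples the posterior of a normal mean; note that only the prior × likelihood product is ever evaluated, never the denominator of Bayes' theorem:

      # Random-walk Metropolis for mu, where y ~ N(mu, 1) and prior mu ~ N(0, 10^2)
      set.seed(1)
      y <- c(4.1, 5.3, 4.7, 5.0, 4.4)   # toy data (assumed)
      log.post <- function(mu)          # log(prior) + log(likelihood), unnormalized
        dnorm(mu, 0, 10, log = TRUE) + sum(dnorm(y, mu, 1, log = TRUE))
      n.iter <- 5000
      mu <- numeric(n.iter)             # chain starts at mu[1] = 0
      for (t in 2:n.iter) {
        prop <- mu[t - 1] + rnorm(1, 0, 0.5)   # propose a nearby value
        # accept with probability min(1, posterior ratio); otherwise stay put
        mu[t] <- if (log(runif(1)) < log.post(prop) - log.post(mu[t - 1]))
                   prop else mu[t - 1]
      }
      mean(mu[-(1:1000)])   # posterior mean after discarding burn-in (about 4.7)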
3. BUGS and rube
   a. Documentation:
   b. BUGS is Windows-only, popular, and free. You can set up models via a GUI or by entering computer code. My package, rube, allows running BUGS from inside R, and it provides extra functionality such as meaningful error messages and on-the-fly changing of IVs, including interactions.
   c. Here is a GUI-generated DAG for a simple school example:
      [Figure: GUI-generated DAG for the school example]
   d. Here is the code for a complete example (schoolmodel.txt):

      # M is the number of neighborhoods
      # NN is the total number of students
      # nbhd[j] is the neighborhood for student j
      # test[j] is the score (outcome) for student j
      model {
        for (i in 1:M) {
          betaclass[i] ~ dnorm(muc, precc)
        }
        for (j in 1:NN) {
          ability[j] <- betaclass[nbhd[j]] + betap*pretest[j] + betam*male[j] +
                        betas*Nses[nbhd[j]] + betac*Ncrime[nbhd[j]]
          test[j] ~ dnorm(ability[j], precerr)I(0,100)   # scores truncated to [0,100]
        }
        # Priors (note: dnorm in BUGS takes mean and precision = 1/variance)
        muc ~ dnorm(50, 0.001)
        precc ~ dgamma(0.0001, 0.0001)
        precerr ~ dgamma(0.0001, 0.0001)
        betap ~ dnorm(0, 0.001)
        betam ~ dnorm(0, 0.001)
        betas ~ dnorm(0, 0.001)
        betac ~ dnorm(0, 0.001)
        sdclass <- sqrt(1/precc)     # derived: between-neighborhood s.d.
        sderr <- sqrt(1/precerr)     # derived: residual s.d.
      }

   e. Here is code to run this using rube (MixedBugs.R):

      # Submit school/neighborhood data to BUGS (crossed hierarchical model)

      # Starting values (from lmer(); should be estimated from data)
      thisinit = function() {
        list(muc = rnorm(1, 69),
             precc = rnorm(1, 1/8^2, 0.02),
             precerr = rnorm(1, 1/10^2, 0.02),
             betap = rnorm(1, 5, 0.2),
             betam = rnorm(1, -1, 0.2),
             betas = rnorm(1, 3, 0.2),
             betac = rnorm(1, -8, 0.2),
             betaclass = rnorm(M, 0, 0.5))
      }

      # Data
      snd = read.table("schoolneighbor.dat", header = TRUE)
      sizes = as.vector(table(snd$neighborhoodID))  # students per neighborhood
                                                    # (assumed; defined off-screen in the original)
      M = length(sizes)
      first = c(1, cumsum(sizes)[-M] + 1)  # row of each neighborhood's first student
      NN = nrow(snd)
      thisdata = list(M = M, NN = NN, nbhd = snd$neighborhoodID, test = snd$test,
                      pretest = snd$pretest, male = snd$male,
                      Nses = snd$Nses[first], Ncrime = snd$Ncrime[first])
      # Run the model through WinBUGS
      require("rube")
      rube("schoolmodel.txt", data = thisdata, inits = thisinit)

      Rube Results:

      Constants:
                 Size  Min  Max  Mean  SD  NAs
      M           ...
      NN          ...
      nbhd        ...
      pretest     ...
      male        ...
      Nses        ...
      Ncrime      ...

      Data:
             Distr           Size  NAs  Initial Value(s)  [Range]    Flags
      test   dnorm_I(0,100)  ...   ...  ...               [28, 100]

      Stochastics:
                 Distr   Size  Parameters    Initial Value(s)  [Range]
      betaclass  dnorm     13  muc, precc    ...               [-0.591, 1.352]
      muc        dnorm      1  50, 0.001     ...
      precc      dgamma     1  1e-04, 1e-04  ...
      precerr    dgamma     1  1e-04, 1e-04  ...
      betap      dnorm      1  0, 0.001      ...
      betam      dnorm      1  0, 0.001      ...
      betas      dnorm      1  0, 0.001      ...
      betac      dnorm      1  0, 0.001      ...

   f. Here are the results. Equivalent lmer() code:

      L1 = lmer(test ~ male + pretest + Nses + Ncrime + (1 | neighborhoodID), data = hw4)
      # Random effects:
      #  Groups          Name         Variance  Std.Dev.
      #  neighborhoodID  (Intercept)  ...       ...
      #  Residual                     ...       ...
      # Number of obs: 468, groups: neighborhoodID, 13
      #
      # Fixed effects:
      #              Estimate  Std. Error  t value
      # (Intercept)  ...
      # male         ...
      # pretest      ...
      # Nses         ...
      # Ncrime       ...
      rube results:

      my.params = c("sdclass", "sderr", "betap", "betam", "betas", "betac")
      rslt = rube("schoolmodel.txt", data = thisdata, inits = thisinit,
                  parameters.to.save = my.params,
                  n.iter = 3000, n.burnin = 1000, n.thin = 1)
      rslt

      Rube Results: Run at :04 and taking 3.39 secs
                mean  sd  MCMCerr  2.5%  25%  50%  75%  97.5%  Rhat  n.eff
      sdclass    ...
      sderr      ...
      betap      ...
      betam      ...
      betas      ...
      betac      ...

      [Figure: MCMC diagnostics for betap (~ dnorm): trace plot of betap vs. iteration number (with Rhat), posterior density, and autocorrelation (ACF) vs. lag]
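With the posterior sample in hand, direct probability statements like those in item 2.f are one-liners. A sketch, assuming the rube result object stores the saved draws in a sims.list component (the layout used by R2WinBUGS-style result objects; check the rube documentation for your version):

      # Posterior probability and credible interval from the posterior sample
      betap.draws <- rslt$sims.list$betap      # MCMC draws of betap (assumed layout)
      mean(betap.draws > 0)                    # P(betap > 0 | data)
      quantile(betap.draws, c(0.025, 0.975))   # 95% posterior (credible) interval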
   g. Comments
      i. Bayesian analysis, e.g., using BUGS, is always appropriate if you are philosophically Bayesian.
      ii. Bayesian analysis matches classical analysis in many ways in many circumstances, e.g., with very weak or non-informative prior distributions.
      iii. Bayesian analysis is appropriate when you want to incorporate prior information into your analysis.
      iv. Bayesian analysis tends to more appropriately model all sources of variation.
      v. Bayesian analysis is often a first choice for unusual models that have no existing software.
      vi. Many difficulties can arise, such as slow convergence, highly correlated posteriors, choice of MCMC proposal distributions, choice of blocks of parameters to update simultaneously, and difficulty in specifying complex priors.