Bayesian Methods for Testing Axioms of Measurement


1 Bayesian Methods for Testing Axioms of Measurement
George Karabatsos, University of Illinois-Chicago
University of Minnesota, Quantitative/Psychometric Methods Area, Department of Psychology
Friday, April 3, 2015
Supported by NSF-MMS Research Grants SES and SES

2 Outline
I. Introduction: Axioms of Measurement.
   A. Relations of axioms to IRT models.
   B. Rasch, 2PL, Monotone Homogeneity, and Double-Monotone IRT models.
II. General Bayesian Model for Axiom Testing.
   A. Model estimation (MCMC).
   B. Axiom testing procedures.
III. Empirical Illustrations of Bayesian Axiom Testing.
   a) Convict data (originally analyzed by Perline, Wright, & Wainer, 1979, APM).
   b) NAEP reading test data.
IV. Dealing with axiom violations: a Bayesian nonparametric, outlier-robust IRT model, with application to a teacher preparation survey from PIRLS.
V. Extensions of the Bayesian axiom testing model.
VI. Conclusions.

3 I. Introduction
IRT models aim to represent, via model parameters, persons (examinees) and items on ordinal or interval scales of measurement. In IRT practice, such measurement scales are assumed for the parameters. The ability to represent persons and items on ordinal or interval scales depends on the data satisfying a set of key cancellation axioms (Luce & Tukey, 1964, JMP). These axioms are deterministic, but we can also state them in probabilistic terms. We first briefly consider the deterministic case, to motivate the probabilistic approach.

4 I. (Deterministic) Axioms of Measurement
Array of responses Y(i,j), with rows i = 0,...,6 indexing levels of the row variable and columns j = 1,...,6 indexing levels of the column variable:

          j=1     j=2     j=3     j=4     j=5     j=6
 i=0   Y(0,1)  Y(0,2)  Y(0,3)  Y(0,4)  Y(0,5)  Y(0,6)
 i=1   Y(1,1)  Y(1,2)  Y(1,3)  Y(1,4)  Y(1,5)  Y(1,6)
 i=2   Y(2,1)  Y(2,2)  Y(2,3)  Y(2,4)  Y(2,5)  Y(2,6)
 i=3   Y(3,1)  Y(3,2)  Y(3,3)  Y(3,4)  Y(3,5)  Y(3,6)
 i=4   Y(4,1)  Y(4,2)  Y(4,3)  Y(4,4)  Y(4,5)  Y(4,6)
 i=5   Y(5,1)  Y(5,2)  Y(5,3)  Y(5,4)  Y(5,5)  Y(5,6)
 i=6   Y(6,1)  Y(6,2)  Y(6,3)  Y(6,4)  Y(6,5)  Y(6,6)

5 I. Deterministic Single Cancellation Axiom
The same Y(i,j) array as above: within each column, an order premise between entries implies the corresponding order implication, and likewise within each row. Like a Guttman scale (1950).

6 I. Probabilistic Measurement Theory
[Table: rows i = ability level (test score); columns j = test items in easiness order; cell entries are probabilities θ_ij.]
Define: θ_ij = probability that a person with score level i answers item j correctly.

7 I. Single Cancellation Axiom (rows)
[Table: test items in easiness order (columns j) by ability level / test score (rows i); within each row, a premise ordering of the θ_ij implies the corresponding implication.]

8 I. Single Cancellation Axiom (rows)
Key axiom for representing person ability (test score) on an ordinal scale.
All Item Response Theory models of the form Pr(Y_j = 1 | θ) = G_j(θ), for non-decreasing G_j : R → [0,1], assume this axiom.
Examples of such IRT models:
1PL Rasch model: Pr(Y_j = 1 | θ) = exp(θ − β_j) / [1 + exp(θ − β_j)]
2PL: Pr(Y_j = 1 | θ) = exp(a_j{θ − β_j}) / [1 + exp(a_j{θ − β_j})]
3PL: Pr(Y_j = 1 | θ) = c_j + (1 − c_j) exp(a_j{θ − β_j}) / [1 + exp(a_j{θ − β_j})]
MH model: Pr(Y_j = 1 | θ) is non-decreasing in θ.
DM model: Pr(Y_j = 1 | θ) is non-decreasing in θ, AND invariant item ordering (IIO): Pr(Y_1 = 1 | θ) < Pr(Y_2 = 1 | θ) < ... < Pr(Y_J = 1 | θ) for all θ.

9 I. Single Cancellation Axiom
[Table: test items in easiness order (columns j) by ability level / test score (rows i); within each row, and within each column, a premise ordering of the θ_ij implies the corresponding implication.]

10 I. Single Cancellation Axiom
Key axiom for representing person ability (test score) and item easiness (difficulty) on a common ordinal scale.
Examples of IRT models that (fully) assume single cancellation:
1PL Rasch model: Pr(Y_j = 1 | θ) = exp(θ − β_j) / [1 + exp(θ − β_j)]
OPLM model: Pr(Y_j = 1 | θ) = exp(a_j{θ − β_j}) / [1 + exp(a_j{θ − β_j})], with the discrimination indices a_j fixed as known constants.
DM model: Pr(Y_j = 1 | θ) is non-decreasing in θ, and IIO: Pr(Y_1 = 1 | θ) < Pr(Y_2 = 1 | θ) < ... < Pr(Y_J = 1 | θ) for all θ.

11 I. Double Cancellation Axiom
[Table: test items in easiness order (columns j) by ability level / test score (rows i); two premise orderings jointly imply the corresponding implication.]
The axiom must hold for all 3 × 3 submatrices.

12 I. Triple Cancellation Axiom
[Table: test items in easiness order (columns j) by ability level / test score (rows i); three premise orderings jointly imply the corresponding implication.]
The axiom must hold for all 4 × 4 submatrices.

13 I. Single, Double, Triple, and All Higher-Order Cancellation Axioms
Key axioms for representing person ability (test score) and item easiness (difficulty) on a common interval scale. All of these axioms, together, are the axioms of additive conjoint measurement.
Examples of IRT models that (fully) assume these cancellation axioms:
1PL Rasch model (logistic): Pr(Y_j = 1 | θ) = exp(θ − β_j) / [1 + exp(θ − β_j)]
Any 1PL model of the form Pr(Y_j = 1 | θ) = G(θ − β_j), for a non-decreasing G : R → [0,1] common to all test items.
All of the preceding discussion of measurement axioms and IRT also applies to polytomous IRT models.

14 How to Test Measurement Axioms?
Even the probabilistic measurement axioms are deterministic: they assert deterministic order relations among probabilities.
To test the Rasch model, Perline, Wright, & Wainer (PWW; 1979, APM) analyzed data from a 10-item, dichotomously-scored test administered to 2500 released convicts (from Hoffman & Beck, 1974). The test inquires about the subject's criminal history.
PWW tested the conjoint measurement axioms on real data by counting the number of axiom violations: for example, the number of rows violating single cancellation, and the number of 3 × 3 submatrices violating double cancellation.
This axiom-testing approach does not distinguish between small and large axiom violations. We illustrate this issue now.
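PWW-style violation counting can be sketched in a few lines of Python. This is an illustrative reconstruction, not PWW's code: the function names are ours, and the particular premise/implication form of double cancellation used here is one standard statement of the axiom.

```python
import itertools

def single_cancellation_violations(P):
    # Count pairwise order violations of single cancellation in a matrix P
    # of proportions-correct: rows ordered by test score (low to high),
    # columns ordered by item easiness (hard to easy).
    I, J = len(P), len(P[0])
    v = 0
    for i in range(I - 1):          # rows: a higher score group should do better
        v += sum(P[i][j] > P[i + 1][j] for j in range(J))
    for j in range(J - 1):          # columns: an easier item should be passed more
        v += sum(P[i][j] > P[i][j + 1] for i in range(I))
    return v

def double_cancellation_violations(P):
    # Count 3x3 submatrices (rows a < b < c, columns x < y < z) in which the
    # two premises P[b][x] >= P[a][y] and P[c][y] >= P[b][z] hold, but the
    # implication P[c][x] >= P[a][z] fails.
    I, J = len(P), len(P[0])
    v = 0
    for a, b, c in itertools.combinations(range(I), 3):
        for x, y, z in itertools.combinations(range(J), 3):
            if P[b][x] >= P[a][y] and P[c][y] >= P[b][z] and P[c][x] < P[a][z]:
                v += 1
    return v
```

As the slides note, such raw counts treat a .01 reversal and a .40 reversal identically, which motivates the Bayesian approach below.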

15 True or Random Violation of the Single Cancellation Axiom?

16 True or Random Violation of the Single and Double Cancellation Axioms?
Apparent single cancellation axiom violations in red; apparent double cancellation axiom violations in purple.

17 How to Test Measurement Axioms?
The number of axiom violations, as a statistic, has an intractable sampling distribution for the purposes of hypothesis testing.
The false discovery rate approach to multiple testing (Benjamini & Hochberg, 1995, JRSSB) is not easily applicable, because the different axioms, such as single cancellation and double cancellation, are dependent on one another.

18 II. Bayesian Model for Axiom Testing
The data:
n = (n_ij)_{(I+1)×J}, where n_ij is the number correct in test score group i for item j;
N = (N_ij)_{(I+1)×J}, where N_ij is the number in test score group i who completed item j.
MLE: p = (p_ij)_{(I+1)×J} = (n_ij / N_ij)_{(I+1)×J}.
Data likelihood:
L(n | N, θ) = Π_{i=0}^{I} Π_{j=1}^{J} C(N_ij, n_ij) θ_ij^{n_ij} (1 − θ_ij)^{N_ij − n_ij}.
Prior density, i.e., the set of axioms:
π(θ) = [ Π_{i=0}^{I} Π_{j=1}^{J} be(θ_ij | a_ij, b_ij) 1(θ ∈ A) ] / ∫ Π_{i=0}^{I} Π_{j=1}^{J} be(θ_ij | a_ij, b_ij) 1(θ ∈ A) dθ,
where be(· | a, b) is the beta p.d.f., Be(· | a, b) is the beta c.d.f., Be^{−1}(u | a, b) is its quantile, and 1(θ ∈ A) = 1 if θ ∈ A, 0 otherwise.
Often in practice, a = b = 1 (truncated uniform prior) or a = b = ½ (truncated reference prior).
Example: single cancellation axiom (rows & columns):
A = {θ : θ_ij < θ_{i+1,j} for i = 0, 1, ..., I − 1, and θ_ij < θ_{i,j+1} for j = 1, ..., J − 1}
(i: test score level; j indexes items in item easiness order).

19 II. Bayesian Model for Axiom Testing
Posterior density (distribution):
π(θ | N, n, A) = L(n | N, θ) π(θ) / ∫ L(n | N, θ) π(θ) dθ
∝ Π_{i=0}^{I} Π_{j=1}^{J} C(N_ij, n_ij) θ_ij^{n_ij} (1 − θ_ij)^{N_ij − n_ij} be(θ_ij | a_ij, b_ij) 1(θ ∈ A)
∝ Π_{i=0}^{I} Π_{j=1}^{J} be(θ_ij | a_ij + n_ij, b_ij + N_ij − n_ij) 1(θ ∈ A).

20 II. Bayesian Model for Axiom Testing
The posterior distribution Π(θ | N, n, A) cannot be numerically evaluated directly, so it is sampled by MCMC, using the full conditional posterior p.d.f.s (f.c.p.s):
π(θ_ij | N, n, θ_{\ij}) ∝ be(θ_ij | a_ij + n_ij, b_ij + N_ij − n_ij) 1(θ ∈ A), for all i, j.
Each MCMC sampling iteration: for every pair (i, j) in turn, update θ_ij by drawing u_ij ~ U(0, 1), and then taking
θ_ij = Be^{−1}( Be(θ_ij^min | a_ij + n_ij, b_ij + N_ij − n_ij) + u_ij [ Be(θ_ij^max | a_ij + n_ij, b_ij + N_ij − n_ij) − Be(θ_ij^min | a_ij + n_ij, b_ij + N_ij − n_ij) ] | a_ij + n_ij, b_ij + N_ij − n_ij ),
where (θ_ij^min, θ_ij^max) is the interval allowed for θ_ij by the constraint set A, given the current values of the other θ's (the inverse c.d.f. sampling method; Devroye, 1986).
As the number of MCMC iterations S gets larger, the MCMC chain {θ^(s)}_{s=1,...,S} converges to samples from the posterior distribution Π(θ | N, n, A).
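The inverse-c.d.f. Gibbs update described above can be sketched in pure Python. This is a minimal sketch (function names ours) assuming the truncated-uniform prior a_ij = b_ij = 1, so that the full-conditional beta parameters are integers and the beta c.d.f. can be computed exactly via a binomial sum, and assuming the row-and-column single cancellation constraint set A:

```python
import math
import random

def beta_cdf(x, a, b):
    # Regularized incomplete beta Be(x | a, b) for INTEGER a, b >= 1, via
    # the identity I_x(a, b) = sum_{j=a}^{a+b-1} C(a+b-1, j) x^j (1-x)^(a+b-1-j).
    m = a + b - 1
    return sum(math.comb(m, j) * x**j * (1.0 - x)**(m - j) for j in range(a, m + 1))

def beta_ppf(u, a, b, tol=1e-12):
    # Beta quantile Be^{-1}(u | a, b), by bisection on the monotone c.d.f.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if beta_cdf(mid, a, b) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gibbs_sweep(theta, n, N, a=1, b=1, rng=random):
    # One Gibbs sweep: sample each theta_ij from its full conditional
    # Beta(a + n_ij, b + N_ij - n_ij), truncated to the interval
    # (theta_min, theta_max) implied by the single cancellation constraints
    # theta_ij < theta_{i+1,j} and theta_ij < theta_{i,j+1}.
    I, J = len(theta), len(theta[0])
    for i in range(I):
        for j in range(J):
            t_min, t_max = 0.0, 1.0
            if i > 0: t_min = max(t_min, theta[i - 1][j])
            if j > 0: t_min = max(t_min, theta[i][j - 1])
            if i < I - 1: t_max = min(t_max, theta[i + 1][j])
            if j < J - 1: t_max = min(t_max, theta[i][j + 1])
            aa, bb = a + n[i][j], b + N[i][j] - n[i][j]
            F_min, F_max = beta_cdf(t_min, aa, bb), beta_cdf(t_max, aa, bb)
            u = rng.random()
            theta[i][j] = beta_ppf(F_min + u * (F_max - F_min), aa, bb)
    return theta
```

Because every update respects the bounds implied by the current neighbors, the order constraints in A hold at every iteration; repeated sweeps yield the MCMC chain {θ^(s)}.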

21 II. Bayesian Model for Axiom Testing
Possible ways to test axioms from the model:
1. Check whether p_ij = n_ij / N_ij lies within the 95% posterior interval of the marginal posterior distribution Π(θ_ij | N, n, A). Decide violation of the axiom(s) if p_ij falls outside the 95% posterior interval.
2. Compute the posterior predictive p-value (Karabatsos & Sheu, 2004, APM):
p-value_ij = ∫∫ 1( χ²(p_ij^rep; θ_ij) ≥ χ²(p_ij; θ_ij) ) f(p_ij^rep | θ_ij) dΠ(θ | N, n, A) dp_ij^rep,
with χ²(p_ij; θ_ij) = (N_ij p_ij − N_ij θ_ij)² / (N_ij θ_ij), n_ij^rep | N_ij, θ_ij ~ bin(n_ij^rep | N_ij, θ_ij), and p_ij^rep = n_ij^rep / N_ij.
Decide violation of the axiom(s) if p-value_ij < .05 (or smaller).
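The double integral in method 2 is naturally evaluated by Monte Carlo over the MCMC draws; a sketch for a single (i, j) cell, with function names of our own choosing and the chi-square discrepancy as given above:

```python
import random

def chi_sq(p, theta, N):
    # Discrepancy chi^2(p; theta) = (N p - N theta)^2 / (N theta).
    return (N * p - N * theta) ** 2 / (N * theta)

def post_pred_pvalue(theta_draws, n, N, rng=random):
    # Monte Carlo estimate of the posterior predictive p-value for one
    # (i, j) cell: for each posterior draw theta^(s) of theta_ij, simulate
    # n_rep ~ Binomial(N_ij, theta^(s)) and record whether the replicated
    # discrepancy is at least as large as the realized one.
    p = n / N
    exceed = 0
    for theta in theta_draws:
        n_rep = sum(rng.random() < theta for _ in range(N))  # Binomial draw
        if chi_sq(n_rep / N, theta, N) >= chi_sq(p, theta, N):
            exceed += 1
    return exceed / len(theta_draws)
```

A small p-value means the observed proportion p_ij is extreme relative to data replicated under the axiom-constrained posterior, flagging that cell as a likely violation.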

22 II. Bayesian Model for Axiom Testing
Possible ways to test axioms from the model (continued):
3. Consider the Deviance Information Criterion (DIC):
DIC = D(θ̄) + 2( D̄ − D(θ̄) ),
with deviance D(θ) = −2 Σ_{i=0}^{I} Σ_{j=1}^{J} [ n_ij log θ_ij + (N_ij − n_ij) log(1 − θ_ij) ],
deviance at the posterior mean D(θ̄), where θ̄ = E(θ | N, n, A),
and posterior mean of the deviance D̄ = ∫ D(θ) dΠ(θ | N, n, A).
D(θ̄) is the goodness- (badness-) of-fit term; 2( D̄ − D(θ̄) ) is a model flexibility penalty, given by 2 times the effective number of model parameters.
Consider DIC(A) of the model under the axiom (order) constraints, and DIC(U) for the unconstrained model (no order constraints). Decide violation of the axiom(s) if DIC(A) > DIC(U).
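Given the MCMC draws, the DIC above is a short computation; a sketch with our own function names, estimating θ̄ and D̄ by their Monte Carlo averages:

```python
import math

def deviance(theta, n, N):
    # D(theta) = -2 sum_ij [ n_ij log theta_ij + (N_ij - n_ij) log(1 - theta_ij) ].
    return -2.0 * sum(
        n[i][j] * math.log(theta[i][j])
        + (N[i][j] - n[i][j]) * math.log(1.0 - theta[i][j])
        for i in range(len(n)) for j in range(len(n[0])))

def dic(theta_draws, n, N):
    # DIC = D(theta_bar) + 2 (D_bar - D(theta_bar)), computed from MCMC
    # draws theta_draws = [theta^(1), ..., theta^(S)] of the theta matrix.
    S = len(theta_draws)
    I, J = len(n), len(n[0])
    theta_bar = [[sum(t[i][j] for t in theta_draws) / S for j in range(J)]
                 for i in range(I)]
    D_bar = sum(deviance(t, n, N) for t in theta_draws) / S
    D_hat = deviance(theta_bar, n, N)
    return D_hat + 2.0 * (D_bar - D_hat)
```

Running this once on draws from the axiom-constrained sampler and once on draws from the unconstrained model gives DIC(A) and DIC(U) for the comparison described above.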

23 Apparent single cancellation axiom violations in red

24 Test of single cancellation (over rows only): no significant violation of single cancellation over rows. Results from Karabatsos (2001, JAM).

25 Test of single cancellation (over rows and columns): significant violation of the single cancellation axiom. Results from Karabatsos (2001, JAM).

26 True or Random Violation of the Single and Double Cancellation Axioms?
Apparent single cancellation axiom violations in red; apparent double cancellation axiom violation in purple.

27 Test of single and double cancellation: significant violation of the single and double cancellation axioms (Karabatsos, 2001, JAM).

28 NAEP reading test data (100 examinees, 6 items). Posterior predictive chi-square test of single cancellation (over rows); violations indicated in bold. Results from Karabatsos & Sheu (2004, APM). George Karabatsos, 3/27/2015

29 NAEP reading test data (100 examinees, 6 items). Posterior predictive chi-square test of single cancellation (over columns); violations indicated in bold. Results from Karabatsos & Sheu (2004, APM).

30 IV. Dealing With Axiom Violations
We have seen from the previous two empirical applications that the measurement axioms can be violated, even by data arising from carefully constructed tests.
One way to deal with this problem is to define a more flexible IRT model that can handle outliers: a flexible Bayesian nonparametric, outlier-robust IRT model. We will present and briefly illustrate the model through the analysis of data from a teacher preparation survey from PIRLS: 244 respondents (teachers), each of whom rated (0-2) their own level of teacher preparation on 10 items: CERTIFICATE, LANGUAGE, LITERATURE, PEDAGOGY, PSYCHOLOGY, REMEDIAL, THEORY, LANGDEV, SPED, SECLANG. Also included are the covariates AGE, FEMALE, and Miss:FEMALE.

31 BNP-IRT model (Karabatsos, 2015, Handbook of Modern IRT)
Likelihood: f(D | X; Ψ) = Π_{p=1}^{P} Π_{j=1}^{J} f(y_pj | x_pj; Ψ),
f(y_pj | x_pj; Ψ) = Pr(Y_pj = 1 | x_pj; Ψ)^{y_pj} [1 − Pr(Y_pj = 1 | x_pj; Ψ)]^{1 − y_pj},
Pr(Y = 1 | x; Ψ) = 1 − F_0(x; Ψ) = ∫_0^∞ f(y* | x; Ψ) dy*,
where the latent response density is the infinite mixture
f(y* | x; Ψ) = Σ_k ω_k(x; β_ω, σ_ω) n(y* | μ_k + x′β, σ²),
with mixture weights ω_k(x; β_ω, σ_ω) = Φ([k − x′β_ω] / σ_ω) − Φ([k − 1 − x′β_ω] / σ_ω),
and priors μ_k ~ iid N(0, σ_μ²), σ ~ U(0, b_σ), β ~ N(0, σ² v diag(·)), β_ω ~ N(0, σ² v I), with σ_μ² ~ IG(a_0/2, a_0/2) and σ_ω² ~ IG(a_ω/2, a_ω/2).
Persons (examinees) are indexed by p = 1,...,P; test items by j = 1,...,J.

32 [Figure]

33 Absolutely no item response outliers under the BNP-IRT model.

34 For the BNP-IRT model, boxplots of the marginal posterior distributions of the item, covariate, and prior parameters (dependent variable: itemrespvs0). [Figure: boxplots over the parameters beta0, beta:CERTIFICATE(1), ..., beta:Miss:FEMALE(2), sigma^2, sigma^2_mu, beta_w0, beta_w:CERTIFICATE(1), ..., beta_w:Miss:FEMALE(2), sigma^2_w.]
The estimated posterior means of the person ability parameters were found to be distributed with mean .00, s.d. .46, minimum −.66, and maximum 3.68, for the 244 persons.

35 V. Conclusions
The ability to measure persons and/or items on an ordinal or interval scale depends on the data satisfying a hierarchy of conjoint measurement axioms, including the single, double, and triple cancellation conditions, and all higher-order cancellation conditions.
We presented a Bayesian model that can represent a set of one or more axioms in terms of order constraints on binomial parameters, with the constraints enforced by the prior distribution. This model provides a coherent approach to testing the measurement axioms on real data sets.

36 V. Conclusions
Applications of the Bayesian axiom testing model showed that the measurement axioms can be violated even by data arising from carefully constructed tests.
As a possible remedy to this issue, we propose a more flexible BNP-IRT model that can provide estimates of person and item parameters that are robust to any item response outliers in the data.
In a sense, the BNP-IRT model is not wrong for the data: it is a highly flexible model, which makes rather irrelevant the practice of model checking, axiom testing, or model fit analysis (for related arguments, see Karabatsos & Walker, 2009, BJMSP).

37 V. Conclusions
The Bayesian axiom testing model of Karabatsos (2001) was later used to
-- test decision theory axioms (e.g., Myung et al., 2005, JMP);
-- test measurement axioms (e.g., Kyngdon, 2011; Domingue, 2012). The latter author suggested a minor modification to the MH algorithm of Karabatsos (2001) to handle more orderings under double cancellation. Like Karabatsos & Sheu (2004), this talk focused on a Gibbs sampler, which for MCMC practice is usually preferable to a rejection sampler such as the MH algorithm.
Karabatsos (2005, JMP) defined a binomial parameter θ as the probability of a choice that satisfies an axiom. Then, under a conjugate beta prior for θ, we may directly calculate a Bayes factor to test the axiom (H_0) according to H_0: θ > c versus H_1: θ < c, for some large c.
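The Bayes factor in this last point admits a closed-form sketch when the beta prior has integer parameters (so the beta c.d.f. reduces to a binomial sum). Here we take the Bayes factor as the posterior odds of H_0 divided by its prior odds, which is one standard construction; the function names are ours, and this is an illustration rather than Karabatsos's (2005) exact implementation:

```python
import math

def beta_cdf(x, a, b):
    # Regularized incomplete beta Be(x | a, b) for integer a, b >= 1,
    # via the binomial-sum identity.
    m = a + b - 1
    return sum(math.comb(m, j) * x**j * (1.0 - x)**(m - j) for j in range(a, m + 1))

def bayes_factor_axiom(n, N, c, a=1, b=1):
    # BF_01 for H0: theta > c versus H1: theta < c, where theta is the
    # probability of an axiom-consistent choice, observed n times in N
    # trials, under a conjugate Beta(a, b) prior: the posterior odds of
    # H0 divided by its prior odds. Clamp guards against floating-point
    # drift just outside [0, 1].
    post_p = min(1.0, max(0.0, 1.0 - beta_cdf(c, a + n, b + N - n)))
    prior_p = 1.0 - beta_cdf(c, a, b)
    return (post_p / (1.0 - post_p)) / (prior_p / (1.0 - prior_p))
```

For example, under a uniform prior with c = .9, observing 99 axiom-consistent choices in 100 trials yields strong evidence for H_0, while 10 in 100 yields strong evidence against it.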

38 Extensions of Axiom Testing Model (1)
Allow for random orderings for the cancellation axioms. Consider the joint posterior distribution
π(θ, ϑ, β | A_{ϑ,β}, Y, N, n) = π(ϑ, β | A_{ϑ,β}, Y) π(θ | N, n, A_{ϑ,β}),
given the Rasch model Pr(Y_pj = 1 | ϑ_p, β_j) = exp(ϑ_p − β_j) / [1 + exp(ϑ_p − β_j)], for data Y = (y_pj)_{N×J}.
Posterior distribution:
π(ϑ, β | Y) = Π_{p=1}^{N} Π_{j=1}^{J} [ exp(ϑ_p − β_j)^{y_pj} / (1 + exp(ϑ_p − β_j)) ] n(ϑ, β | 0, I) / ∫ Π_{p=1}^{N} Π_{j=1}^{J} [ exp(ϑ_p − β_j)^{y_pj} / (1 + exp(ϑ_p − β_j)) ] n(ϑ, β | 0, I) d(ϑ, β).
As before, π(θ | N, n, A_{ϑ,β}) ∝ Π_{i=0}^{I} Π_{j=1}^{J} be(θ_ij | a_ij, b_ij) 1(θ ∈ A_{ϑ,β}).
Here, A_{ϑ,β} is the random linear rank ordering that the matrix ( Pr(Y_pj = 1 | ϑ, β) )_{N×J} induces on θ = (θ_ij)_{(I+1)×J}. This ordering automatically satisfies all cancellation axioms.

39 Extensions of Axiom Testing Model (1)
The joint posterior distribution π(θ, ϑ, β | A_{ϑ,β}, Y, N, n) can then be estimated by the usual MCMC methods. At each stage of the MCMC chain {(θ^(s), ϑ^(s), β^(s), A_{ϑ,β}^(s))}_{s=1:S}, the Gibbs sampler (inverse c.d.f.) method would be used to provide a Gibbs sampling update of θ^(s), based on the updated ordering A_{ϑ,β}^(s).
The Bayesian axiom tests then proceed as before, but are now based on marginalizing these tests over the posterior distribution of A_{ϑ,β}.

40 Extensions of Axiom Testing Model (2)
Extend the independent (truncated) beta priors for the θ_ij's, namely
θ ~ Π_i Π_j be(θ_ij | a, b) 1(θ ∈ A),
to a prior defined by a discrete mixture of beta distributions:
θ = (θ_ij)_{(I+1)×J} ~ Π_i Π_j ∫ be(θ_ij | a, b) dG(a, b) 1(θ ∈ A), G ~ DP(α, G_0),
where E[G(a, b)] = G_0(a, b) := N_2(log(a), log(b) | 0, V), and
Var[G(a, b)] = G_0(a, b) [1 − G_0(a, b)] / (α + 1).
Any smooth distribution defined on (0,1) can be approximated arbitrarily well by a suitable mixture of beta distributions. Such a prior would define a more flexible Bayesian axiom testing model, based on a richer class of prior distributions.

41 Other Work / Collaborations
Bayesian nonparametric inference of distribution functions under the stochastic ordering F_1 < F_2 < ... < F_K (Karabatsos & Walker, 2007, SPL).
o Considered Bernstein polynomial priors and Polya tree priors for the F's. In each case, posterior inference is based on order-constrained beta posterior distributions (as in Karabatsos, 2001).
Bayesian nonparametric test score equating model, using a novel dependent Bernstein-Dirichlet polynomial prior for the test score distribution functions (F_X, F_Y) used for equipercentile equating (Karabatsos & Walker, 2009, Psychometrika).
Bayesian inference for test theory without an answer key (Karabatsos & Batchelder, 2003, Psychometrika).
Comparison of 36 person-fit statistics (Karabatsos, 2003, AME).

42 Other Work / Collaborations
Karabatsos, G., & Walker, S.G. (2012). A Bayesian nonparametric causal model. Journal of Statistical Planning & Inference.
o DP mixture of propensity score models, for causal inference in non-randomized studies.
Karabatsos, G., & Walker, S.G. (2012). Bayesian nonparametric mixed random utility models. Computational Statistics & Data Analysis, 56.
o In terms of an IRT model, provides a DP infinite-mixture of nominal response models, with person and item parameters subject to the infinite mixture.
Fujimoto, K., & Karabatsos, G. (2014). Dependent Dirichlet Process Rating Model (DDP-RM). Applied Psychological Measurement, 38.
o Model allows for clustering of ordinal category thresholds.
o Ken Fujimoto: former Ph.D. student, now faculty at Loyola University Chicago.

43 Other Work / Collaborations
Karabatsos, G., & Walker, S.G. (2012). Adaptive-modal Bayesian nonparametric regression (EJS).
o The IRT version of this model, mentioned in this talk, is to appear in the Handbook of Item Response Theory (2015).
o Model extended to meta-analysis: Karabatsos, G., Walker, S.G., & Talbott, E. (2014). A Bayesian nonparametric regression model for meta-analysis. Research Synthesis Methods.
o Model extended for causal inference in non-randomized, regression discontinuity designs (Karabatsos & Walker, 2015; to appear in Müller & R. Mitra (Eds.), Nonparametric Bayesian Methods in Biostatistics and Bioinformatics).


More information

Bayesian Multivariate Logistic Regression

Bayesian Multivariate Logistic Regression Bayesian Multivariate Logistic Regression Sean M. O Brien and David B. Dunson Biostatistics Branch National Institute of Environmental Health Sciences Research Triangle Park, NC 1 Goals Brief review of

More information

BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA

BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA Intro: Course Outline and Brief Intro to Marina Vannucci Rice University, USA PASI-CIMAT 04/28-30/2010 Marina Vannucci

More information

A Marginal Maximum Likelihood Procedure for an IRT Model with Single-Peaked Response Functions

A Marginal Maximum Likelihood Procedure for an IRT Model with Single-Peaked Response Functions A Marginal Maximum Likelihood Procedure for an IRT Model with Single-Peaked Response Functions Cees A.W. Glas Oksana B. Korobko University of Twente, the Netherlands OMD Progress Report 07-01. Cees A.W.

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee and Andrew O. Finley 2 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department

More information

Nonparametric Bayesian Methods (Gaussian Processes)

Nonparametric Bayesian Methods (Gaussian Processes) [70240413 Statistical Machine Learning, Spring, 2015] Nonparametric Bayesian Methods (Gaussian Processes) Jun Zhu dcszj@mail.tsinghua.edu.cn http://bigml.cs.tsinghua.edu.cn/~jun State Key Lab of Intelligent

More information

BAYESIAN DECISION THEORY

BAYESIAN DECISION THEORY Last updated: September 17, 2012 BAYESIAN DECISION THEORY Problems 2 The following problems from the textbook are relevant: 2.1 2.9, 2.11, 2.17 For this week, please at least solve Problem 2.3. We will

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Seminar über Statistik FS2008: Model Selection

Seminar über Statistik FS2008: Model Selection Seminar über Statistik FS2008: Model Selection Alessia Fenaroli, Ghazale Jazayeri Monday, April 2, 2008 Introduction Model Choice deals with the comparison of models and the selection of a model. It can

More information

Contents. Part I: Fundamentals of Bayesian Inference 1

Contents. Part I: Fundamentals of Bayesian Inference 1 Contents Preface xiii Part I: Fundamentals of Bayesian Inference 1 1 Probability and inference 3 1.1 The three steps of Bayesian data analysis 3 1.2 General notation for statistical inference 4 1.3 Bayesian

More information

Item Parameter Calibration of LSAT Items Using MCMC Approximation of Bayes Posterior Distributions

Item Parameter Calibration of LSAT Items Using MCMC Approximation of Bayes Posterior Distributions R U T C O R R E S E A R C H R E P O R T Item Parameter Calibration of LSAT Items Using MCMC Approximation of Bayes Posterior Distributions Douglas H. Jones a Mikhail Nediak b RRR 7-2, February, 2! " ##$%#&

More information

Item Response Theory (IRT) Analysis of Item Sets

Item Response Theory (IRT) Analysis of Item Sets University of Connecticut DigitalCommons@UConn NERA Conference Proceedings 2011 Northeastern Educational Research Association (NERA) Annual Conference Fall 10-21-2011 Item Response Theory (IRT) Analysis

More information

Probabilistic classification CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2016

Probabilistic classification CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2016 Probabilistic classification CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2016 Topics Probabilistic approach Bayes decision theory Generative models Gaussian Bayes classifier

More information

Contents. 3 Evaluating Manifest Monotonicity Using Bayes Factors Introduction... 44

Contents. 3 Evaluating Manifest Monotonicity Using Bayes Factors Introduction... 44 Contents 1 Introduction 4 1.1 Measuring Latent Attributes................. 4 1.2 Assumptions in Item Response Theory............ 6 1.2.1 Local Independence.................. 6 1.2.2 Unidimensionality...................

More information

Probabilistic modeling. The slides are closely adapted from Subhransu Maji s slides

Probabilistic modeling. The slides are closely adapted from Subhransu Maji s slides Probabilistic modeling The slides are closely adapted from Subhransu Maji s slides Overview So far the models and algorithms you have learned about are relatively disconnected Probabilistic modeling framework

More information

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis Summarizing a posterior Given the data and prior the posterior is determined Summarizing the posterior gives parameter estimates, intervals, and hypothesis tests Most of these computations are integrals

More information

Nonparametric Bayesian Methods - Lecture I

Nonparametric Bayesian Methods - Lecture I Nonparametric Bayesian Methods - Lecture I Harry van Zanten Korteweg-de Vries Institute for Mathematics CRiSM Masterclass, April 4-6, 2016 Overview of the lectures I Intro to nonparametric Bayesian statistics

More information

Bayesian nonparametric predictive approaches for causal inference: Regression Discontinuity Methods

Bayesian nonparametric predictive approaches for causal inference: Regression Discontinuity Methods Bayesian nonparametric predictive approaches for causal inference: Regression Discontinuity Methods George Karabatsos University of Illinois-Chicago ERCIM Conference, 14-16 December, 2013 Senate House,

More information

Prerequisite: STATS 7 or STATS 8 or AP90 or (STATS 120A and STATS 120B and STATS 120C). AP90 with a minimum score of 3

Prerequisite: STATS 7 or STATS 8 or AP90 or (STATS 120A and STATS 120B and STATS 120C). AP90 with a minimum score of 3 University of California, Irvine 2017-2018 1 Statistics (STATS) Courses STATS 5. Seminar in Data Science. 1 Unit. An introduction to the field of Data Science; intended for entering freshman and transfers.

More information

Bayesian Semiparametric GARCH Models

Bayesian Semiparametric GARCH Models Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods

More information

An Equivalency Test for Model Fit. Craig S. Wells. University of Massachusetts Amherst. James. A. Wollack. Ronald C. Serlin

An Equivalency Test for Model Fit. Craig S. Wells. University of Massachusetts Amherst. James. A. Wollack. Ronald C. Serlin Equivalency Test for Model Fit 1 Running head: EQUIVALENCY TEST FOR MODEL FIT An Equivalency Test for Model Fit Craig S. Wells University of Massachusetts Amherst James. A. Wollack Ronald C. Serlin University

More information

Bayesian Semiparametric GARCH Models

Bayesian Semiparametric GARCH Models Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods

More information

Metropolis-Hastings Algorithm

Metropolis-Hastings Algorithm Strength of the Gibbs sampler Metropolis-Hastings Algorithm Easy algorithm to think about. Exploits the factorization properties of the joint probability distribution. No difficult choices to be made to

More information

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence Bayesian Inference in GLMs Frequentists typically base inferences on MLEs, asymptotic confidence limits, and log-likelihood ratio tests Bayesians base inferences on the posterior distribution of the unknowns

More information

Bayesian Statistics. Debdeep Pati Florida State University. April 3, 2017

Bayesian Statistics. Debdeep Pati Florida State University. April 3, 2017 Bayesian Statistics Debdeep Pati Florida State University April 3, 2017 Finite mixture model The finite mixture of normals can be equivalently expressed as y i N(µ Si ; τ 1 S i ), S i k π h δ h h=1 δ h

More information

Bayesian Inference on Joint Mixture Models for Survival-Longitudinal Data with Multiple Features. Yangxin Huang

Bayesian Inference on Joint Mixture Models for Survival-Longitudinal Data with Multiple Features. Yangxin Huang Bayesian Inference on Joint Mixture Models for Survival-Longitudinal Data with Multiple Features Yangxin Huang Department of Epidemiology and Biostatistics, COPH, USF, Tampa, FL yhuang@health.usf.edu January

More information

Quantifying the Price of Uncertainty in Bayesian Models

Quantifying the Price of Uncertainty in Bayesian Models Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published version when available. Title Quantifying the Price of Uncertainty in Bayesian Models Author(s)

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee 1 and Andrew O. Finley 2 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department

More information

ANALYTIC COMPARISON. Pearl and Rubin CAUSAL FRAMEWORKS

ANALYTIC COMPARISON. Pearl and Rubin CAUSAL FRAMEWORKS ANALYTIC COMPARISON of Pearl and Rubin CAUSAL FRAMEWORKS Content Page Part I. General Considerations Chapter 1. What is the question? 16 Introduction 16 1. Randomization 17 1.1 An Example of Randomization

More information

STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01

STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01 STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01 Nasser Sadeghkhani a.sadeghkhani@queensu.ca There are two main schools to statistical inference: 1-frequentist

More information

Machine Learning Overview

Machine Learning Overview Machine Learning Overview Sargur N. Srihari University at Buffalo, State University of New York USA 1 Outline 1. What is Machine Learning (ML)? 2. Types of Information Processing Problems Solved 1. Regression

More information

Introduction to Probabilistic Machine Learning

Introduction to Probabilistic Machine Learning Introduction to Probabilistic Machine Learning Piyush Rai Dept. of CSE, IIT Kanpur (Mini-course 1) Nov 03, 2015 Piyush Rai (IIT Kanpur) Introduction to Probabilistic Machine Learning 1 Machine Learning

More information

Gibbs Sampling in Endogenous Variables Models

Gibbs Sampling in Endogenous Variables Models Gibbs Sampling in Endogenous Variables Models Econ 690 Purdue University Outline 1 Motivation 2 Identification Issues 3 Posterior Simulation #1 4 Posterior Simulation #2 Motivation In this lecture we take

More information

ECE521 week 3: 23/26 January 2017

ECE521 week 3: 23/26 January 2017 ECE521 week 3: 23/26 January 2017 Outline Probabilistic interpretation of linear regression - Maximum likelihood estimation (MLE) - Maximum a posteriori (MAP) estimation Bias-variance trade-off Linear

More information

Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions

Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite

More information

2 Bayesian Hierarchical Response Modeling

2 Bayesian Hierarchical Response Modeling 2 Bayesian Hierarchical Response Modeling In the first chapter, an introduction to Bayesian item response modeling was given. The Bayesian methodology requires careful specification of priors since item

More information

FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE

FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE Donald A. Pierce Oregon State Univ (Emeritus), RERF Hiroshima (Retired), Oregon Health Sciences Univ (Adjunct) Ruggero Bellio Univ of Udine For Perugia

More information

39th Annual ISMS Marketing Science Conference University of Southern California, June 8, 2017

39th Annual ISMS Marketing Science Conference University of Southern California, June 8, 2017 Permuted and IROM Department, McCombs School of Business The University of Texas at Austin 39th Annual ISMS Marketing Science Conference University of Southern California, June 8, 2017 1 / 36 Joint work

More information

The Rasch Model as Additive Conjoint Measurement

The Rasch Model as Additive Conjoint Measurement The Rasch Model as Additive Conjoint Measurement Richard Perline University of Chicago Benjamin D. Wright University of Chicago Howard Wainer Bureau of Social Science Research The object of this paper

More information

Bayesian Linear Regression

Bayesian Linear Regression Bayesian Linear Regression Sudipto Banerjee 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. September 15, 2010 1 Linear regression models: a Bayesian perspective

More information

Center for Advanced Studies in Measurement and Assessment. CASMA Research Report

Center for Advanced Studies in Measurement and Assessment. CASMA Research Report Center for Advanced Studies in Measurement and Assessment CASMA Research Report Number 24 in Relation to Measurement Error for Mixed Format Tests Jae-Chun Ban Won-Chan Lee February 2007 The authors are

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

7. Estimation and hypothesis testing. Objective. Recommended reading

7. Estimation and hypothesis testing. Objective. Recommended reading 7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing

More information

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo Group Prof. Daniel Cremers 10a. Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative is Markov Chain

More information

Bayesian density regression for count data

Bayesian density regression for count data Bayesian density regression for count data Charalampos Chanialidis 1, Ludger Evers 2, and Tereza Neocleous 3 arxiv:1406.1882v1 [stat.me] 7 Jun 2014 1 University of Glasgow, c.chanialidis1@research.gla.ac.uk

More information

Stochastic Processes, Kernel Regression, Infinite Mixture Models

Stochastic Processes, Kernel Regression, Infinite Mixture Models Stochastic Processes, Kernel Regression, Infinite Mixture Models Gabriel Huang (TA for Simon Lacoste-Julien) IFT 6269 : Probabilistic Graphical Models - Fall 2018 Stochastic Process = Random Function 2

More information

BAYESIAN MODEL CHECKING STRATEGIES FOR DICHOTOMOUS ITEM RESPONSE THEORY MODELS. Sherwin G. Toribio. A Dissertation

BAYESIAN MODEL CHECKING STRATEGIES FOR DICHOTOMOUS ITEM RESPONSE THEORY MODELS. Sherwin G. Toribio. A Dissertation BAYESIAN MODEL CHECKING STRATEGIES FOR DICHOTOMOUS ITEM RESPONSE THEORY MODELS Sherwin G. Toribio A Dissertation Submitted to the Graduate College of Bowling Green State University in partial fulfillment

More information

Bayesian Regression Linear and Logistic Regression

Bayesian Regression Linear and Logistic Regression When we want more than point estimates Bayesian Regression Linear and Logistic Regression Nicole Beckage Ordinary Least Squares Regression and Lasso Regression return only point estimates But what if we

More information

Stat 451 Lecture Notes Markov Chain Monte Carlo. Ryan Martin UIC

Stat 451 Lecture Notes Markov Chain Monte Carlo. Ryan Martin UIC Stat 451 Lecture Notes 07 12 Markov Chain Monte Carlo Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapters 8 9 in Givens & Hoeting, Chapters 25 27 in Lange 2 Updated: April 4, 2016 1 / 42 Outline

More information

Curve Fitting Re-visited, Bishop1.2.5

Curve Fitting Re-visited, Bishop1.2.5 Curve Fitting Re-visited, Bishop1.2.5 Maximum Likelihood Bishop 1.2.5 Model Likelihood differentiation p(t x, w, β) = Maximum Likelihood N N ( t n y(x n, w), β 1). (1.61) n=1 As we did in the case of the

More information

Machine Learning Lecture 5

Machine Learning Lecture 5 Machine Learning Lecture 5 Linear Discriminant Functions 26.10.2017 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Course Outline Fundamentals Bayes Decision Theory

More information

Part 6: Multivariate Normal and Linear Models

Part 6: Multivariate Normal and Linear Models Part 6: Multivariate Normal and Linear Models 1 Multiple measurements Up until now all of our statistical models have been univariate models models for a single measurement on each member of a sample of

More information

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling 10-708: Probabilistic Graphical Models 10-708, Spring 2014 27 : Distributed Monte Carlo Markov Chain Lecturer: Eric P. Xing Scribes: Pengtao Xie, Khoa Luu In this scribe, we are going to review the Parallel

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

Bayesian Analysis of Risk for Data Mining Based on Empirical Likelihood

Bayesian Analysis of Risk for Data Mining Based on Empirical Likelihood 1 / 29 Bayesian Analysis of Risk for Data Mining Based on Empirical Likelihood Yuan Liao Wenxin Jiang Northwestern University Presented at: Department of Statistics and Biostatistics Rutgers University

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo Markov Chain Monte Carlo Recall: To compute the expectation E ( h(y ) ) we use the approximation E(h(Y )) 1 n n h(y ) t=1 with Y (1),..., Y (n) h(y). Thus our aim is to sample Y (1),..., Y (n) from f(y).

More information

Bayesian inference for sample surveys. Roderick Little Module 2: Bayesian models for simple random samples

Bayesian inference for sample surveys. Roderick Little Module 2: Bayesian models for simple random samples Bayesian inference for sample surveys Roderick Little Module : Bayesian models for simple random samples Superpopulation Modeling: Estimating parameters Various principles: least squares, method of moments,

More information

Bayesian Nonparametric Regression for Diabetes Deaths

Bayesian Nonparametric Regression for Diabetes Deaths Bayesian Nonparametric Regression for Diabetes Deaths Brian M. Hartman PhD Student, 2010 Texas A&M University College Station, TX, USA David B. Dahl Assistant Professor Texas A&M University College Station,

More information