Measuring nuisance parameter effects in Bayesian inference


1 Measuring nuisance parameter effects in Bayesian inference. Alastair Young, Imperial College London. WHOA-PSI.

2 Acknowledgements: Tom DiCiccio, Cornell University; Daniel Garcia Rasines, Imperial College London; Todd Kuffner, WUSTL.

3 Background. Inferences drawn from Bayesian data analysis depend on the assumed prior distribution. There is a substantial Bayesian robustness literature quantifying the degree to which posterior quantities depend on the prior. This is especially important with hierarchical models, where interpretability deeper in the hierarchy becomes challenging.

6 Approaches. Global: all possible inferences arising from a class of priors are considered, in the hope that the resulting class of inferences is small, so that inference is robust to the prior. This approach is limited by the difficulty of computing the range of a posterior quantity as the prior varies. Local: based on infinitesimal perturbations to the prior, i.e. differentiation of posterior quantities with respect to the prior.

8 Typical inferential problem. Let $Y = \{Y_1, \ldots, Y_n\}$ be a random sample from an underlying distribution $F_\theta$, indexed by the $(d+1)$-dimensional parameter $\theta = (\theta_1, \ldots, \theta_{d+1}) = (\psi, \chi)$. Here $\psi = \theta_1$ is a scalar parameter of interest and $\chi$ is a $d$-dimensional nuisance parameter.

9 Key question. How sensitive is the (marginal) posterior inference on $\psi$ to the presence in the model of the nuisance parameter $\chi$ and to the assumed form of the prior distribution?

10 Purpose. Motivated by corresponding frequentist ideas, we provide machinery, based on Bayesian asymptotics, to measure the extent of the influence that nuisance parameters have on the inference. We offer a direct, practical interpretation of the measures of influence in terms of posterior probabilities (in full and reduced models).

12 Key (frequentist) statistic. For testing $H_0: \psi = \psi_0$ against a one-sided alternative, the signed root likelihood ratio statistic is
$$ R \equiv R(\psi_0) = \mathrm{sgn}(\hat\psi - \psi_0)\,\big[\,2\{\,l(\hat\theta) - l(\psi_0, \hat\chi_0)\,\}\,\big]^{1/2}, $$
where $l(\theta) = l(\psi, \chi)$ is the log-likelihood function, $\hat\theta = (\hat\psi, \hat\chi)$ is the MLE of $\theta$, and $\hat\chi_0$ is the constrained MLE of $\chi$ given the value $\psi = \psi_0$ of the interest parameter $\psi$.
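As an illustration (not from the slides), the signed root is easy to compute whenever the constrained MLE has closed form; a sketch for the hypothetical toy model $N(\psi, \chi)$, with interest parameter the mean $\psi$ and nuisance the variance $\chi$:

```python
import numpy as np

def signed_root(y, psi0):
    """Signed root likelihood ratio R(psi0) for the N(psi, chi) model:
    interest psi = mean, nuisance chi = variance (hypothetical example)."""
    n = len(y)
    psi_hat = y.mean()
    chi_hat = ((y - psi_hat) ** 2).mean()  # MLE of chi at the full MLE
    chi_hat0 = ((y - psi0) ** 2).mean()    # constrained MLE of chi given psi = psi0
    # l(psi, chi) = -(n/2) log chi - sum((y - psi)^2)/(2 chi); at either
    # maximum the second term equals n/2, so the difference simplifies:
    lr = n * np.log(chi_hat0 / chi_hat)    # 2 { l(theta_hat) - l(psi0, chi_hat0) }
    return np.sign(psi_hat - psi0) * np.sqrt(lr)
```

Since $\hat\chi_0 = \hat\chi + (\hat\psi - \psi_0)^2$ here, the quantity under the square root is always non-negative, and $R(\hat\psi) = 0$.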

13 Bayesian posterior approximation: full model. For a continuous prior density $\pi(\theta) = \pi(\psi, \chi)$, the posterior tail probability given the data $Y = (Y_1, \ldots, Y_n)$ is
$$ P(\psi \ge \psi_0 \mid Y) = \Phi(R^*) + O(n^{-3/2}), $$
where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution,
$$ R^* \equiv R^*(\psi_0) = R + R^{-1} \log(T/R), \qquad T \equiv T(\psi_0) = l_\psi(\psi_0, \hat\chi_0)\,\frac{|l_{\chi\chi}(\psi_0, \hat\chi_0)|^{1/2}}{|l_{\theta\theta}(\hat\theta)|^{1/2}}\,\frac{\pi(\hat\theta)}{\pi(\psi_0, \hat\chi_0)}, $$
with $|\cdot|$ denoting the absolute value of the determinant, and $\hat\psi - \psi_0$ of order $O(n^{-1/2})$.

14 A factorization. We may factorize $T = T_{INF}\,T_{NP}$, with
$$ T_{INF} \equiv T_{INF}(\psi_0) = \frac{\pi_\psi(\hat\psi)}{\pi_\psi(\psi_0)}\; l_\psi(\psi_0, \hat\chi_0)\,\{-l_{\psi\psi}(\hat\theta)\}^{-1/2}, $$
$$ T_{NP} \equiv T_{NP}(\psi_0) = \frac{|l_{\chi\chi}(\psi_0, \hat\chi_0)|^{1/2}}{|l_{\chi\chi}(\hat\theta)|^{1/2}}\; \frac{\pi_{\chi|\psi}(\hat\theta)}{\pi_{\chi|\psi}(\psi_0, \hat\chi_0)}, $$
where $\pi_\psi(\psi)$ is the marginal prior density for $\psi$ and $\pi_{\chi|\psi}(\theta)$ is the conditional prior density for $\chi$ given $\psi$.
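Continuing the toy $N(\psi, \chi)$ illustration (a sketch under assumed priors, not the slides' example): with independent priors $\psi \sim N(0, 10^2)$ and $\chi \sim \mathrm{Exp}(1)$, every ingredient of $T_{INF}$ and $T_{NP}$ has closed form, since $l_{\chi\chi}(\psi_0, \hat\chi_0) = -n/(2\hat\chi_0^2)$ and $l_{\psi\chi}(\hat\theta) = 0$ in this model:

```python
import numpy as np

def npdf(x, mu, sd):
    # normal density, written out to avoid external dependencies
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def decomposition(y, psi0):
    """Return R, Z_NP, Z_INF for the N(psi, chi) model with the
    hypothetical priors psi ~ N(0, 100), chi ~ Exp(1)."""
    n = len(y)
    psi_hat = y.mean()
    chi_hat = ((y - psi_hat) ** 2).mean()
    chi_hat0 = ((y - psi0) ** 2).mean()            # constrained MLE of chi
    R = np.sign(psi_hat - psi0) * np.sqrt(n * np.log(chi_hat0 / chi_hat))
    l_psi = n * (psi_hat - psi0) / chi_hat0        # score for psi at (psi0, chi_hat0)
    # T_INF: marginal prior ratio x score x {-l_psipsi(theta_hat)}^{-1/2}
    T_inf = (npdf(psi_hat, 0.0, 10.0) / npdf(psi0, 0.0, 10.0)
             * l_psi * (n / chi_hat) ** -0.5)
    # T_NP: curvature ratio {|l_cc(psi0, chi_hat0)|/|l_cc(theta_hat)|}^{1/2},
    # which equals chi_hat/chi_hat0 here, times the conditional prior ratio
    T_np = (chi_hat / chi_hat0) * np.exp(chi_hat0 - chi_hat)
    Z_np = np.log(T_np) / R
    Z_inf = np.log(T_inf / R) / R
    return R, Z_np, Z_inf
```

$R^* = R + Z_{NP} + Z_{INF}$ then reproduces the decomposition on the next slide; for $\psi_0$ within $O(n^{-1/2})$ of $\hat\psi$ both correction terms are small.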

15 Properties. For Bayesian inference,
$$ R^* = R + Z_{NP} + Z_{INF}, $$
where $Z_{NP} = R^{-1}\log T_{NP}$ and $Z_{INF} = R^{-1}\log(T_{INF}/R)$. This decomposition has two important properties. First, both $T_{INF}$ and $T_{NP}$ are invariant under interest-respecting transformations, hence $Z_{NP}$ and $Z_{INF}$ also have this property. Second, $T_{NP}$ does not appear in the tail probability formula when nuisance parameters are absent, so $Z_{NP}$ also vanishes in this case.

19 Relationship with posterior mean. The posterior mean of $R$ satisfies
$$ E\{R(\psi) \mid Y\} = -\{g_{NP} + g_{INF}\} + O(n^{-1}), $$
where
$$ g_{NP} = \lim_{\psi_0 \to \hat\psi} Z_{NP}(\psi_0), \qquad g_{INF} = \lim_{\psi_0 \to \hat\psi} Z_{INF}(\psi_0). $$
Standard calculations show that $Z_{NP}(\psi_0) = g_{NP} + O(n^{-1})$ and $Z_{INF}(\psi_0) = g_{INF} + O(n^{-1})$, for $\psi_0$ in an $O(n^{-1/2})$ neighbourhood of $\hat\psi$.

20 Key observation. $Z_{INF}$ agrees, to error of order $O(n^{-1})$, with the value it would take in the reduced problem in which the nuisance parameter $\chi$ is set equal to its maximum likelihood estimate $\hat\chi$ and Bayesian inference is considered only for the scalar parameter $\psi$, based on the prior density $\pi_\psi(\psi)$. It is apparent that $Z_{INF}$, at least to error of order $O(n^{-1})$, does not account for the presence of nuisance parameters. Nuisance parameters are accounted for by $Z_{NP}$.

23 Proposal. Candidate measures to assess the extent of the influence that the nuisance parameters have on the Bayesian inference are the ratios $g_{NP}/g_{INF}$ and $Z_{NP}/Z_{INF}$. We offer a practical interpretation of this ratio in terms of posterior probabilities, with asymptotics guiding the relevant finite-sample quantity to examine.

26 Details. Let the reference value $\hat\psi_{1-\alpha}$ be the asymptotic $1-\alpha$ posterior percentage point of $\psi$, given by $R(\hat\psi_{1-\alpha}) = z_\alpha$, where $z_\alpha$ is the $\alpha$-quantile of the standard normal distribution.

27 The relative difference in the percentile bias of the asymptotic percentage point $\hat\psi_{1-\alpha}$ between the full and the reduced problems is
$$ \frac{\{P(\psi \le \hat\psi_{1-\alpha} \mid Y) - (1-\alpha)\} - \{P_{\mathrm{Reduced}}(\psi \le \hat\psi_{1-\alpha} \mid Y) - (1-\alpha)\}}{P_{\mathrm{Reduced}}(\psi \le \hat\psi_{1-\alpha} \mid Y) - (1-\alpha)} = \frac{\exp\{z_{1-\alpha}\, Z_{NP}(\hat\psi_{1-\alpha})\} - 1}{1 - \exp\{-z_{1-\alpha}\, Z_{INF}(\hat\psi_{1-\alpha})\}}, $$
to error of order $O(n^{-1})$. In examples, such as high-dimensional linear regression, this approximation is found to be highly accurate.

29 Simplifications. To error of order $O(n^{-1})$ we have
$$ \frac{\exp\{z_{1-\alpha}\, Z_{NP}(\hat\psi_{1-\alpha})\} - 1}{1 - \exp\{-z_{1-\alpha}\, Z_{INF}(\hat\psi_{1-\alpha})\}} = \frac{Z_{NP}(\hat\psi_{1-\alpha})}{Z_{INF}(\hat\psi_{1-\alpha})} = \frac{g_{NP}}{g_{INF}}. $$
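The exponential form and the simple ratio do agree closely for $Z$'s of the typical $O(n^{-1/2})$ magnitude; a quick numerical check (the $Z$ values below are arbitrary illustrations, not from the slides):

```python
import math

def percentile_bias_ratio(z, Z_np, Z_inf):
    """The exact exp-form of the relative difference in percentile bias."""
    return (math.exp(z * Z_np) - 1.0) / (1.0 - math.exp(-z * Z_inf))

z = 1.6448536269514722                       # z_{0.95}
print(percentile_bias_ratio(z, 0.01, 0.02))  # close to Z_np / Z_inf = 0.5
```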

30 Hierarchical specification. The prior distribution of the parameter $(\psi, \chi)$ that determines the distribution of $Y$ may itself depend on a hyperparameter $\beta$, which is given a prior distribution. The entire prior density is of the form $\pi(\psi, \chi, \beta) = \pi_{\psi,\chi|\beta}(\psi, \chi, \beta)\,\pi_\beta(\beta)$, while the log-likelihood function depends only on $(\psi, \chi)$. A factorization $T = T_{INF}\,T_{NP}$ can be obtained as before by first integrating the complete prior density $\pi(\psi, \chi, \beta)$ with respect to $\beta$ to obtain the marginal density $\pi_{\psi,\chi}(\psi, \chi) = \int \pi(\psi, \chi, \beta)\, d\beta$.

33 Example: Poisson distribution. $Y_1, \ldots, Y_{d+1}$ are independent, where $Y_i$ has the Poisson distribution with mean $\lambda_i t_i$ and $t_i$ is known, $i = 1, \ldots, d+1$. Thus the model parameter is $\lambda = (\lambda_1, \ldots, \lambda_{d+1})$; suppose the scalar parameter of interest is a component of $\lambda$, say $\psi = \lambda_1$, so the nuisance parameter is $\chi = (\lambda_2, \ldots, \lambda_{d+1})$.

34 Priors. The prior distribution involves a scalar hyperparameter $\beta$: the model parameters $\lambda_1, \ldots, \lambda_{d+1}$ are assumed to be a sample from the gamma distribution with shape parameter $\omega_0$ and scale parameter $\beta$, and $\beta$ is assumed to have the inverse gamma distribution with shape parameter $\gamma_0$ and scale parameter $\delta_0$; thus,
$$ \pi_{\lambda_1,\ldots,\lambda_{d+1}|\beta}(\lambda_1, \ldots, \lambda_{d+1}, \beta) = \{\Gamma(\omega_0)\,\beta^{\omega_0}\}^{-(d+1)} \Big(\prod_{i=1}^{d+1} \lambda_i\Big)^{\omega_0 - 1} \exp\Big(-\frac{1}{\beta}\sum_{i=1}^{d+1} \lambda_i\Big), $$
and
$$ \pi_\beta(\beta) = \frac{1}{\Gamma(\gamma_0)\,\delta_0^{\gamma_0}}\; \beta^{-(\gamma_0+1)} \exp\Big(-\frac{1}{\beta\,\delta_0}\Big). $$
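The marginal prior obtained by integrating out $\beta$ (as required for the factorization on the hierarchical slide) is available in closed form here, since the $\beta$-integral reduces to a Gamma integral with shape $a = (d+1)\omega_0 + \gamma_0$ and rate $c = \sum_i \lambda_i + 1/\delta_0$. A sketch with a brute-force quadrature cross-check (the $\lambda$ values in the test are arbitrary):

```python
import math
import numpy as np

def log_marginal_prior(lam, omega0, gamma0, delta0):
    """log pi(lambda) after integrating out beta, for lambda_i | beta iid
    Gamma(shape=omega0, scale=beta), beta ~ InvGamma(gamma0, scale=delta0)."""
    lam = np.asarray(lam, dtype=float)
    d1 = len(lam)                                  # d + 1 components
    a = d1 * omega0 + gamma0                       # Gamma-integral shape
    c = lam.sum() + 1.0 / delta0                   # Gamma-integral rate
    return ((omega0 - 1.0) * np.log(lam).sum()
            - d1 * math.lgamma(omega0) - math.lgamma(gamma0)
            - gamma0 * math.log(delta0)
            + math.lgamma(a) - a * math.log(c))

def marginal_prior_quadrature(lam, omega0, gamma0, delta0):
    """Sanity check: integrate the joint prior over a beta grid."""
    b = np.linspace(1e-4, 60.0, 200001)
    lam = np.asarray(lam, dtype=float)
    d1 = len(lam)
    dens = (np.exp(-lam.sum() / b) * np.prod(lam) ** (omega0 - 1.0)
            / (math.gamma(omega0) ** d1 * b ** (d1 * omega0))
            * b ** (-(gamma0 + 1.0)) * np.exp(-1.0 / (b * delta0))
            / (math.gamma(gamma0) * delta0 ** gamma0))
    # trapezoidal rule, written out for NumPy-version portability
    return float((0.5 * (dens[1:] + dens[:-1]) * np.diff(b)).sum())
```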

35 Data example. Here, the $\lambda_i$ are failure rates for pumps at a nuclear power plant. We have 10 observed Poisson variables, so $d = 9$. [Table of observed values $Y_i$ and exposure times $t_i$ not reproduced.] Fix the prior parameter values as $(\omega_0, \gamma_0, \delta_0) = (1.802, 0.01, 1)$, as suggested in previous analyses.

37 Values of $g_{NP}$, $g_{INF}$, and the ratio $g_{NP}/g_{INF}$ for different interest parameters $\lambda_1, \lambda_3, \lambda_5, \lambda_7, \lambda_9$. [Table of numerical values not reproduced.]

38 Dependence on prior. Investigate the susceptibility of the ratio $g_{NP}/g_{INF}$ to the choice of $(\omega_0, \gamma_0, \delta_0)$, at least locally, by considering the derivatives of the ratio with respect to the components. [Table of derivatives with respect to $\omega_0$, $\gamma_0$, $\delta_0$ at $(\omega_0, \gamma_0, \delta_0) = (1.802, 0.01, 1)$, for interest parameters $\lambda_1, \lambda_3, \lambda_5, \lambda_7, \lambda_9$, not reproduced.] The influence of the nuisance parameters is relatively unaffected by the choice of $\omega_0$, $\gamma_0$ and $\delta_0$ for $\lambda_1, \ldots, \lambda_6$, moderately affected for $\lambda_7$ and $\lambda_8$, and significantly affected for $\lambda_9$ and $\lambda_{10}$.
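The local sensitivities in the table are partial derivatives of the ratio with respect to the hyperparameters; with the ratio available only as a black-box function, they can be approximated by central differences (a generic sketch; `ratio_fn` is a hypothetical stand-in for $g_{NP}/g_{INF}$ as a function of $(\omega_0, \gamma_0, \delta_0)$):

```python
def central_difference(ratio_fn, theta, i, h=1e-5):
    """d ratio_fn / d theta_i at theta, by central finite differences
    (accurate to O(h^2) for smooth ratio_fn)."""
    up = list(theta); up[i] += h
    dn = list(theta); dn[i] -= h
    return (ratio_fn(up) - ratio_fn(dn)) / (2.0 * h)
```

With a known test function the derivative is recovered to within the $O(h^2)$ truncation error, which is how the entries of such a table can be produced.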

40 Comparisons, $\lambda_1$. [Plot: posterior tail probabilities for $\lambda_1$ against $\psi$; full posterior, reduced posterior, and asymptotic approximation.]

41 Comparisons, $\lambda_5$. [Plot: posterior tail probabilities for $\lambda_5$ against $\psi$; full posterior, reduced posterior, and asymptotic approximation.]

42 Comparisons, $\lambda_7$. [Plot: posterior tail probabilities for $\lambda_7$ against $\psi$; full posterior, reduced posterior, and asymptotic approximation.]

43 Varying $\delta$, interest parameter $\lambda_1$. [Plot: posterior tail probabilities for $\lambda_1$ for varying $\delta$, including $\delta = 1.0$.]

44 Varying $\delta$, interest parameter $\lambda_9$. [Plot: posterior tail probabilities for $\lambda_9$ for varying $\delta$, including $\delta = 1.0$.]

45 Summary/Conclusions. The decomposition of the Bayesian version of the adjusted signed root likelihood ratio statistic provides simple computational machinery for analysing the effects of prior assumptions on nuisance parameters on a marginal inference for an interest parameter. The measures have a direct interpretation in terms of posterior probabilities.

48 The measures provide a practically useful, readily computed means of assessing nuisance parameter effects in Bayesian analysis. Calibration: what constitutes a large ratio? This is best considered by comparing the measures for different priors. Calculation across a range of priors allows a straightforward approach to sensitivity analysis and evaluation of robustness, and permits identification of which components of the prior specification have a substantial effect on the posterior marginal inference of interest. This is especially valuable within a hierarchical framework.

51 Some relevant references. Bayesian asymptotics: DiCiccio & Martin (1993), JRSS B, 55; DiCiccio & Young (2010), Biometrika, 97; DiCiccio, Kuffner & Young (2012), Biometrika, 99. Frequentist asymptotics: DiCiccio, Kuffner & Young (2015), JSPI, 165; DiCiccio, Kuffner, Young & Zaretzki (2015), Statistica Sinica, 25; DiCiccio, Kuffner & Young (2017), JSPI, to appear.


More information

Statistical Models with Uncertain Error Parameters (G. Cowan, arxiv: )

Statistical Models with Uncertain Error Parameters (G. Cowan, arxiv: ) Statistical Models with Uncertain Error Parameters (G. Cowan, arxiv:1809.05778) Workshop on Advanced Statistics for Physics Discovery aspd.stat.unipd.it Department of Statistical Sciences, University of

More information

MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD. Copyright c 2012 (Iowa State University) Statistics / 30

MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD. Copyright c 2012 (Iowa State University) Statistics / 30 MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD Copyright c 2012 (Iowa State University) Statistics 511 1 / 30 INFORMATION CRITERIA Akaike s Information criterion is given by AIC = 2l(ˆθ) + 2k, where l(ˆθ)

More information

The Minimum Message Length Principle for Inductive Inference

The Minimum Message Length Principle for Inductive Inference The Principle for Inductive Inference Centre for Molecular, Environmental, Genetic & Analytic (MEGA) Epidemiology School of Population Health University of Melbourne University of Helsinki, August 25,

More information

Theory of Maximum Likelihood Estimation. Konstantin Kashin

Theory of Maximum Likelihood Estimation. Konstantin Kashin Gov 2001 Section 5: Theory of Maximum Likelihood Estimation Konstantin Kashin February 28, 2013 Outline Introduction Likelihood Examples of MLE Variance of MLE Asymptotic Properties What is Statistical

More information

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3 Hypothesis Testing CB: chapter 8; section 0.3 Hypothesis: statement about an unknown population parameter Examples: The average age of males in Sweden is 7. (statement about population mean) The lowest

More information

MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems

MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Principles of Statistical Inference Recap of statistical models Statistical inference (frequentist) Parametric vs. semiparametric

More information

Some Curiosities Arising in Objective Bayesian Analysis

Some Curiosities Arising in Objective Bayesian Analysis . Some Curiosities Arising in Objective Bayesian Analysis Jim Berger Duke University Statistical and Applied Mathematical Institute Yale University May 15, 2009 1 Three vignettes related to John s work

More information

simple if it completely specifies the density of x

simple if it completely specifies the density of x 3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Chapters 13-15 Stat 477 - Loss Models Chapters 13-15 (Stat 477) Parameter Estimation Brian Hartman - BYU 1 / 23 Methods for parameter estimation Methods for parameter estimation Methods

More information

Hypothesis testing: theory and methods

Hypothesis testing: theory and methods Statistical Methods Warsaw School of Economics November 3, 2017 Statistical hypothesis is the name of any conjecture about unknown parameters of a population distribution. The hypothesis should be verifiable

More information

Bayesian inference for multivariate extreme value distributions

Bayesian inference for multivariate extreme value distributions Bayesian inference for multivariate extreme value distributions Sebastian Engelke Clément Dombry, Marco Oesting Toronto, Fields Institute, May 4th, 2016 Main motivation For a parametric model Z F θ of

More information

Likelihood and Fairness in Multidimensional Item Response Theory

Likelihood and Fairness in Multidimensional Item Response Theory Likelihood and Fairness in Multidimensional Item Response Theory or What I Thought About On My Holidays Giles Hooker and Matthew Finkelman Cornell University, February 27, 2008 Item Response Theory Educational

More information

Integrated Objective Bayesian Estimation and Hypothesis Testing

Integrated Objective Bayesian Estimation and Hypothesis Testing Integrated Objective Bayesian Estimation and Hypothesis Testing José M. Bernardo Universitat de València, Spain jose.m.bernardo@uv.es 9th Valencia International Meeting on Bayesian Statistics Benidorm

More information

Improved Inference for Moving Average Disturbances in Nonlinear Regression Models

Improved Inference for Moving Average Disturbances in Nonlinear Regression Models Improved Inference for Moving Average Disturbances in Nonlinear Regression Models Pierre Nguimkeu Georgia State University November 22, 2013 Abstract This paper proposes an improved likelihood-based method

More information

Lecture 2: Statistical Decision Theory (Part I)

Lecture 2: Statistical Decision Theory (Part I) Lecture 2: Statistical Decision Theory (Part I) Hao Helen Zhang Hao Helen Zhang Lecture 2: Statistical Decision Theory (Part I) 1 / 35 Outline of This Note Part I: Statistics Decision Theory (from Statistical

More information

7. Estimation and hypothesis testing. Objective. Recommended reading

7. Estimation and hypothesis testing. Objective. Recommended reading 7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing

More information

The comparative studies on reliability for Rayleigh models

The comparative studies on reliability for Rayleigh models Journal of the Korean Data & Information Science Society 018, 9, 533 545 http://dx.doi.org/10.7465/jkdi.018.9..533 한국데이터정보과학회지 The comparative studies on reliability for Rayleigh models Ji Eun Oh 1 Joong

More information

Bayesian Inference. Chapter 4: Regression and Hierarchical Models

Bayesian Inference. Chapter 4: Regression and Hierarchical Models Bayesian Inference Chapter 4: Regression and Hierarchical Models Conchi Ausín and Mike Wiper Department of Statistics Universidad Carlos III de Madrid Advanced Statistics and Data Mining Summer School

More information

CONVERTING OBSERVED LIKELIHOOD FUNCTIONS TO TAIL PROBABILITIES. D.A.S. Fraser Mathematics Department York University North York, Ontario M3J 1P3

CONVERTING OBSERVED LIKELIHOOD FUNCTIONS TO TAIL PROBABILITIES. D.A.S. Fraser Mathematics Department York University North York, Ontario M3J 1P3 CONVERTING OBSERVED LIKELIHOOD FUNCTIONS TO TAIL PROBABILITIES D.A.S. Fraser Mathematics Department York University North York, Ontario M3J 1P3 N. Reid Department of Statistics University of Toronto Toronto,

More information

Bayesian Inference in Astronomy & Astrophysics A Short Course

Bayesian Inference in Astronomy & Astrophysics A Short Course Bayesian Inference in Astronomy & Astrophysics A Short Course Tom Loredo Dept. of Astronomy, Cornell University p.1/37 Five Lectures Overview of Bayesian Inference From Gaussians to Periodograms Learning

More information

Parameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1

Parameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1 Parameter Estimation William H. Jefferys University of Texas at Austin bill@bayesrules.net Parameter Estimation 7/26/05 1 Elements of Inference Inference problems contain two indispensable elements: Data

More information

Interval Estimation for the Ratio and Difference of Two Lognormal Means

Interval Estimation for the Ratio and Difference of Two Lognormal Means UW Biostatistics Working Paper Series 12-7-2005 Interval Estimation for the Ratio and Difference of Two Lognormal Means Yea-Hung Chen University of Washington, yeahung@u.washington.edu Xiao-Hua Zhou University

More information

Likelihood Inference for Lattice Spatial Processes

Likelihood Inference for Lattice Spatial Processes Likelihood Inference for Lattice Spatial Processes Donghoh Kim November 30, 2004 Donghoh Kim 1/24 Go to 1234567891011121314151617 FULL Lattice Processes Model : The Ising Model (1925), The Potts Model

More information

Stat 451 Lecture Notes Monte Carlo Integration

Stat 451 Lecture Notes Monte Carlo Integration Stat 451 Lecture Notes 06 12 Monte Carlo Integration Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapter 6 in Givens & Hoeting, Chapter 23 in Lange, and Chapters 3 4 in Robert & Casella 2 Updated:

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45 Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved

More information

Bayesian inference for factor scores

Bayesian inference for factor scores Bayesian inference for factor scores Murray Aitkin and Irit Aitkin School of Mathematics and Statistics University of Newcastle UK October, 3 Abstract Bayesian inference for the parameters of the factor

More information

Frequentist-Bayesian Model Comparisons: A Simple Example

Frequentist-Bayesian Model Comparisons: A Simple Example Frequentist-Bayesian Model Comparisons: A Simple Example Consider data that consist of a signal y with additive noise: Data vector (N elements): D = y + n The additive noise n has zero mean and diagonal

More information

New Bayesian methods for model comparison

New Bayesian methods for model comparison Back to the future New Bayesian methods for model comparison Murray Aitkin murray.aitkin@unimelb.edu.au Department of Mathematics and Statistics The University of Melbourne Australia Bayesian Model Comparison

More information

A union of Bayesian, frequentist and fiducial inferences by confidence distribution and artificial data sampling

A union of Bayesian, frequentist and fiducial inferences by confidence distribution and artificial data sampling A union of Bayesian, frequentist and fiducial inferences by confidence distribution and artificial data sampling Min-ge Xie Department of Statistics, Rutgers University Workshop on Higher-Order Asymptotics

More information

Time Series and Dynamic Models

Time Series and Dynamic Models Time Series and Dynamic Models Section 1 Intro to Bayesian Inference Carlos M. Carvalho The University of Texas at Austin 1 Outline 1 1. Foundations of Bayesian Statistics 2. Bayesian Estimation 3. The

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Bayesian Inference. Chapter 4: Regression and Hierarchical Models

Bayesian Inference. Chapter 4: Regression and Hierarchical Models Bayesian Inference Chapter 4: Regression and Hierarchical Models Conchi Ausín and Mike Wiper Department of Statistics Universidad Carlos III de Madrid Master in Business Administration and Quantitative

More information

Approximate Bayesian computation for spatial extremes via open-faced sandwich adjustment

Approximate Bayesian computation for spatial extremes via open-faced sandwich adjustment Approximate Bayesian computation for spatial extremes via open-faced sandwich adjustment Ben Shaby SAMSI August 3, 2010 Ben Shaby (SAMSI) OFS adjustment August 3, 2010 1 / 29 Outline 1 Introduction 2 Spatial

More information

FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE

FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE Donald A. Pierce Oregon State Univ (Emeritus), RERF Hiroshima (Retired), Oregon Health Sciences Univ (Adjunct) Ruggero Bellio Univ of Udine For Perugia

More information

Model comparison and selection

Model comparison and selection BS2 Statistical Inference, Lectures 9 and 10, Hilary Term 2008 March 2, 2008 Hypothesis testing Consider two alternative models M 1 = {f (x; θ), θ Θ 1 } and M 2 = {f (x; θ), θ Θ 2 } for a sample (X = x)

More information

Third-order inference for autocorrelation in nonlinear regression models

Third-order inference for autocorrelation in nonlinear regression models Third-order inference for autocorrelation in nonlinear regression models P. E. Nguimkeu M. Rekkas Abstract We propose third-order likelihood-based methods to derive highly accurate p-value approximations

More information

David Giles Bayesian Econometrics

David Giles Bayesian Econometrics David Giles Bayesian Econometrics 1. General Background 2. Constructing Prior Distributions 3. Properties of Bayes Estimators and Tests 4. Bayesian Analysis of the Multiple Regression Model 5. Bayesian

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b)

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b) LECTURE 5 NOTES 1. Bayesian point estimators. In the conventional (frequentist) approach to statistical inference, the parameter θ Θ is considered a fixed quantity. In the Bayesian approach, it is considered

More information

Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling

Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Jae-Kwang Kim 1 Iowa State University June 26, 2013 1 Joint work with Shu Yang Introduction 1 Introduction

More information

A BAYESIAN MATHEMATICAL STATISTICS PRIMER. José M. Bernardo Universitat de València, Spain

A BAYESIAN MATHEMATICAL STATISTICS PRIMER. José M. Bernardo Universitat de València, Spain A BAYESIAN MATHEMATICAL STATISTICS PRIMER José M. Bernardo Universitat de València, Spain jose.m.bernardo@uv.es Bayesian Statistics is typically taught, if at all, after a prior exposure to frequentist

More information

Paradoxical Results in Multidimensional Item Response Theory

Paradoxical Results in Multidimensional Item Response Theory UNC, December 6, 2010 Paradoxical Results in Multidimensional Item Response Theory Giles Hooker and Matthew Finkelman UNC, December 6, 2010 1 / 49 Item Response Theory Educational Testing Traditional model

More information