LTCC Likelihood Theory, Week 3 (November 19)


1 Last week

Nuisance parameters: $f(y; \psi, \lambda)$, $l(\psi, \lambda)$.

Posterior marginal density (Laplace approximation):
$$\pi_m(\psi) \doteq \frac{c}{(2\pi)^{q/2}}\, e^{l_p(\psi) - l_p(\hat\psi)}\, |j_p(\hat\psi)|^{1/2}\, \frac{\pi(\psi, \hat\lambda_\psi)}{\pi(\hat\psi, \hat\lambda)}\, \frac{|j_{\lambda\lambda}(\hat\psi, \hat\lambda)|^{1/2}}{|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|^{1/2}}, \qquad l_p(\psi) = l(\psi, \hat\lambda_\psi).$$

Exact conditional density, for the linear exponential family $f(y; \psi, \lambda) = \exp\{\psi^T s_1 + \lambda^T s_2 - k(\psi, \lambda)\}\, h(y)$:
$$f(s_1 \mid s_2; \psi) \doteq \frac{c}{(2\pi)^{q/2}}\, e^{l_p(\psi) - l_p(\hat\psi)}\, |j_p(\hat\psi)|^{1/2}\, \frac{|j_{\lambda\lambda}(\hat\psi, \hat\lambda)|^{1/2}}{|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|^{1/2}}.$$

2 ... last week

Tail area approximation:
$$p(\psi) \doteq \Phi(r^*) = \Phi\Big(r + \frac{1}{r}\log\frac{Q}{r}\Big), \qquad r = \pm\big[2\{l(\hat\theta) - l(\theta)\}\big]^{1/2} \ \text{or}\ \pm\big[2\{l_p(\hat\psi) - l_p(\psi)\}\big]^{1/2}.$$

Bayesian:
$$Q = q_B = l'(\theta)\, j^{-1/2}(\hat\theta)\, \frac{\pi(\hat\theta)}{\pi(\theta)} \quad\text{or}\quad l_p'(\psi)\, j_p^{-1/2}(\hat\psi)\, \frac{\pi(\hat\psi, \hat\lambda)}{\pi(\psi, \hat\lambda_\psi)}\, \rho(\psi, \hat\psi).$$

Frequentist:
$$Q = q = (\hat\theta - \theta)\, j^{1/2}(\hat\theta) \quad\text{or}\quad (\hat\psi - \psi)\, j_p^{1/2}(\hat\psi)\, \rho^{-1}(\psi, \hat\psi), \qquad \rho(\psi, \hat\psi) = \frac{|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|^{1/2}}{|j_{\lambda\lambda}(\hat\psi, \hat\lambda)|^{1/2}}.$$
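Since this $\Phi\{r + (1/r)\log(Q/r)\}$ form recurs in the examples below, a one-line R helper (names of my own choosing, not from the slides) is convenient:

r.star <- function(r, q) r + log(q / r) / r    # modified likelihood root from r and q (or Q)
# e.g. a lower-tail p-value: pnorm(r.star(r, q))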


4 This week

1. elimination of nuisance parameters and adjusted profile likelihood
2. approximate inference in location and transformation models
3. tail area approximations again
4. tangent exponential model
5. examples

5 Nuisance parameters

$$\pi_m(\psi) \doteq \frac{c}{(2\pi)^{q/2}}\, e^{l_p(\psi) - l_p(\hat\psi)}\, |j_p(\hat\psi)|^{1/2}\, \frac{\pi(\psi, \hat\lambda_\psi)}{\pi(\hat\psi, \hat\lambda)}\, \frac{|j_{\lambda\lambda}(\hat\psi, \hat\lambda)|^{1/2}}{|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|^{1/2}} = \frac{c}{(2\pi)^{q/2}}\, e^{l_A(\psi) - l_A(\hat\psi)}\, |j_p(\hat\psi)|^{1/2}\, \frac{\pi(\psi, \hat\lambda_\psi)}{\pi(\hat\psi, \hat\lambda)},$$
where
$$l_A(\psi) = l_p(\psi) - \tfrac12 \log\frac{|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|}{|j_{\lambda\lambda}(\hat\psi, \hat\lambda)|}.$$

Similarly,
$$f(s_1 \mid s_2; \psi) \doteq \frac{c}{(2\pi)^{q/2}}\, |j_p(\hat\psi)|^{1/2}\, e^{l_p(\psi) - l_p(\hat\psi)}\, \frac{|j_{\lambda\lambda}(\hat\psi, \hat\lambda)|^{1/2}}{|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|^{1/2}} = \frac{c}{(2\pi)^{q/2}}\, |j_p(\hat\psi)|^{1/2}\, e^{l_A(\psi) - l_A(\hat\psi)}.$$

6 Adjusted profile log-likelihood

$$l_A(\psi) = l_p(\psi) + A(\psi) = l(\psi, \hat\lambda_\psi) + A(\psi), \qquad A(\psi)\ \text{assumed to be}\ O_p(1).$$

generic form:
$$A_{FR}(\psi) = -\tfrac12 \log|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)| - \log\Big|\frac{d(\lambda)}{d\hat\lambda_\psi}\Big| \qquad \text{(Fraser, 2003)}$$

closely related:
$$A_{BN}(\psi) = -\tfrac12 \log|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)| + \log\Big|\frac{d\hat\lambda}{d\hat\lambda_\psi}\Big| \qquad \text{(SM; BN 1983)}$$

if $i_{\psi\lambda}(\theta) = 0$, then $\hat\lambda_\psi = \hat\lambda + O_p(n^{-1})$, suggesting we ignore the last term
if $\psi$ is scalar, then in principle we can find a parametrization $(\psi, \lambda)$ in which $i_{\psi\lambda}(\theta) = 0$ (SM)
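A minimal numerical sketch (not from the slides) of what the adjustment does, for the normal model with interest parameter $\psi = \sigma^2$ and nuisance $\lambda = \mu$, where $j_{\lambda\lambda}(\psi, \hat\lambda_\psi) = n/\psi$:

# profile and adjusted profile log-likelihood for psi = sigma^2, lambda = mu
set.seed(1)
y <- rnorm(10, mean = 5, sd = 2)
n <- length(y); ss <- sum((y - mean(y))^2)
lp <- function(psi) -n / 2 * log(psi) - ss / (2 * psi)   # profile log-likelihood
lA <- function(psi) lp(psi) - 0.5 * log(n / psi)         # adjustment: j_{ll}(psi, lambdahat_psi) = n/psi
c(profile.max  = optimize(lp, c(0.01, 100), maximum = TRUE)$maximum,   # ~ ss/n
  adjusted.max = optimize(lA, c(0.01, 100), maximum = TRUE)$maximum,   # ~ ss/(n - 1)
  ss.n = ss / n, ss.nm1 = ss / (n - 1))

The adjustment moves the maximizer from $ss/n$ to $ss/(n-1)$, the familiar finite-sample correction for the variance.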

7 Log-likelihoods

marginal: $l_m(\psi) = \log f_m(t; \psi)$, when $f(y; \psi, \lambda) = f_m(t; \psi)\, f_c(s \mid t; \psi, \lambda)$
conditional: $l_c(\psi) = \log f_c(s \mid t; \psi)$, when $f(y; \psi, \lambda) = f_m(t; \psi, \lambda)\, f_c(s \mid t; \psi)$
adjusted: $l_A(\psi) = l_p(\psi) - \tfrac12 \log|j_{\lambda\lambda}(\psi, \hat\lambda_\psi)|$
refinements: $l_{FR}(\psi)$, $l_{BN}(\psi)$, ... (Sartori, 2003; Nov12 Prob 2.2)


10 Marginal inference, conditional inference

- conditional inference: linear exponential families (saddlepoint approximation)
- marginal inference: transformation models
- Bayesian posterior: Laplace approximation
- tangent exponential models: exponential families with a non-linear parameter of interest, and everything else

11 Location model

location model: $f(y_i; \theta) = f_0(y_i - \theta)$, $\quad y_i, \theta \in \mathbb{R}$
$f(y; \theta) = $ , $\quad l(\theta; y) = $
change of variable $(y_1, \dots, y_n) \to (\hat\theta, a_1, \dots, a_n)$, $\quad a_i = $
$f(\hat\theta, a; \theta) = $

12 ... location models

$f(\hat\theta, a; \theta) = $
$f(a; \theta) = $
$f(\hat\theta \mid a; \theta) = $

13 Approximate p-values

$$\int_{-\infty}^{\hat\theta} f(\hat\vartheta \mid a; \theta)\, d\hat\vartheta \doteq \int_{-\infty}^{\hat\theta} c\, |j(\hat\theta)|^{1/2}\, e^{l(\theta; \hat\theta, a) - l(\hat\theta; \hat\theta, a)}\, d\hat\theta = \int_{-\infty}^{r} c\, e^{-\frac12 r^2}\, \frac{r}{q}\, dr \doteq \Phi\Big(r + \frac1r \log\frac{q}{r}\Big),$$
using the change of variable
$$r\, dr = \{l_{;\hat\theta}(\hat\theta; \hat\theta, a) - l_{;\hat\theta}(\theta; \hat\theta, a)\}\, d\hat\theta, \qquad q = \{l_{;\hat\theta}(\hat\theta; \hat\theta, a) - l_{;\hat\theta}(\theta; \hat\theta, a)\}\, j(\hat\theta)^{-1/2} = l_\theta(\theta)\, j(\hat\theta)^{-1/2}\ \text{in the location model}.$$
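To see how sharp this tail-area approximation can be, here is a small R sketch (not from the slides) for a one-parameter exponential family where the exact answer is available: $Y_1, \dots, Y_n$ i.i.d. exponential with rate $\theta$, using the Wald-type $q = (\hat\theta - \theta)\, j^{1/2}(\hat\theta)$ from the earlier slides.

# exact p-value from the gamma distribution of s = sum(y)
n <- 5; s <- 12; theta0 <- 1
theta.hat <- n / s
l <- function(th) n * log(th) - th * s
r <- sign(theta.hat - theta0) * sqrt(2 * (l(theta.hat) - l(theta0)))   # likelihood root
q <- (theta.hat - theta0) * sqrt(n / theta.hat^2)                      # j(theta.hat) = n / theta.hat^2
rstar <- r + log(q / r) / r
c(exact       = pgamma(s, shape = n, rate = theta0, lower.tail = FALSE),
  first.order = pnorm(r),
  third.order = pnorm(rstar))
# even with n = 5 the third-order value tracks the exact tail probability closely;
# the first-order value pnorm(r) is noticeably further off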

14 ... approximate p-values

15 Nuisance parameters

Regression-scale models: $y_i = x_i^T\beta + \sigma\epsilon_i$, $\quad \epsilon_i \sim f_0(\cdot)$
$$f(y; \beta, \sigma^2) = f_c(\hat\beta, \hat\sigma \mid a; \beta, \sigma)\, f_m(a), \qquad a = $$
parameter of interest $\beta_j$, say: $t_j = (\hat\beta_j - \beta_j)/v_j$ has marginal density free of $\beta_{(j)}$, $\sigma$; $\quad v_j = \mathrm{Var}(\hat\beta_j) = $
$t_{p+1} = \log\hat\sigma - \log\sigma$ has marginal density free of $\beta$
p-value functions using the Laplace approximation again

16 These are all special cases

- to compute a p-value we need to integrate a density approximation
- this density approximation is to either a marginal or a conditional density
- the integration involves the derivative of the log-likelihood with respect to the data
- simplify the density approximation to incorporate only this first derivative
- use the resulting simpler form as the basis for approximate likelihood-based inference

17 Tangent exponential model

$$f_{TEM}(s; \theta) = c\, e^{l\{\varphi(\theta); y\} + \varphi(\theta)^T s}\, |j_{\varphi\varphi}\{\hat\varphi(s); s\}|^{-1/2}$$

p-value $\doteq \Phi(r^*) = \Phi\Big(r + \frac1r \log\frac{Q}{r}\Big)$

scalar parameter:
$$Q = \{\varphi(\hat\theta) - \varphi(\theta)\}\, j_{\varphi\varphi}^{1/2}(\hat\varphi) = \frac{\varphi(\hat\theta) - \varphi(\theta)}{\varphi_\theta(\hat\theta)}\, j_{\theta\theta}^{1/2}(\hat\theta)$$

with nuisance parameters:
$$Q = \frac{|\varphi(\hat\theta) - \varphi(\hat\theta_\psi) \;\;\; \varphi_\lambda(\hat\theta_\psi)|}{|\varphi_\theta(\hat\theta)|}\, \frac{|j(\hat\theta)|^{1/2}}{|j_{\lambda\lambda}(\hat\theta_\psi)|^{1/2}}$$


19 ... tangent exponential model

$$\varphi(\theta) = l_{;V}(\theta; y) = \sum_{i=1}^n \frac{\partial l(\theta; y)}{\partial y_i}\, V_i, \qquad V_i = -\Big(\frac{\partial z_i}{\partial y_i}\Big)^{-1} \frac{\partial z_i}{\partial \theta}\bigg|_{(\hat\theta, y)}$$

(BDR, Ch. 8.4, 8.5)
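A quick check of this construction (not on the slides): for an i.i.d. exponential sample with rate $\theta$, take the pivotal quantities $z_i = \theta y_i$. Then $\partial z_i/\partial y_i = \theta$ and $\partial z_i/\partial\theta = y_i$, so $V_i = -y_i/\hat\theta$, and since $\partial l/\partial y_i = -\theta$,
$$\varphi(\theta) = \sum_{i=1}^n (-\theta)(-y_i/\hat\theta) = \frac{\theta}{\hat\theta}\sum_{i=1}^n y_i,$$
an affine function of the canonical parameter $\theta$; in this full exponential family the tangent exponential model reproduces the exact model, as it should.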

20 Example: top quark

Model: $Y \sim \text{Poisson}(\mu + b)$, data $y = 27$, $b = 6.7$ known (Abe et al., 1995).

mid p-value: $\Pr(Y > 27) + \tfrac12 \Pr(Y = 27) = $
approximation: $\Phi(r^*) = \Phi\big(r + \tfrac1r \log\tfrac{q}{r}\big) = $
continuity correction: $\Phi\{r^*(y - \tfrac12)\} \doteq \Pr(Y \geq 27) = $
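An R sketch of these calculations (not from the slides; the canonical log-mean parametrization is assumed for $q$):

# Y ~ Poisson(mu + b); assess mu = 0 with y = 27, b = 6.7
y <- 27; b <- 6.7
mid.p <- ppois(y, b, lower.tail = FALSE) + 0.5 * dpois(y, b)   # Pr(Y > y) + Pr(Y = y)/2
loglik <- function(mu) dpois(y, mu + b, log = TRUE)
mu.hat <- y - b
r <- sqrt(2 * (loglik(mu.hat) - loglik(0)))                    # likelihood root (mu.hat > 0 here)
q <- (log(mu.hat + b) - log(b)) * sqrt(y)                      # canonical scale: j(theta.hat)^{1/2} = sqrt(y)
c(mid.p       = mid.p,
  first.order = pnorm(r, lower.tail = FALSE),
  third.order = pnorm(r + log(q / r) / r, lower.tail = FALSE))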

21 ... top quark

Comparison of $\Pr(Y \geq y^0; \mu = 0)$ and $\Pr(Y > y^0; \mu = 0)$ as computed exactly, by the mid-p-value, by $\Phi(r^*)$, and by the normal approximations $N(\theta, \theta)$ and $N(\theta, \hat\theta)$.

22 Two-sample comparison

data on cost of treatment: standard vs new treatment (two groups)

model: $Y_{1i} \sim \text{Exp}(\mu_1)$, $\quad Y_{2i} \sim \text{Exp}(\mu_2)$; parameter of interest $\psi = \mu_1/\mu_2$

Exact inference: $(\bar Y_{1\cdot}/\mu_1)\,/\,(\bar Y_{2\cdot}/\mu_2) \sim F_{2n,2m}$

exact 95% confidence interval for $\psi$: (0.98, 4.19); approximate 95% confidence interval for $\psi$: (0.98, 4.185)

(Evans et al.)
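A sketch of the exact interval computation in R (the cost data are not reproduced, so the group sizes and means below are hypothetical; $\mu_1$ and $\mu_2$ are taken to be the exponential means, matching the pivot above):

# psi = mu_1 / mu_2; (ybar1/mu_1) / (ybar2/mu_2) ~ F_{2n,2m}, so invert the F pivot
n <- 20; m <- 20                  # hypothetical group sizes
ybar1 <- 150; ybar2 <- 75         # hypothetical group mean costs
ratio <- ybar1 / ybar2
ratio / qf(c(0.975, 0.025), df1 = 2 * n, df2 = 2 * m)   # exact 95% interval (lower, upper)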

23 Logistic regression

The first ten of 79 sets of observations on the physical characteristics of urine; presence/absence of calcium oxalate crystals is indicated by 1/0; two cases have missing values. Columns: case, crystals, specific gravity, pH, osmolarity, conductivity, urea, calcium. (Andrews & Herzberg, 1985)

24 Model: independent binary responses $Y_1, \dots, Y_n$ with
$$\Pr(Y_i = 1) = \frac{\exp(x_i^T\beta)}{1 + \exp(x_i^T\beta)}$$

Fitting the generalized linear model in R:

data(urine)
fit <- glm(r ~ gravity + ph + osmo + cond + urea + calc, family = binomial, data = urine)
summary(fit)

The summary table gives Estimate, Std. Error, z value and Pr(>|z|) for (Intercept), gravity, ph, osmo, cond, urea and calc; urea is significant at the 5% level (*) and calc at the 1% level (**).
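For comparison with the higher-order intervals on the next slide, a first-order profile-likelihood interval for the urea coefficient can be obtained directly (profiling for glm objects is provided by MASS):

library(MASS)                        # provides confint() profiling for glm fits
confint(fit, parm = "urea")          # first-order profile likelihood interval
confint.default(fit, parm = "urea")  # Wald interval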

25 A closer look at the coefficient of urea

Lower bound, upper bound and p-value for 0, based on $\Phi(q)$, $\Phi(r)$ and $\Phi(r^*)$:

library(cond)   # part of the hoa package bundle on CRAN
urine.cond.urea <- cond.glm(fit, offset = urea)
summary(urine.cond.urea, test = 0)

The test-statistics part of the output (hypothesis: coef(urea) = 0) reports the statistic and tail probability for the Wald pivot, the Wald pivot (cond. MLE), the likelihood root, the modified likelihood root, and the modified likelihood root (cont. corr.).

26 A closer look at the coefficient of urea (continued)

Figure: profile and modified profile log-likelihoods for the urea coefficient, together with the corresponding lower bounds, upper bounds and p-values for 0 from $\Phi(q)$, $\Phi(r)$ and $\Phi(r^*)$.

27 Several 2 × 2 tables

Data: for each institution, responses $y_1$ out of $m_1$ in one group and $y_2$ out of $m_2$ in the other (Lipsitz et al., 1988, Biometrics).

28 ... matched pairs

Model: $Y_{1i} \sim \text{Binomial}(m_{1i}, p_{1i})$, $\quad Y_{2i} \sim \text{Binomial}(m_{2i}, p_{2i})$
parameter of interest: $\psi = p_{2i} - p_{1i}$; nuisance parameters $p_{1i}$, $i = 1, \dots$
inference for $\psi$: lower and upper bounds, point estimate, and p-value for $\psi = 0$ from $\Phi(r)$ and $\Phi(r^*)$.

29 Figures: pivots (likelihood root $r$ and $q$) as functions of $\psi$; profile and modified profile log-likelihoods as functions of $\psi$.

30 Example G: Cox & Snell, 1980

Nuclear power station data: variables cost, date, T1, T2, cap, PR, NE, CT, BW, N, PT; n = 32, d = 8.

31 Linear regression, non-normal errors

Model: $Y_i = \beta_0 + x_i^T\beta + \sigma\epsilon_i$, with $\epsilon \sim N(0, 1)$ or $\epsilon \sim t_\nu$.

Estimates with standard errors and z values are given for three fits: normal errors, $t_4$ errors with first-order inference, and $t_4$ errors with third-order inference.

term       SE (normal)   SE (t4, 1st order)   SE (t4, 3rd order)   z (t4, 3rd order)
Constant   3.140         3.67                 3.70                 3.21
date       0.043         0.048                0.049                4.02
log(cap)   0.119         0.113                0.129                5.31
NE         0.074         0.077                0.080                2.97
CT         0.060         0.054                0.063                2.26
log(n)     0.042         0.043                0.048                1.51
PT         0.114         0.101                0.110                2.42

First-order and third-order results for ν, log(n), PT.

32

library(marg)   # part of the hoa package bundle on CRAN
data(nuclear)

# Fit normal-theory linear model and examine its contents:
nuc.norm <- lm(log(cost) ~ date + log(cap) + NE + CT + log(n) + PT, data = nuclear)
summary(nuc.norm)

# Fit linear model with t errors and 4 df and examine its contents:
nuc.t4 <- rsm(log(cost) ~ date + log(cap) + NE + CT + log(n) + PT, data = nuclear, family = student(4))
summary(nuc.t4)
plot(nuc.t4)

# Conditional analysis for partial turnkey guarantee:
nuc.t4.pt <- cond(nuc.t4, offset = PT)
summary(nuc.t4.pt)
plot(nuc.t4.pt)

# For conditional analysis of other covariates, replace PT by log(n), ...

33 Type II censored data

40 units on test, 28 failures at (log) times ...

Weibull (log failure time, extreme value) model:
$$f(y; \mu, \sigma) = \frac{1}{\sigma}\, e^{(y - \mu)/\sigma}\, \exp\{-e^{(y - \mu)/\sigma}\}$$

90% confidence intervals:

                    mu                sigma
Φ(r)                (-0.116, 0.476)   (0.700, 1.217)
Φ(r*)               (-0.107, 0.510)   (0.743, 1.320)
Exact (num. int.)   (-0.11, 0.51)     (0.724, 1.277)

(Lawless 2003, Ch. 5; Wong & Wu 2003)
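A sketch of the corresponding first-order fit in R (the 28 observed log failure times are not reproduced, so the data below are simulated; survreg gives the Wald-type results, while the $\Phi(r^*)$ and exact columns above need the higher-order machinery):

library(survival)
set.seed(2)
n <- 40; nf <- 28
tt <- sort(rweibull(n, shape = 1.3, scale = 2))      # simulated failure times
time   <- c(tt[1:nf], rep(tt[nf], n - nf))           # Type II censoring at the 28th failure
status <- c(rep(1, nf), rep(0, n - nf))
fit <- survreg(Surv(time, status) ~ 1, dist = "weibull")
est <- unname(coef(fit)); se <- sqrt(fit$var[1, 1])
c(mu = est, sigma = fit$scale)                       # location and scale on the log-time scale
est + c(-1, 1) * qnorm(0.95) * se                    # first-order 90% Wald interval for mu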

34 Vector parameter of interest

- use the tangent exponential model (or the usual exponential family model)
- construct a scalar parameter of interest representing direction in the sample space
- apply the higher-order approximation

Example: $y \sim N(\mu, \Sigma)$; $H_0$: $\Sigma^{-1}$ is tri-diagonal, i.e. first-order Markov dependence in a graphical model.

Coverage comparison: nominal (%) vs first order vs second order.

