Accurate directional inference for vector parameters


1  Accurate directional inference for vector parameters
Nancy Reid, York University, February 26, 2016
with Don Fraser, Nicola Sartori, Anthony Davison

2  Parametric models and likelihood
model f(y; θ), θ ∈ R^p
data y = (y_1, ..., y_n), independent observations
log-likelihood function ℓ(θ; y) = log f(y; θ)
parameter of interest: θ = (ψ, λ), ψ ∈ R^d
likelihood inference: w(ψ) = 2{ℓ(ψ̂, λ̂) − ℓ(ψ, λ̂_ψ)} ∼ χ²_d, approximately
Bartlett-type correction: w_B(ψ) = w(ψ)/{1 + B(ψ)}, where 1 + B(ψ) = E{w(ψ)}/d
Skovgaard correction: w*(ψ) = w(ψ){1 − log γ(ψ)/w(ψ)}²    Skovgaard, 2001
both corrected statistics ∼ χ²_d with error O_p(n^{−2})
[Figure: likelihood contours in the (ψ_1, ψ_2) plane]
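A minimal R sketch (not code from the talk): w(ψ_0) and its first-order χ²_d p-value come from two fits of the same model; the corrections B(ψ) and γ(ψ) need the higher-order quantities discussed later, so they are not shown here.

## Minimal sketch, not from the talk: the likelihood ratio statistic w(psi0)
## from a full and a constrained fit, with its first-order chi-squared p-value.
w_lrt <- function(fit_full, fit_constrained, d) {
  w <- 2 * as.numeric(logLik(fit_full) - logLik(fit_constrained))
  c(w = w, p_first_order = pchisq(w, df = d, lower.tail = FALSE))
}
## example usage, with d = dim(psi) = 2:
##   w_lrt(glm(y ~ x1 + x2, family = poisson),
##         glm(y ~ 1,       family = poisson), d = 2)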

3  Example: 2 × 3 contingency table
activity amongst psychiatric patients    Everitt, 1992
[Table: counts of retarded / not retarded activity (rows) for affective disorders, schizophrenics, neurotics (columns); the counts are not reproduced in this transcription]
model: log-linear, y ∼ Poisson, log{E(y)} = Xθ, θ ∈ R^6
log-likelihood ℓ(θ; y) = θᵀXᵀy − 1ᵀexp(Xθ) = θᵀs − c(θ)
θ = (ψ, λ), ψ ∈ R², λ ∈ R⁴;  (ψ_1, ψ_2) are the interaction parameters
H_0: ψ = ψ_0 = (0, 0), i.e. independence
log-likelihood ℓ(ψ, λ; y) = ψᵀs_1 + λᵀs_2 − c(ψ, λ)

4  Testing ψ = 0
[Figure: likelihood contours in (ψ_1, ψ_2)]
p-values compared (numerical values not reproduced in this transcription):
from w(ψ_0) ∼ χ²_2, from w*(ψ_0), the directional p-value, and the exact p-value
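A hedged R sketch of the first-order and exact calculations for a 2 × 3 table; the counts below are placeholders, since Everitt's (1992) values are not reproduced in this transcription.

## Sketch only: placeholder counts, not Everitt's data.
counts <- c(10, 12, 8,
             4, 15, 9)
tab <- data.frame(
  y   = counts,
  row = factor(rep(c("retarded", "not_retarded"), each = 3)),
  col = factor(rep(c("affective", "schizophrenic", "neurotic"), times = 2))
)
fit_indep <- glm(y ~ row + col, family = poisson, data = tab)  # H0: psi = (0, 0)
fit_full  <- glm(y ~ row * col, family = poisson, data = tab)  # saturated: psi free
w <- deviance(fit_indep) - deviance(fit_full)                  # likelihood ratio statistic
pchisq(w, df = 2, lower.tail = FALSE)                          # first-order p-value, d = 2
fisher.test(matrix(counts, nrow = 2, byrow = TRUE))            # exact conditional test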

5  Linear exponential family model
model f(y; θ) = exp[ϕ(θ)ᵀu(y) − c{ϕ(θ)} − d(y)],   y_1, ..., y_n i.i.d.
sufficient statistic: f(s; θ) = ∫_{y: s(y)=s} f(y; θ) dy = exp[ϕ(θ)ᵀs − n c{ϕ(θ)} − d(s)]
reduce dimension from n to p by marginalization
linear parameter of interest: ϕ(θ) = θ = (ψ, λ)
model f(s_1, s_2; ψ, λ) = exp{ψᵀs_1 + λᵀs_2 − n c(ψ, λ) − d(s)}
conditional density f(s_1 | s_2; ψ) = exp{ψᵀs_1 − n c_2(ψ) − d_2(s_1)}
reduce dimension from p to d by conditioning

6  ... conditional density
f(s_1 | s_2; ψ) = exp{ψᵀs_1 − n c_2(ψ) − d_2(s_1)}
f(s_1 | s_2; ψ) = f(s_1, s_2; ψ, λ) / f(s_2; ψ, λ) ∝ f(s; ψ, λ), with s_2 held fixed
s_2 held fixed  ⇔  λ̂_ψ held fixed   (θ̂_ψ = (ψ, λ̂_ψ))
saddlepoint approximation:
    f(s_1 | s_2; ψ) ≐ c exp[ℓ(θ̂⁰_ψ; s) − ℓ{θ̂(s); s}] |j{θ̂(s); s}|^{−1/2},   s ∈ L⁰_ψ
centering: ℓ(θ; s) = ψᵀs_1 + λᵀs_2 + ℓ(θ; y⁰);  fix s⁰ = s(y⁰) = 0
plane L⁰_ψ = {s ∈ R^p : s_2 = 0} = {s ∈ R^p : λ̂_ψ = λ̂⁰_ψ}
θ̂(s): solution of ∂ℓ(θ; s)/∂θ = 0, the m.l.e. as a function of the variable s

7  ... conditional density
f(s_1 | s_2; ψ) ≐ c exp[ℓ(θ̂⁰_ψ; s) − ℓ{θ̂(s); s}] |j{θ̂(s); s}|^{−1/2},   s ∈ L⁰_ψ
[Table: the 2 × 3 psychiatric-patients table again; counts not reproduced here]
s_1 ∈ R², s_2 ∈ R⁴
L⁰_ψ: all 2 × 3 tables with the same row and column totals

8  Directional tests
measure the directed departure from H_0 within L⁰_ψ
s_ψ: expected value of s_1 under H_0
s⁰: observed value of s_1 (= 0, from the centering)
L_ψ: the line through these two points, s(t) = t s⁰ + (1 − t) s_ψ,   t ∈ R
[Figure: the relative log-likelihood S(t) along the line, and the line L_ψ shown in the parameter and sample spaces]

9  ... directional tests
L_ψ: s(t) = t s⁰ + (1 − t) s_ψ
[Figure: relative log-likelihood along the line, and the line in the sample space]
null hypothesis of independence: t = 0
observed value of s: t = 1
the p-value compares the probability beyond the observed point (from t = 1 outward) to the total probability along the line (from t = 0 outward)
a line in the sample space, a curve in the parameter space
like a two-sided p-value: Pr(response > observed | response > 0)

10  ... directional p-value
p-value = ∫_1^{t_max} t^{d−1} f{s(t); ψ} dt  /  ∫_0^{t_max} t^{d−1} f{s(t); ψ} dt
the factor t^{d−1} comes from the change to polar coordinates: s(t) runs along the line L_ψ, conditional on the direction s/‖s‖
Simplifications:
1: L_ψ ⊂ L⁰_ψ ⊂ R^p
2: ratio of two integrals, so drop any terms that don't depend on t
3: saddlepoint approximation
    f(s; ψ) ≐ c exp[ℓ(θ̂⁰_ψ; s) − ℓ{θ̂(s); s}] |j{θ̂(s); s}|^{−1/2},   s ∈ L⁰_ψ
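The whole computation then reduces to a ratio of two one-dimensional integrals. The R sketch below assumes a user-supplied h(t) proportional to the saddlepoint approximation f{s(t); ψ} along the line, and an upper limit t_max beyond which h is effectively zero; the demonstration h is made up so that the code runs.

## Sketch: directional p-value as a ratio of two 1-d integrals.
directional_pvalue <- function(h, d, t_max) {
  num <- integrate(function(t) t^(d - 1) * h(t), lower = 1, upper = t_max)$value
  den <- integrate(function(t) t^(d - 1) * h(t), lower = 0, upper = t_max)$value
  num / den
}
## In a real application h(t) comes from the constrained and unconstrained fits
## evaluated along s(t) = t * s0 + (1 - t) * s_psi; here it is a made-up curve.
h_demo <- function(t) exp(-2 * (t - 0.3)^2)
directional_pvalue(h_demo, d = 2, t_max = 10)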

11  2 × 3 table
[Figure: h(t) and t^{d−1} h(t) plotted against t along the line; specific values not reproduced in this transcription]

12  2 × 3 table
[Figure: likelihood contours in (ψ_1, ψ_2) at points along the line, t = 0, 0.5, 1, 2]

13  2 × 3 table: simulations
[Tables: empirical coverage at nominal levels for the likelihood ratio test, the directional test and Skovgaard's w*; values not reproduced in this transcription]

14  Example: comparison of normal variances
[Figure: diameter measurements by batch]
p-values compared (values not reproduced here): F-test, likelihood ratio statistic, directional, Skovgaard's w*, Bartlett's test
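A hedged R sketch of the two first-order competitors for this problem, using simulated measurements in place of the diameter-by-batch data (not reproduced here); the directional and Skovgaard p-values would need the saddlepoint machinery above.

## Sketch with simulated data standing in for the diameter/batch measurements.
set.seed(2)
k <- 3; n_i <- 10
g <- factor(rep(seq_len(k), each = n_i))
y <- rnorm(k * n_i)                                        # H0 (equal variances) holds here
## Likelihood ratio statistic for equal variances, normal model with group-specific means
s2_i <- tapply(y, g, function(v) mean((v - mean(v))^2))    # group variance m.l.e.s
s2_0 <- sum((y - ave(y, g))^2) / length(y)                 # pooled m.l.e. under H0
w <- sum(table(g) * log(s2_0 / s2_i))                      # approx chi-squared, k - 1 df
pchisq(w, df = k - 1, lower.tail = FALSE)
## Bartlett's test: a rescaled version of the same statistic
bartlett.test(y, g)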

15  Simulations: 3 groups, 10 observations per group
[Figure: simulated p-values against uniform quantiles for the likelihood ratio statistic, Bartlett's test, the directional test and Skovgaard's w*]

16  Simulations: 3 groups, 5 observations per group
[Figure: simulated p-values against uniform quantiles for the likelihood ratio statistic, Bartlett's test, the directional test and Skovgaard's w*]

17  Simulations: 1000 groups, 5 observations per group
[Figure: simulated p-values against uniform quantiles for the likelihood ratio statistic, Bartlett's test, the directional test and Skovgaard's w*]
dimension of ψ: 999;  dimension of the nuisance parameter: 1001

18  Example: covariance selection
model y_i ∼ N_q(µ, Λ^{−1});  the inverse covariance matrix Λ gives a linear exponential family
ℓ(θ; y) = (n/2) log|Λ| − (1/2) tr(Λ yᵀy) + 1ᵀy ξ − (n/2) ξᵀΛ^{−1}ξ,   sufficient statistics s_1, s_2
θ = (ξ, Λ) = (Λµ, Λ)
H_0: some off-diagonal elements of Λ are 0, i.e. ψ_1 = ··· = ψ_d = 0   (conditional independence)
need the constrained m.l.e.: use fitConGraph in the R package ggm
m.l.e. along the line: Λ̂^{−1}(t) = t Λ̂^{−1} + (1 − t) Λ̂₀^{−1}
f{s(t); ψ} ∝ |t Λ̂^{−1} + (1 − t) Λ̂₀^{−1}|^{(n−q−2)/2}
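A hedged R sketch of the constrained fit with fitConGraph from the ggm package, as named on the slide; the data, graph and dimensions are made up, and the returned components (Shat, dev, df) reflect my reading of the ggm interface rather than anything stated in the talk.

## Sketch, assuming the CRAN package ggm; data and graph are made up.
library(ggm)
library(MASS)                                   # for mvrnorm
set.seed(3)
q <- 4; n <- 60
Sigma <- diag(q); Sigma[1, 2] <- Sigma[2, 1] <- 0.4
Y <- mvrnorm(n, mu = rep(0, q), Sigma = Sigma)
S <- var(Y)                                     # sample covariance
## Undirected graph for H0: only the 1--2 edge is present, so all other
## off-diagonal elements of Lambda are constrained to zero.
amat <- matrix(0, q, q)
amat[1, 2] <- amat[2, 1] <- 1
dimnames(S) <- dimnames(amat) <- list(paste0("X", 1:q), paste0("X", 1:q))
fit0 <- fitConGraph(amat, S, n)                 # constrained m.l.e. under H0
Sigma0 <- fit0$Shat                             # fitted covariance, the inverse of Lambda-hat_0
pchisq(fit0$dev, df = fit0$df, lower.tail = FALSE)   # first-order LRT p-value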

19  ... covariance selection simulations
[Tables: empirical coverage at nominal levels for the likelihood ratio test, the directional test and Skovgaard's w*; values not reproduced in this transcription]
covariance matrix 11 × 11; dimension of ψ = 45; first-order Markov dependence

20  Tangent exponential model
reduces dimension from n to p   (dimension of y to dimension of θ)
every model f(y; θ) on R^n can be approximated by an exponential family model:
    f_TEM(s; θ) ds = exp{ϕ(θ)ᵀs + ℓ(θ)} h(s) ds
s is a score variable on R^p:  s(y) = ℓ_ϕ(θ̂⁰; y)
ℓ(θ) = ℓ(θ; y⁰) is the observed log-likelihood function
ϕ(θ) = ϕ(θ; y⁰) ∈ R^p is the canonical parameter, to be described
matches the log-likelihood function at y⁰, and its first derivative on the sample space at y⁰, by construction
implements conditioning on an approximate ancillary statistic   (contrast with the exact exponential family case)

21  Aside: the canonical parameter ϕ(θ)
if f(y; θ) is an exponential family, ϕ is sitting in the model
if not: find a pivotal quantity z_i = z_i(y_i; θ) with a fixed distribution   (example: (y_i − µ)/σ)
define V_i = (∂z_i/∂y_i)^{−1} (∂z_i/∂θ) |_{y = y⁰, θ = θ̂⁰},   a vector of length p
ϕ(θ) = ϕ(θ; y⁰) = Σ_{i=1}^n {∂ℓ(θ; y⁰)/∂y_i} V_i = ℓ_{;V}(θ; y⁰)
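A small numerical R sketch of this construction for the N(µ, σ²) model with pivot z_i = (y_i − µ)/σ, using made-up data; it is an illustration of the formula only, and ϕ is in any case determined only up to affine transformation.

## Sketch: canonical parameter phi(theta) for y_i ~ N(mu, sigma^2),
## pivot z_i = (y_i - mu)/sigma; made-up data.
set.seed(4)
y0   <- rnorm(8, mean = 1, sd = 2)              # observed data y^0
mu0  <- mean(y0)                                # components of theta-hat^0
sig0 <- sqrt(mean((y0 - mu0)^2))
z0   <- (y0 - mu0) / sig0
## V_i = (dz_i/dy_i)^{-1} dz_i/dtheta at (y^0, theta-hat^0): here sigma * (-1/sigma, -z_i)
V <- cbind(V_mu = -1, V_sigma = -z0)
## phi(theta) = sum_i dl(theta; y^0)/dy_i * V_i,  with dl/dy_i = -(y_i - mu)/sigma^2
phi <- function(mu, sigma) {
  dl_dy <- -(y0 - mu) / sigma^2
  colSums(dl_dy * V)                            # a 2-vector: the data-dependent reparametrization
}
phi(mu0, sig0)                                  # at the m.l.e.
phi(0, 1)                                       # at another parameter value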

22  Testing a value for ψ
eliminating the nuisance parameter: p → d
θ = (ψ, λ);  ϕ = ϕ(θ) is the canonical parameter of the TEM
with ψ fixed by the hypothesis, the nuisance parameter is integrated out by Laplace approximation
define the same plane in the score space, L⁰_ψ, on which ϕ̂_ψ is fixed at its observed value
f(s; ψ) = c exp{ℓ(ϕ̂⁰_ψ; s) − ℓ(ϕ̂(s); s)} |j_{ϕϕ}(ϕ̂(s); s)|^{−1/2} |j_{(λλ)}(ϕ̂⁰_ψ; s)|^{−1/2},   s ∈ L⁰_ψ
p-value = ∫_1^{t_max} t^{d−1} f{s(t); ψ} dt  /  ∫_0^{t_max} t^{d−1} f{s(t); ψ} dt,   s(t) along the line L_ψ

23  Example: marginal independence
y_i ∼ N_q(µ, Σ),   H_0: some entries of Σ are 0
estimate Σ under H_0 using fitCovGraph (R package ggm)
ℓ(Σ) = {(n − 1)/2} [log|ϕ(Σ)| − tr{ϕ(Σ) S}],   ϕ(Σ) = Σ^{−1},   S = sample covariance
f{s(t); ψ} ∝ |t Σ̂ + (1 − t) Σ̂₀|^{(n−q−2)/2} |j_{(λλ)}{Σ̂₀; s(t)}|^{−1/2}
j_{(λλ)} needs ∂ℓ(Σ)/∂σ_{jk} for the non-zero elements: with A_{kj} = Σ̂₀^{−1} (∂Σ/∂λ_k) Σ̂₀^{−1} (∂Σ/∂λ_j),
    j_{λ_j λ_k}{Σ̂₀; s(t)} = {(n − 1)/2} [ tr(A_{kj}) + t tr{(A_{kj} + A_{jk})(Σ̂₀^{−1} Σ̂ − I_q)} ]
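A short companion R sketch for the constrained fit under this H_0 with fitCovGraph from ggm (again with made-up data and graph, and with the return components assumed from the ggm interface):

## Companion sketch, assuming ggm::fitCovGraph; data and graph are made up.
library(ggm); library(MASS)
set.seed(5)
q <- 4; n <- 60
Y <- mvrnorm(n, mu = rep(0, q), Sigma = diag(q))
S <- var(Y)
## Covariance graph for H0: the 1--2 covariance is left free, every other
## off-diagonal entry of Sigma is set to zero.
amat <- matrix(0, q, q)
amat[1, 2] <- amat[2, 1] <- 1
dimnames(S) <- dimnames(amat) <- list(paste0("X", 1:q), paste0("X", 1:q))
fit0 <- fitCovGraph(amat, S, n)                 # constrained m.l.e. Sigma-hat_0
pchisq(fit0$dev, df = fit0$df, lower.tail = FALSE)   # first-order LRT p-value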

24  ... independence simulations: n = 60, 600; d = 1000
[Tables: empirical coverage at nominal levels for the likelihood ratio test, the directional test and Skovgaard's w*, for each setting; values not reproduced in this transcription]

25  Conclusion
a different way to assess vector parameters
incorporates information on the direction of departure
easy to compute: two model fits, plus a one-dimensional numerical integration
accurate conditionally, by construction, and unconditionally (checked by simulation)
can be used in models of practical interest
an exponential family model is not necessary: the approach is easily generalized using the approximate (tangent) exponential family model

26  ... conclusion
[Figure: likelihood contours in (ψ_1, ψ_2)]

27  ... conclusion
[Figure: the line in the sample space (s_1 coordinates)]

28  ... conclusion
[Figure: plotted against t]

29  References
Fraser, D.A.S. and Massam, H. (1985). Conical tests. Statistische Hefte.
Skovgaard, I.M. (1988). Saddlepoint expansions for directional test probabilities. J. R. Statist. Soc. B.
Skovgaard, I.M. (2001). Likelihood asymptotics. Scand. J. Statist.
Cheah, P.K., Fraser, D.A.S. and Reid, N. (1994). Multiparameter testing in exponential models. Biometrika.
Davison, A.C., Fraser, D.A.S., Reid, N. and Sartori, N. (2014). Accurate directional inference for vector parameters in linear exponential families. J. Amer. Statist. Assoc.
Fraser, D.A.S., Reid, N. and Sartori, N. (2016). Accurate directional inference for vector parameters. Submitted.
