Quantile Regression: Inference

Roger Koenker, University of Illinois at Urbana-Champaign
Aarhus: 21 June 2010

Inference for Quantile Regression

- Asymptotics of the Sample Quantiles
- QR Asymptotics in iid Error Models
- QR Asymptotics in Heteroscedastic Error Models
- Classical Rank Tests and the Quantile Regression Dual
- Inference on the Quantile Regression Process

Asymptotics for the Sample Quantiles

Minimizing \sum_{i=1}^n \rho_\tau(y_i - \xi), consider the gradient function

    g_n(\xi) = n^{-1} \sum_{i=1}^n \psi_\tau(y_i - \xi) = n^{-1} \sum_{i=1}^n (I(y_i < \xi) - \tau).

By convexity of the objective function, \{\hat\xi_\tau > \xi\} \Leftrightarrow \{g_n(\xi) < 0\}, and the DeMoivre-Laplace CLT yields, expanding F,

    \sqrt{n}(\hat\xi_\tau - \xi) \leadsto N(0, \omega^2(\tau, F)),  where  \omega^2(\tau, F) = \tau(1 - \tau)/f^2(F^{-1}(\tau)).

Classical Bahadur-Kiefer representation theory provides further refinement of this result.
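The variance formula \omega^2(\tau, F) = \tau(1 - \tau)/f^2(F^{-1}(\tau)) is easy to check by simulation. Here is a minimal pure-Python sketch, not part of the original slides; the standard exponential error distribution and the sample sizes are arbitrary illustrative choices:

```python
import math
import random
import statistics

def sample_quantile(ys, tau):
    """Empirical tau-th quantile: a minimizer of sum_i rho_tau(y_i - xi)."""
    s = sorted(ys)
    return s[max(0, math.ceil(len(s) * tau) - 1)]

random.seed(1)
tau, n, reps = 0.5, 400, 2000

# Standard exponential errors: F^{-1}(tau) = -log(1 - tau), f(x) = exp(-x).
xi = -math.log(1 - tau)
omega2 = tau * (1 - tau) / math.exp(-xi) ** 2   # tau(1-tau)/f^2(F^{-1}(tau)) = 1.0 here

zs = [math.sqrt(n) * (sample_quantile([random.expovariate(1.0) for _ in range(n)], tau) - xi)
      for _ in range(reps)]
print(statistics.variance(zs), omega2)          # the two should be close
```

With these choices the asymptotic variance is exactly 1, and the Monte Carlo variance of the rescaled sample medians should land within sampling error of it.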

Some Gory Details

Instead of a fixed \xi = F^{-1}(\tau), consider

    P\{\hat\xi_n > \xi + \delta/\sqrt{n}\} = P\{g_n(\xi + \delta/\sqrt{n}) < 0\},

where g_n \equiv g_n(\xi + \delta/\sqrt{n}) is a sum of iid terms with

    E g_n = E n^{-1} \sum_{i=1}^n (I(y_i < \xi + \delta/\sqrt{n}) - \tau)
          = F(\xi + \delta/\sqrt{n}) - \tau
          = f(\xi)\delta/\sqrt{n} + o(n^{-1/2}) \equiv \mu_n \delta + o(n^{-1/2}),
    V g_n = \tau(1 - \tau)/n + o(n^{-1}) \equiv \sigma_n^2 + o(n^{-1}).

Thus, by (a triangular array form of) the DeMoivre-Laplace CLT,

    P(\sqrt{n}(\hat\xi_n - \xi) > \delta) = \Phi((0 - \mu_n \delta)/\sigma_n) \to 1 - \Phi(\omega^{-1}\delta),

where \omega = \sigma_n/\mu_n = \sqrt{\tau(1 - \tau)}/f(F^{-1}(\tau)).

Finite Sample Theory for Quantile Regression

Let h \in H index the \binom{n}{p} p-element subsets of \{1, 2, ..., n\}, and let X(h), y(h) denote the corresponding submatrices and subvectors of X and y.

Lemma: \hat\beta = b(h) \equiv X(h)^{-1} y(h) is the \tau-th regression quantile iff \xi_h \in C, where

    \xi_h = \sum_{i \notin h} \psi_\tau(y_i - x_i'\hat\beta) x_i' X(h)^{-1},

C = [\tau - 1, \tau]^p, and \psi_\tau(u) = \tau - I(u < 0).

Theorem (KB, 1978): In the linear model with iid errors, \{u_i\} \sim F, f, the density of \hat\beta(\tau) is given by

    g(b) = \sum_{h \in H} \prod_{i \in h} f(x_i'(b - \beta(\tau)) + F^{-1}(\tau)) \, P(\xi_h(b) \in C) \, |\det(X(h))|.

Asymptotic behavior of \hat\beta(\tau) follows by (painful) consideration of the limiting form of this density; see also Knight and Goh (ET, 2009).
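The Lemma can be verified by brute force on a toy problem: enumerate every p-element subset h and check that the basic solutions b(h) passing the subgradient test are exactly the minimizers of the check-function objective. Below is a hypothetical sketch for p = 2, not from the slides; sign conventions for \psi_\tau vary across references, and the orientation used here is the one that reduces to the usual order-statistic rule in the one-sample case:

```python
import itertools
import random

def psi(u, tau):
    # psi_tau(u) = I(u < 0) - tau  (one common sign convention)
    return (1.0 if u < 0 else 0.0) - tau

def rho(u, tau):
    # check function rho_tau(u) = u * (tau - I(u < 0))
    return u * (tau - (1.0 if u < 0 else 0.0))

random.seed(7)
n, p, tau = 9, 2, 0.3
X = [(1.0, random.gauss(0, 1)) for _ in range(n)]
y = [x[1] + random.gauss(0, 1) for x in X]

results = []
for h in itertools.combinations(range(n), p):
    (a, b), (c, d) = X[h[0]], X[h[1]]
    det = a * d - b * c
    if abs(det) < 1e-12:
        continue
    y1, y2 = y[h[0]], y[h[1]]
    bh = ((d * y1 - b * y2) / det, (a * y2 - c * y1) / det)    # b(h) = X(h)^{-1} y(h)
    resid = [y[i] - bh[0] * X[i][0] - bh[1] * X[i][1] for i in range(n)]
    obj = sum(rho(r, tau) for r in resid)
    inv = ((d / det, -b / det), (-c / det, a / det))           # X(h)^{-1}
    xi = [0.0, 0.0]
    for i in range(n):
        if i in h:
            continue
        w = psi(resid[i], tau)
        xi[0] += w * (X[i][0] * inv[0][0] + X[i][1] * inv[1][0])
        xi[1] += w * (X[i][0] * inv[0][1] + X[i][1] * inv[1][1])
    in_C = all(tau - 1 <= z <= tau for z in xi)                # xi_h in [tau-1, tau]^p ?
    results.append((obj, in_C))

min_obj = min(obj for obj, _ in results)
winners = [obj for obj, ok in results if ok]
print(len(winners), min_obj)   # the subset(s) passing the test attain the minimum
```

For generic continuous data exactly one subset passes the test, and its objective value equals the minimum over all basic solutions.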

Asymptotic Theory of Quantile Regression I

In the classical linear model, y_i = x_i'\beta + u_i, with u_i iid from df F with density f(u) > 0 on its support \{u : 0 < F(u) < 1\}, the joint distribution of (\sqrt{n}(\hat\beta_n(\tau_i) - \beta(\tau_i)))_{i=1}^m is asymptotically normal with mean 0 and covariance matrix \Omega \otimes D^{-1}. Here \beta(\tau) = \beta + F_u^{-1}(\tau) e_1, e_1 = (1, 0, ..., 0)', x_{1i} \equiv 1, n^{-1} \sum x_i x_i' \to D, a positive definite matrix, and

    \Omega = ((\tau_i \wedge \tau_j - \tau_i \tau_j)/(f(F^{-1}(\tau_i)) f(F^{-1}(\tau_j))))_{i,j=1}^m.
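For a specified error distribution the \Omega matrix is straightforward to evaluate. A small sketch for standard normal errors at \tau \in \{0.25, 0.5, 0.75\}, with the normal quantile computed by bisection to keep the example dependency-free (the quantile grid is an arbitrary choice):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def Phi_inv(p):
    """Standard normal quantile via bisection on [-10, 10]."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

taus = [0.25, 0.5, 0.75]
# Omega_ij = (min(t_i, t_j) - t_i * t_j) / (f(F^{-1}(t_i)) * f(F^{-1}(t_j)))
Omega = [[(min(s, t) - s * t) / (phi(Phi_inv(s)) * phi(Phi_inv(t)))
          for t in taus] for s in taus]
print(Omega[1][1])   # tau = 0.5 diagonal: 0.25 / phi(0)^2 = pi/2 ≈ 1.5708
```

The \tau = 0.5 diagonal entry is exactly \pi/2, the familiar asymptotic variance of the sample median under standard normal errors.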

Asymptotic Theory of Quantile Regression II

When the response is conditionally independent over i, but not identically distributed, the asymptotic covariance matrix of \zeta(\tau) = \sqrt{n}(\hat\beta(\tau) - \beta(\tau)) is somewhat more complicated. Let \xi_i(\tau) = x_i'\beta(\tau), let f_i(\cdot) denote the corresponding conditional density, and define

    J_n(\tau_1, \tau_2) = (\tau_1 \wedge \tau_2 - \tau_1\tau_2) n^{-1} \sum_{i=1}^n x_i x_i',
    H_n(\tau) = n^{-1} \sum_{i=1}^n x_i x_i' f_i(\xi_i(\tau)).

Under mild regularity conditions on the \{f_i\} and \{x_i\}, we have joint asymptotic normality for (\zeta(\tau_1), ..., \zeta(\tau_m)) with covariance matrix

    V_n = (H_n(\tau_i)^{-1} J_n(\tau_i, \tau_j) H_n(\tau_j)^{-1})_{i,j=1}^m.

Making Sandwiches

The crucial ingredient of the QR sandwich is the conditional density f_i(\xi_i(\tau)), which can be estimated by a difference quotient. Differentiating the identity F(Q(t)) = t we get

    s(t) = dQ(t)/dt = 1/f(Q(t)),

sometimes called the sparsity function, so we can compute

    \hat f_i(x_i'\hat\beta(\tau)) = 2h_n/(x_i'(\hat\beta(\tau + h_n) - \hat\beta(\tau - h_n)))

with h_n = O(n^{-1/3}). Prudence suggests the modified version:

    \tilde f_i(x_i'\hat\beta(\tau)) = \max\{0, \hat f_i(x_i'\hat\beta(\tau))\}.

Various other strategies can be employed, including a variety of bootstrapping options. More on this in the first lab session.
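In the one-sample case the same difference quotient estimates the sparsity s(\tau) = 1/f(F^{-1}(\tau)) directly from empirical quantiles. A minimal sketch, assuming standard normal data; the bandwidth constant is an arbitrary illustrative choice, not a recommendation:

```python
import math
import random

def quantile(ys, t):
    """Simple empirical quantile (left-continuous inverse of the edf)."""
    s = sorted(ys)
    return s[min(len(s) - 1, max(0, math.ceil(len(s) * t) - 1))]

random.seed(3)
n, tau = 20000, 0.5
y = [random.gauss(0, 1) for _ in range(n)]

h = n ** (-1 / 3)                       # h_n = O(n^{-1/3}); constant 1 for illustration
s_hat = (quantile(y, tau + h) - quantile(y, tau - h)) / (2 * h)
s_hat = max(0.0, s_hat)                 # the 'prudent' truncation at zero

s_true = math.sqrt(2 * math.pi)         # 1/f(F^{-1}(1/2)) = sqrt(2*pi) for N(0,1)
print(s_hat, s_true)                    # the two should be close
```

At the median of the standard normal the true sparsity is \sqrt{2\pi} \approx 2.507, and the difference quotient recovers it to within a few percent at this sample size.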

Rank Based Inference for Quantile Regression

Ranks play a fundamental dual role in QR inference:

- Classical rank tests for the p-sample problem extend to regression.
- Rank tests play the role of Rao (score) tests for QR.

Two-Sample Location-Shift Model

    X_1, ..., X_n \sim F(x)             (Controls)
    Y_1, ..., Y_m \sim F(x - \theta)    (Treatments)

Hypothesis: H_0: \theta = 0 versus H_1: \theta > 0.

In the Gaussian model, F = \Phi,

    T = (\bar Y_m - \bar X_n)/\sqrt{n^{-1} + m^{-1}}

yields the UMP test, with critical region \{T > \Phi^{-1}(1 - \alpha)\}.

Wilcoxon-Mann-Whitney Rank Test

Mann-Whitney form:

    S = \sum_{i=1}^n \sum_{j=1}^m I(Y_j > X_i).

Heuristic: if treatment responses are larger than controls for most pairs (i, j), then H_0 should be rejected.

Wilcoxon form: set (R_1, ..., R_{n+m}) = Rank(Y_1, ..., Y_m, X_1, ..., X_n), and

    W = \sum_{j=1}^m R_j.

Proposition: S = W - m(m + 1)/2, so the Wilcoxon and Mann-Whitney tests are equivalent.
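The proposition is immediate to confirm numerically. A quick sketch with arbitrary simulated data (continuous distributions, so ties occur with probability zero):

```python
import random

random.seed(0)
n, m = 12, 9
xs = [random.gauss(0.0, 1.0) for _ in range(n)]     # controls
ys = [random.gauss(0.5, 1.0) for _ in range(m)]     # treatments

# Mann-Whitney form: count pairs with Y_j > X_i.
S = sum(1 for x in xs for y in ys if y > x)

# Wilcoxon form: sum of the treatment ranks in the pooled sample.
pooled = sorted(xs + ys)
rank = {v: r + 1 for r, v in enumerate(pooled)}
W = sum(rank[y] for y in ys)

print(S == W - m * (m + 1) // 2)   # True: S = W - m(m+1)/2
```

The identity holds because the rank of each Y_j is one plus the number of X's below it plus the number of other Y's below it, and summing the latter over j gives m(m - 1)/2.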

Pros and Cons of the Transformation to Ranks

Thought one:
- Gain: the null distribution is independent of F.
- Loss: cardinal information about the data.

Thought two:
- Gain: the Student t-test has quite accurate size provided \sigma^2(F) < \infty.
- Loss: the Student t-test uses cardinal information badly for long-tailed F.

Asymptotic Relative Efficiency of Wilcoxon versus Student t-test

Pitman (local) alternatives: H_n: \theta_n = \theta_0/\sqrt{n}, under which

    (t-test)^2 \leadsto \chi^2_1(\theta_0^2/\sigma^2(F)),
    (Wilcoxon)^2 \leadsto \chi^2_1(12 \theta_0^2 (\int f^2)^2),

so

    ARE(W, t, F) = 12 \sigma^2(F) [\int f^2(x) dx]^2.

    F:     Normal   Uniform   Logistic   DExp   LogN   t_2
    ARE:   0.955    1.0       1.1        1.5    7.35   \infty

Theorem (Hodges-Lehmann): For all F, ARE(W, t, F) \geq 0.864.
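The ARE formula is easy to evaluate numerically for the table's entries. A sketch using a midpoint Riemann sum; the truncation limits and grid size are arbitrary choices made so the tail contribution is negligible:

```python
import math

def are_wilcoxon_t(f, var, lo, hi, k=200000):
    """ARE(W, t, F) = 12 * var(F) * (integral of f^2)^2, by midpoint rule on [lo, hi]."""
    dx = (hi - lo) / k
    int_f2 = sum(f(lo + (i + 0.5) * dx) ** 2 for i in range(k)) * dx
    return 12 * var * int_f2 ** 2

normal = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
laplace = lambda x: 0.5 * math.exp(-abs(x))          # 'DExp' in the table, variance 2

are_normal = are_wilcoxon_t(normal, 1.0, -10, 10)
are_dexp = are_wilcoxon_t(laplace, 2.0, -20, 20)
print(are_normal, are_dexp)   # ≈ 0.9549 (= 3/pi) and 1.5
```

For the Gaussian, \int \phi^2 = 1/(2\sqrt{\pi}), so the ARE is exactly 3/\pi \approx 0.955; for the double exponential, \int f^2 = 1/4 and \sigma^2 = 2 give exactly 3/2, matching the table.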

Hájek's Rankscore Generating Functions

Let Y_1, ..., Y_n be a random sample from an absolutely continuous df F with associated ranks R_1, ..., R_n. Hájek's rank generating functions are

    \hat a_i(t) = 1           if t \leq (R_i - 1)/n,
    \hat a_i(t) = R_i - tn    if (R_i - 1)/n \leq t \leq R_i/n,
    \hat a_i(t) = 0           if R_i/n \leq t.

[Figure: the n piecewise-linear rankscore functions \hat a_i plotted against \tau on [0, 1].]
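A direct implementation of these functions follows; a useful sanity check is the identity \sum_i \hat a_i(t) = n(1 - t) for every t, which is exactly the constraint the regression dual generalizes. This is a sketch not in the slides (ties are ignored, since F is continuous):

```python
def hajek_rankscores(ys):
    """Return Hajek's rankscore functions a_i(t) as closures over the ranks."""
    n = len(ys)
    order = sorted(range(n), key=lambda i: ys[i])
    R = [0] * n
    for k, i in enumerate(order):
        R[i] = k + 1                     # R[i] = rank of ys[i], in {1, ..., n}

    def make(Ri):
        def a_i(t):
            if t <= (Ri - 1) / n:
                return 1.0
            if t >= Ri / n:
                return 0.0
            return Ri - t * n            # linear descent on [(Ri-1)/n, Ri/n]
        return a_i

    return [make(R[i]) for i in range(n)]

ys = [2.3, -0.7, 1.1, 0.4, 5.0]
fns = hajek_rankscores(ys)
t = 0.37
print(sum(f(t) for f in fns), len(ys) * (1 - t))   # both 3.15
```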

Linear Rank Statistics Asymptotics

Theorem (Hájek (1965)): Let c_n = (c_{1n}, ..., c_{nn})' be a triangular array of real numbers such that

    \max_i (c_{in} - \bar c_n)^2 / \sum_{i=1}^n (c_{in} - \bar c_n)^2 \to 0.

Then

    Z_n(t) = (\sum_{i=1}^n (c_{in} - \bar c_n)^2)^{-1/2} \sum_{j=1}^n (c_{jn} - \bar c_n) \hat a_j(t) \equiv \sum_{j=1}^n w_j \hat a_j(t)

converges weakly to a Brownian bridge, i.e., a Gaussian process on [0, 1] with mean zero and covariance function Cov(Z(s), Z(t)) = s \wedge t - st.

Some Asymptotic Heuristics

The Hájek functions are approximately indicator functions:

    \hat a_i(t) \approx I(Y_i > F^{-1}(t)) = I(F(Y_i) > t).

Since F(Y_i) \sim U[0, 1], linear rank statistics may be represented as

    \int_0^1 \hat a_i(t) d\varphi(t) \approx \int_0^1 I(F(Y_i) > t) d\varphi(t) = \varphi(F(Y_i)) - \varphi(0),

so

    \int_0^1 Z_n(t) d\varphi(t) = \sum w_i \int_0^1 \hat a_i(t) d\varphi(t) = \sum w_i \varphi(F(Y_i)) + o_p(1),

which is asymptotically distribution free, i.e., independent of F.

Duality of Ranks and Quantiles

Quantiles may be defined as

    \hat\xi(\tau) = argmin_\xi \sum \rho_\tau(y_i - \xi),  where  \rho_\tau(u) = u(\tau - I(u < 0)).

This can be formulated as a linear program whose dual solution

    \hat a(\tau) = argmax \{y'a \mid 1_n'a = (1 - \tau)n, a \in [0, 1]^n\}

generates the Hájek rankscore functions.

Reference: Gutenbrunner and Jurečková (1992).

Regression Quantiles and Rank Scores

Primal (regression quantiles):

    \hat\beta_n(\tau) = argmin_{b \in R^p} \sum \rho_\tau(y_i - x_i'b)

- x'\hat\beta_n(\tau) estimates Q_Y(\tau | x).
- Piecewise constant on [0, 1].
- For X = 1_n, \hat\beta_n(\tau) = \hat F_n^{-1}(\tau).

Dual (regression rankscores):

    \hat a_n(\tau) = argmax_{a \in [0,1]^n} \{y'a \mid X'a = (1 - \tau)X'1_n\}

- \{\hat a_i(\tau)\}_{i=1}^n are the regression rankscore functions.
- Piecewise linear on [0, 1].
- For X = 1_n, the \hat a_i(\tau) are the Hájek rank generating functions.

Regression Rankscore Residuals

The Wilcoxon rankscores,

    \tilde u_i = \int_0^1 \hat a_i(t) dt,

play the role of quantile regression residuals. For each observation y_i they answer the question: on which quantile does y_i lie? The \tilde u_i satisfy an orthogonality restriction:

    X'\tilde u = X' \int_0^1 \hat a(t) dt = n \bar x \int_0^1 (1 - t) dt = n \bar x/2.

This is something like the X'\hat u = 0 condition for OLS. Note that if X is centered, then \bar x = (1, 0, ..., 0)'. The \tilde u vector is approximately uniformly distributed; in the one-sample setting \tilde u_i = (R_i - 1/2)/n, so they are "obviously" too uniform.
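In the one-sample setting the integral can be done exactly: the flat segment of \hat a_i contributes (R_i - 1)/n and the linear descent a triangle of area 1/(2n), giving (R_i - 1/2)/n. A small sketch with a numerical cross-check (the data are arbitrary):

```python
def rankscore_residuals(ys):
    """One-sample Wilcoxon rankscores: u_i = integral_0^1 a_i(t) dt = (R_i - 1/2)/n."""
    n = len(ys)
    order = sorted(range(n), key=lambda i: ys[i])
    R = [0] * n
    for k, i in enumerate(order):
        R[i] = k + 1
    return [(R[i] - 0.5) / n for i in range(n)]

def a_fn(Ri, n):
    """Hajek rankscore function for rank Ri."""
    return lambda t: 1.0 if t <= (Ri - 1) / n else (0.0 if t >= Ri / n else Ri - t * n)

ys = [0.9, -1.2, 3.3, 0.1]
u = rankscore_residuals(ys)          # y_1 = 0.9 has rank 3 in this sample

k = 100000                           # midpoint Riemann sum of a_3 over [0, 1]
num = sum(a_fn(3, 4)((j + 0.5) / k) for j in range(k)) / k
print(u[0], num)                     # both ≈ (3 - 0.5)/4 = 0.625
```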

Regression Rank Tests

Consider the model

    Y = X\beta + Z\gamma + u,

with H_0: \gamma = 0 versus H_n: \gamma = \gamma_0/\sqrt{n}. Given the regression rankscore process for the restricted model,

    \hat a_n(\tau) = argmax \{Y'a \mid X'a = (1 - \tau)X'1_n\},

a test of H_0 is based on the linear rank statistics,

    \hat b_n = \int_0^1 \hat a_n(t) d\varphi(t).

Choice of the score function \varphi permits tests of location, scale or (potentially) other effects.

Regression Rankscore Tests

Theorem (Gutenbrunner, Jurečková, Koenker and Portnoy): Under H_n and regularity conditions, the test statistic T_n = S_n' Q_n^{-1} S_n, where

    S_n = n^{-1/2}(Z - \hat Z)'\hat b_n,
    \hat Z = X(X'X)^{-1}X'Z,
    Q_n = n^{-1}(Z - \hat Z)'(Z - \hat Z),

satisfies T_n \leadsto \chi^2_q(\eta), with noncentrality parameter

    \eta^2 = \omega^2(\varphi, F) \gamma_0' Q \gamma_0,    \omega(\varphi, F) = \int_0^1 f(F^{-1}(t)) d\varphi(t).

Regression Rankscores for Stackloss Data

[Figure, two slides: regression rankscore functions \hat a_i(\tau) on [0, 1] for each observation of the stackloss data, each panel annotated with the observation's estimated rank. Obs 1: 0.18, Obs 2: 0.02, Obs 3: 0.35, Obs 4: 0.46, Obs 5: 0.2, Obs 6: 0.33, Obs 7: 0.23, Obs 8: 0.02, Obs 9: 0.44, Obs 10: 0.11, Obs 11: 0.24, Obs 12: 0.19, Obs 13: 0.31, Obs 14: 0.18, Obs 15: 0.37, Obs 16: 0.03, Obs 17: 0.23, Obs 18: 0.07, Obs 19: 0.09, Obs 20: 0.3, Obs 21: 0.44.]

Inversion of Rank Tests for Confidence Intervals

For the scalar \gamma case, using the score function

    \varphi_\tau(t) = \tau - I(t < \tau),

we have

    \hat b_{ni} = \int_0^1 \varphi_\tau(t) d\hat a_{ni}(t) = \hat a_{ni}(\tau) - (1 - \tau),

where \bar\varphi = \int_0^1 \varphi_\tau(t) dt = 0 and A^2(\varphi_\tau) = \int_0^1 (\varphi_\tau(t) - \bar\varphi)^2 dt = \tau(1 - \tau). Thus, a test of the hypothesis H_0: \gamma = \xi may be based on \hat a_n from solving

    max \{(y - x_2\xi)'a \mid X_1'a = (1 - \tau)X_1'1, a \in [0, 1]^n\}    (1)

and the fact that

    S_n(\xi) = n^{-1/2} x_2'\hat b_n(\xi) \leadsto N(0, A^2(\varphi_\tau) q_n^2).    (2)

That is, we may compute

    T_n(\xi) = S_n(\xi)/(A(\varphi_\tau) q_n),  with  q_n^2 = n^{-1} x_2'(I - X_1(X_1'X_1)^{-1}X_1')x_2,

and reject H_0 if |T_n(\xi)| > \Phi^{-1}(1 - \alpha/2). Inverting this test, that is, finding the interval of \xi's for which the test fails to reject, is a quite straightforward parametric linear programming problem, and it provides a simple and effective way to do inference on individual quantile regression coefficients. Unlike Wald-type inference it delivers asymmetric intervals. This is the default approach to parametric inference in quantreg for problems of modest sample size.
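The quantreg machinery solves this by parametric programming over the regression dual, but the flavor of the construction is already visible in the one-sample analogue: inverting the binomial (sign) test of H_0: \xi(\tau) = \xi yields a distribution-free order-statistic interval, and it too is generally asymmetric around the point estimate. A hypothetical pure-Python sketch, with no guards for extreme \tau or tiny n:

```python
import math
import random

def quantile_ci(ys, tau, alpha=0.10):
    """Order-statistic CI for the tau-th quantile by inverting the sign test.
    P(Y_(j) <= xi_tau < Y_(k)) = sum_{i=j}^{k-1} C(n,i) tau^i (1-tau)^(n-i)."""
    n = len(ys)
    s = sorted(ys)
    pmf = [math.comb(n, i) * tau**i * (1 - tau)**(n - i) for i in range(n + 1)]
    cdf = [0.0]
    for p_ in pmf:
        cdf.append(cdf[-1] + p_)          # cdf[i] = P(B <= i - 1), B ~ Bin(n, tau)
    j = max(i for i in range(1, n + 1) if cdf[i] <= alpha / 2)       # lower order stat
    k = min(i for i in range(1, n + 1) if 1 - cdf[i] <= alpha / 2)   # upper order stat
    coverage = sum(pmf[j:k])              # exact finite-sample coverage
    return s[j - 1], s[k - 1], coverage

random.seed(5)
y = [random.gauss(0, 1) for _ in range(25)]
lo, hi, cov = quantile_ci(y, 0.5)
print(lo, hi, cov)                        # an asymmetric interval; cov ≈ 0.957 here
```

With n = 25 and \tau = 0.5 the procedure returns [Y_(8), Y_(18)], whose exact coverage 0.9567 slightly exceeds the nominal 0.90, the usual discreteness cost of a distribution-free interval.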

Inference on the Quantile Regression Process

Using the quantile score function \varphi_\tau(t) = \tau - I(t < \tau), we can consider the quantile rankscore process

    T_n(\tau) = S_n(\tau)' Q_n^{-1} S_n(\tau)/(\tau(1 - \tau)),

where

    S_n = n^{-1/2}(X_2 - \hat X_2)'\hat b_n,
    \hat X_2 = X_1(X_1'X_1)^{-1}X_1'X_2,
    Q_n = (X_2 - \hat X_2)'(X_2 - \hat X_2)/n,
    \hat b_n = (\int \varphi(t) d\hat a_{in}(t))_{i=1}^n.

Inference on the Quantile Regression Process

Theorem (Koenker & Machado): Under H_n: \gamma(\tau) = O(1/\sqrt{n}) for \tau \in (0, 1), the process T_n(\tau) converges to a non-central Bessel process of order q = dim(\gamma). Pointwise, T_n(\tau) is non-central \chi^2_q.

Related Wald and LR statistics can be viewed as providing a general apparatus for testing goodness-of-fit for quantile regression models. This approach is closely related to the classical p-dimensional goodness-of-fit tests introduced by Kiefer (1959).

When the null hypotheses under consideration involve unknown nuisance parameters, things become more interesting. In Koenker and Xiao (2001) we consider this "Durbin problem" and show that the elegant approach of Khmaladze (1981) yields practical methods.

Four Concluding Comments about Inference

- Asymptotic inference for quantile regression poses some statistical challenges, since it involves elements of nonparametric density estimation, but this shouldn't be viewed as a major obstacle.
- Classical rank statistics and Hájek's rankscore process are closely linked via Gutenbrunner and Jurečková's regression rankscore process, providing an attractive approach to many inference problems while avoiding density estimation.
- Inference on the quantile regression process can be conducted with the aid of Khmaladze's extension of the Doob-Meyer construction.
- Resampling offers many further lines of development for inference in the quantile regression setting.