Inference based on robust estimators, Part 2

Matias Salibian-Barrera
Department of Statistics, University of British Columbia

ECARES, December 2007

Matias Salibian-Barrera (UBC), Robust inference (2), ECARES, Dec 2007
General approach

Fixed-point equations:
$$\hat\theta_n = g_n(\hat\theta_n)$$
Bootstrap the equations at the full-data estimator:
$$\theta_n^* = g_n^*(\hat\theta_n)$$
- Fast (e.g. weighted mean, weighted least squares)
- Underestimates variability (the weights are not recomputed)
General approach

$$\hat\theta_n = g_n(\hat\theta_n) = g_n(\theta) + \nabla g_n(\theta)\,(\hat\theta_n - \theta) + R_n$$
$$\sqrt{n}\,(\hat\theta_n - \theta) = \left[I - \nabla g_n(\theta)\right]^{-1} \sqrt{n}\,\left(g_n(\theta) - \theta\right) + o_p(1)$$
$$\sqrt{n}\,\left(g_n^*(\hat\theta_n) - \hat\theta_n\right) \approx \sqrt{n}\,\left(g_n^*(\theta) - \theta\right) - \sqrt{n}\,\left(g_n(\theta) - \theta\right)$$
$$\sqrt{n}\,(\hat\theta_n^* - \hat\theta_n) \approx \left[I - \nabla g_n(\theta)\right]^{-1} \sqrt{n}\,\left(g_n^*(\hat\theta_n) - \hat\theta_n\right)$$
$$\sqrt{n}\,(\hat\theta_n^* - \hat\theta_n) \approx \sqrt{n}\,(\hat\theta_n - \theta)$$
This motivates the Robust Bootstrap recalculation
$$\hat\theta_n^{R*} = \hat\theta_n + \left[I - \nabla g_n(\hat\theta_n)\right]^{-1} \left(g_n^*(\hat\theta_n) - \hat\theta_n\right)$$
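The scheme above can be sketched on the simplest fixed-point case, a weighted-mean M-estimate of location. This is a minimal illustration, not the implementation behind the slides: the Huber weight function, its tuning constant, and the known (unit) residual scale are all assumptions.

```python
# Sketch: fixed-point bootstrap for a weighted-mean M-estimate of location.
# Weights are computed once at the full-data estimate and kept fixed in each
# bootstrap recalculation: this is the source of the speed, and of the
# underestimated variability before the linearization correction.
import random

def huber_w(r, c=1.345):
    # Huber weight (assumed choice); unit residual scale is assumed
    return 1.0 if abs(r) <= c else c / abs(r)

def m_location(x, n_iter=100):
    th = sum(x) / len(x)
    for _ in range(n_iter):                       # iterate th = g_n(th)
        w = [huber_w(v - th) for v in x]
        th = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
    return th, [huber_w(v - th) for v in x]

def fixed_point_boot(x, B=200, seed=0):
    rng = random.Random(seed)
    th, w = m_location(x)
    reps = []
    for _ in range(B):
        idx = [rng.randrange(len(x)) for _ in x]  # resample (x_i, w_i) pairs
        reps.append(sum(w[i] * x[i] for i in idx) / sum(w[i] for i in idx))
    return th, reps

x = [0.1, -0.3, 0.2, 0.0, 0.4, -0.1, 12.0]        # one gross outlier
th, reps = fixed_point_boot(x)
print(round(th, 2))
```

Without the $[I - \nabla g_n(\hat\theta_n)]^{-1}$ correction, the fixed-weight recalculations in `reps` understate the true sampling variability.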
Applications

Linear regression
- Standard errors (S-B and Zamar, 2002)
- Tests of hypotheses (S-B, 2005)
- Model selection (S-B and van Aelst, 2007)

Multivariate location / scatter
- PCA (S-B, van Aelst, and Willems, 2006)
- Discriminant analysis (S-B, van Aelst, and Willems, 2007)
Model selection

Linear regression: $(y_1, x_1), \ldots, (y_n, x_n)$

Let $\alpha$ denote a subset of $p_\alpha$ indices from $\{1, 2, \ldots, p\}$:
$$y_i = x_{\alpha i}' \beta_\alpha + \sigma_\alpha\, \epsilon_{\alpha i}, \qquad i = 1, \ldots, n$$
All models $\alpha \in \mathcal{A}$ are submodels of a full model; $\hat\sigma_n$ is the S-scale estimate of the full model.

For each model $\alpha \in \mathcal{A}$, the regression estimator $\hat\beta_{\alpha,n}$ solves
$$\frac{1}{n} \sum_{i=1}^n \rho_1'\!\left(\frac{y_i - x_{\alpha i}'\hat\beta_{\alpha,n}}{\hat\sigma_n}\right) x_{\alpha i} = 0.$$
Expected prediction error (conditional on the observed data):
$$M_{pe}(\alpha) = \frac{\sigma^2}{n}\, E\left[\left. \sum_{i=1}^n \rho\!\left(\frac{z_i - x_{\alpha i}'\hat\beta_\alpha}{\sigma}\right) \right|\, y, X \right],$$
where $z = (z_1, \ldots, z_n)'$ are future responses at $X$, independent of $y$.
Goodness of fit:
$$\frac{\sigma^2}{n}\, E\left[\sum_{i=1}^n \rho\!\left(\frac{y_i - x_{\alpha i}'\hat\beta_\alpha}{\sigma}\right)\right].$$
Parsimonious models are preferred. Müller and Welsh (2005):
$$M_{ppe}(\alpha) = \frac{\sigma^2}{n}\left\{ E\left[\sum_{i=1}^n \rho\!\left(\frac{y_i - x_{\alpha i}'\hat\beta_\alpha}{\sigma}\right)\right] + \delta(n)\, p_\alpha \right\} + M_{pe}(\alpha),$$
where $\delta(n) \to \infty$ and $\delta(n)/n \to 0$ (e.g. $\delta(n) = \log(n)$).
Criteria

$$M^{pe}_{m,n}(\alpha) = \frac{\hat\sigma_n^2}{n}\, E^*\left[\left. \sum_{i=1}^n \rho\!\left(\frac{y_i - x_{\alpha i}'\hat\beta^*_{\alpha,n}}{\hat\sigma_n}\right) \right|\, y, X \right],$$
$$M^{ppe}_{m,n}(\alpha) = \frac{\hat\sigma_n^2}{n}\left\{ \sum_{i=1}^n \rho\!\left(\frac{y_i - x_{\alpha i}'\hat\beta_{\alpha,n}}{\hat\sigma_n}\right) + \delta(n)\, p_\alpha \right\} + M^{pe}_{m,n}(\alpha),$$
where $E^*$ is the bootstrap mean. Select $\alpha \in \mathcal{A}$ such that
$$\hat\alpha^{pe}_{m,n} = \arg\min_{\alpha \in \mathcal{A}} M^{pe}_{m,n}(\alpha), \qquad \hat\alpha^{ppe}_{m,n} = \arg\min_{\alpha \in \mathcal{A}} M^{ppe}_{m,n}(\alpha).$$
Let $\mathcal{A}_c \subseteq \mathcal{A}$ be the set of models $\alpha$ such that $\beta_\alpha$ contains all non-zero components of $\beta$. In what follows we assume that $\mathcal{A}_c$ is not empty. The smallest model in $\mathcal{A}_c$ will be called the true model $\alpha_0$.
Theorem. Assume that
(A1) $n^{-1} \sum_i x_{\alpha i} x_{\alpha i}' \to \Gamma_\alpha > 0$, $n^{-1} \sum_i \omega_{\alpha i}\, x_{\alpha i} x_{\alpha i}' \to \Gamma^\omega_\alpha > 0$, and $n^{-1} \sum_i \|x_{\alpha i}\|^4 < \infty$;
(A2) $\delta(n) = o(n/m)$ and $m = o(n)$;
(A3) $\sum_{i=1}^n \rho_1'\!\left(r_i(\hat\beta_{\alpha,n})/\hat\sigma_n\right) x_{\alpha i} = 0$;
(A4) $\hat\sigma_n - \sigma = O_p(1/\sqrt{n})$ and $\hat\beta_{\alpha,n} - \beta_\alpha = O_p(1/\sqrt{n})$;
(A5) $\rho_1'$ and $\rho_1''$ are uniformly continuous, $\mathrm{var}(\rho_1(\epsilon_{\alpha_0})) < \infty$, $\mathrm{var}(\rho_1'(\epsilon_{\alpha_0})) < \infty$ and $E(\rho_1''(\epsilon_{\alpha_0})) > 0$; and
(A6) for any $\alpha \notin \mathcal{A}_c$, $\mathrm{var}(\rho_1(\epsilon_\alpha)) < \infty$ and, with probability one,
$$\liminf_n \frac{1}{n} \sum_{i=1}^n \rho_1\!\left(r_i(\hat\beta_\alpha)/\hat\sigma_n\right) > \lim_n \frac{1}{n} \sum_{i=1}^n \rho_1\!\left(r_i(\hat\beta_{\alpha_0,n})/\hat\sigma_n\right).$$
Then
$$\lim_n P(\hat\alpha^{ppe}_{m,n} = \alpha_0) = \lim_n P(\hat\alpha^{pe}_{m,n} = \alpha_0) = 1.$$
Example

Los Angeles Ozone Pollution Data
- 366 daily observations on 9 variables
- The full model includes all second-order interactions: $p = 45$
- Computational complexity
Example

Backward elimination
- Start from the full model
- Select the size-$(k-1)$ model with the best selection criterion
- Iterate
Reduces the search from $2^p$ to $p(p+1)/2$ models
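The backward search can be sketched generically; the `criterion` function below is a toy stand-in for the bootstrap criteria $M^{pe}_{m,n}$ and $M^{ppe}_{m,n}$.

```python
# Sketch of greedy backward elimination with a pluggable model criterion.
# At each step, drop the variable whose removal gives the smallest criterion.
def backward_elimination(variables, criterion):
    """Return the best model of each size (largest to smallest) and
    the number of models whose criterion was evaluated."""
    current = list(variables)
    path = [tuple(current)]
    evaluated = 1  # the full model itself
    while len(current) > 1:
        candidates = [[v for v in current if v != drop] for drop in current]
        evaluated += len(candidates)
        current = min(candidates, key=criterion)
        path.append(tuple(current))
    return path, evaluated

# Toy criterion: prefer low-indexed variables (stands in for M^pe).
path, n_models = backward_elimination(range(6), criterion=sum)
print(n_models)  # → 21 = p(p+1)/2 for p = 6
```

For $p = 6$ variables the greedy path evaluates $1 + 6 + 5 + 4 + 3 + 2 = 21 = p(p+1)/2$ models instead of $2^6 = 64$.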
Using $\min_{\alpha \in \mathcal{A}} M^{pe}_{m,n}(\alpha)$: $p = 6$
Using $\min_{\alpha \in \mathcal{A}} M^{ppe}_{m,n}(\alpha)$: $p = 7$
Full model: $p = 45$
Prediction error

5-fold CV trimmed ($\gamma$) prediction error estimators:

           p = 10          p = 7           Full model (p = 45)
  γ      TMSE      ρ     TMSE      ρ     TMSE      ρ
  0.05  11.67   5.36    10.45   5.03    10.78   5.03
  0.10   9.18            8.35            8.33

(the first two columns correspond to the selected models $\hat\alpha^{pe}_{m,n}$ and $\hat\alpha^{ppe}_{m,n}$)
Diagnostic plots

[Figure: standardized residuals versus fitted values for the three fitted models.]
Average time (CPU seconds) to bootstrap an MM-regression estimator 1000 times on samples of size 200:

   p     FRB      CB
  25       8    1955
  35      28    4300
  45      35   10700

A full model selection analysis on the Ozone data set ($p = 45$) is reduced from 15 days (360 hours) to 4 hours.
Discriminant Analysis
(Randles et al., 1978; Hawkins and McLachlan, 1997; Croux and Dehon, 2001; Hubert and Van Driessen, 2004)

Populations $\pi_1$, $\pi_2$ with parameters $\mu_1$, $\mu_2$, $\Sigma_1$ and $\Sigma_2$.

If $\Sigma_1 \neq \Sigma_2$ we classify $x$ into $\pi_1$ if $d^Q_1(x) > d^Q_2(x)$, where
$$d^Q_j(x) = -\tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2} (x - \mu_j)' \Sigma_j^{-1} (x - \mu_j).$$
If $\Sigma_1 = \Sigma_2 = \Sigma$ we classify $x$ into $\pi_1$ if $d^L_1(x) > d^L_2(x)$, where
$$d^L_j(x) = \mu_j' \Sigma^{-1} x - \tfrac{1}{2}\, \mu_j' \Sigma^{-1} \mu_j.$$
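The plug-in rules accept any location and scatter estimates; the sketch below uses numpy with classical values standing in for the S-estimates $\hat\mu_j$, $\hat\Sigma_j$.

```python
# Sketch: quadratic and linear discriminant scores (numpy only).
import numpy as np

def d_quad(x, mu, Sigma):
    """Quadratic discriminant score d^Q_j(x)."""
    xc = x - mu
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * logdet - 0.5 * xc @ np.linalg.solve(Sigma, xc)

def d_lin(x, mu, Sigma):
    """Linear discriminant score d^L_j(x) (common Sigma)."""
    w = np.linalg.solve(Sigma, mu)
    return w @ x - 0.5 * mu @ w

# With equal spherical covariances the linear rule picks the closer mean.
mu1, mu2, S = np.zeros(2), np.ones(2), np.eye(2)
x = np.array([0.1, 0.2])
print(d_lin(x, mu1, S) > d_lin(x, mu2, S))  # True: x is closer to mu1
```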
Short intro to robust multivariate estimators

Let $X_1, \ldots, X_n \in \mathbb{R}^p$ with elliptical density
$$f(x, \mu, \Sigma) \propto h\left(d(x, \mu, \Sigma)\right),$$
where
$$d(x, \mu, \Sigma) = (x - \mu)' \Sigma^{-1} (x - \mu), \qquad d_i = d(x_i, \mu, \Sigma) = (x_i - \mu)' \Sigma^{-1} (x_i - \mu).$$
Short intro to robust multivariate estimators

The MLE estimators solve
$$\sum_{i=1}^n w(d_i)\,(x_i - \hat\mu) = 0$$
$$\frac{1}{n} \sum_{i=1}^n w(d_i)\,(x_i - \hat\mu)(x_i - \hat\mu)' = \hat\Sigma$$
where $w(d) = -2\, h'(d)/h(d)$.
- Multivariate normal: $w(d) = 1$
- Multivariate $T_\eta$: $w(d) = (p + \eta)/(d + \eta)$
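The weighted estimating equations can be solved by simple iterative reweighting; the sketch below does this with the multivariate-$t$ weight (the degrees of freedom $\eta = 5$ and the fixed iteration count are arbitrary choices, not from the slides).

```python
# Sketch: iteratively reweighted estimation of (mu, Sigma) using the
# multivariate-t weight w(d) = (p + eta) / (d + eta).
import numpy as np

def t_mle(X, eta=5.0, n_iter=50):
    n, p = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X.T)
    for _ in range(n_iter):
        xc = X - mu
        d = np.sum(xc @ np.linalg.inv(Sigma) * xc, axis=1)  # Mahalanobis d_i
        w = (p + eta) / (d + eta)
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        xc = X - mu
        Sigma = (w[:, None] * xc).T @ xc / n
    return mu, Sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
mu, Sigma = t_mle(X)
print(np.round(mu, 2))
```

On clean normal data the weights are nearly constant and the result stays close to the sample mean and covariance; heavy downweighting only kicks in for points with large $d_i$.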
M-estimators

M-estimators (Maronna, 1976):
$$\sum_{i=1}^n w_1(d_i)\,(x_i - \hat\mu) = 0$$
$$\frac{1}{n} \sum_{i=1}^n w_2(d_i)\,(x_i - \hat\mu)(x_i - \hat\mu)' = \hat\Sigma$$

S-estimators (Davies, 1987):
$$\min_{\mu, \Sigma}\; \sigma\left(d(x, \mu, \Sigma)\right) \quad \text{subject to } |\Sigma| = 1$$

Stahel-Donoho (Stahel, 1981; Donoho, 1982): computational complexity for $p > 2$.
MVE (Minimum Volume Ellipsoid) (Rousseeuw):
$$\sigma(d_1, \ldots, d_n) = \mathrm{median}(d_1, \ldots, d_n)$$
MCD (Minimum Covariance Determinant) (Rousseeuw):
$$\sigma(d_1, \ldots, d_n) = \sum_{i=1}^h d_{(i)}$$
S-estimators (Davies, 1987) solve
$$\frac{1}{n} \sum_{i=1}^n \rho\left(d_i/\sigma(d)\right) = b$$
$$\sum_{i=1}^n w(d_i/\sigma)\,(x_i - \hat\mu) = 0$$
$$\sum_{i=1}^n w(d_i/\sigma)\,(x_i - \hat\mu)(x_i - \hat\mu)' = c\, \hat\Sigma$$
with $w(d) = \rho'(d)$.
MM-estimators

M-estimators with auxiliary scale (Tatsuoka and Tyler, 2000; see also Lopuhaä, 1992).
Let $(\tilde\mu_n, \tilde\Sigma_n)$ be S-estimators with associated scale $\tilde\sigma_n = |\tilde\Sigma_n|^{1/(2p)}$. Then
$$\min_{\mu, \Gamma}\; \sum_{i=1}^n \rho_1\left((x_i - \mu)' \Gamma^{-1} (x_i - \mu)/\tilde\sigma_n\right) \quad \text{subject to } |\Gamma| = 1$$
Back to Discriminant Analysis

$\hat d^Q_j$ or $\hat d^L_j$: plug-in strategy based on S-multivariate estimators $\hat\mu_j$ and $\hat\Sigma_j$:
$$\hat d^Q_j(x) = -\tfrac{1}{2} \log |\hat\Sigma_j| - \tfrac{1}{2} (x - \hat\mu_j)' \hat\Sigma_j^{-1} (x - \hat\mu_j)$$
$$\hat d^L_j(x) = \hat\mu_j' \hat\Sigma^{-1} x - \tfrac{1}{2}\, \hat\mu_j' \hat\Sigma^{-1} \hat\mu_j, \qquad j = 1, 2$$
When $\Sigma_1 = \Sigma_2$

Pool: $\hat\Sigma = \left(n_1 \hat\Sigma_1 + n_2 \hat\Sigma_2\right)/(n_1 + n_2)$

2-sample S-estimators (He and Fung, 2002):
$$\frac{1}{n_1} \sum_{i=1}^{n_1} \rho\left(d^{(1)}_i\right) + \frac{1}{n_2} \sum_{j=1}^{n_2} \rho\left(d^{(2)}_j\right) = b$$
Estimating misclassification error
- Resubstitution: evaluate the rule on the training data ($\hat e_r$)
- Cross-validation: computing time
- Split into a training and a validation set (Hubert and Van Driessen, 2004)
- Bootstrap: re-compute the rule and evaluate it on the points outside the bootstrap sample ($\hat e_B$) (Efron, 1986):
$$\hat e_{0.632} = 0.632\, \hat e_B + 0.368\, \hat e_r$$
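The .632 combination can be sketched for any classifier; the nearest-class-mean `fit` below is a hypothetical placeholder, not the S-estimator-based rule of the slides.

```python
# Sketch of the .632 bootstrap misclassification estimate.
# `fit` returns a predict function; here a toy nearest-class-mean rule
# on 1-d data stands in for the discriminant rule.
import random

def fit(X, y):
    means = {c: sum(x for x, cc in zip(X, y) if cc == c) / y.count(c)
             for c in set(y)}
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

def err_632(X, y, fit, B=30, seed=0):
    rng = random.Random(seed)
    n = len(X)
    predict = fit(X, y)
    e_r = sum(predict(x) != c for x, c in zip(X, y)) / n   # resubstitution
    oob_errs = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        out = [i for i in range(n) if i not in set(idx)]   # out-of-bag points
        if not out:
            continue
        p = fit([X[i] for i in idx], [y[i] for i in idx])
        oob_errs.append(sum(p(X[i]) != y[i] for i in out) / len(out))
    e_b = sum(oob_errs) / len(oob_errs)
    return 0.632 * e_b + 0.368 * e_r

X = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
y = [0, 0, 0, 1, 1, 1]
e = err_632(X, y, fit)
print(round(e, 3))
```

The out-of-bag evaluation counters the optimism of resubstitution; the 0.632/0.368 mix balances the two.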
Simulation Study

$p = 3$

       π_1                                 π_2
A:  50 N(0, I)                          50 N(1, I)
B:  40 N(0, I) + 10 N(5, 0.25² I)       40 N(1, I) + 10 N(−4, 0.25² I)
C:  80 N(0, I) + 20 N(5, 0.25² I)        8 N(1, I) +  2 N(−4, 0.25² I)
D:  16 N(0, I) +  4 N(0, 25² I)         16 N(1, I) +  4 N(1, 25² I)
E:  58 N(0, I) + 12 N(5, 0.25² I)       25 N(1, 4I) +  5 N(−10, 0.25² I)
Root mean squared error of the misclassification error estimates:

                               A      B      C      D      E    time
ê_0.632 (FRB)        B=10   0.045  0.047  0.067  0.076  0.094   0.03
                     B=30   0.044  0.045  0.065  0.072  0.093   0.06
                     B=100  0.044  0.045  0.064  0.070  0.093   0.16
ê_0.632 (Classical)  B=10   0.046  0.055  0.065  0.080  0.091   1.37
                     B=30   0.045  0.051  0.062  0.076  0.089   4.12
                     B=100  0.045  0.049  0.062  0.074  0.088  13.72
ê_CV                 k=5    0.048  0.053  0.067  0.092  0.094   0.64
                     k=10   0.048  0.051  0.068  0.086  0.096   1.34
                     k=n    0.048  0.048  0.068  0.085  0.097  13.80
ê_resub                     0.053  0.050  0.072  0.093  0.117   0.00
True error rate             0.204  0.205  0.215  0.223  0.290

1000 samples; 50% BP S-multivariate estimators; $p = 3$.
Tests for nested linear hypotheses

Scores-type tests (Markatou et al., 1991)
$$y_i = x_i'\beta + \epsilon_i, \quad i = 1, \ldots, n, \qquad \beta = (\beta_1', \beta_2')'$$
$$H_0: \beta_2 = 0 \quad \text{versus} \quad H_a: \beta_2 \neq 0$$
$$W_n^2 = n^{-1}\, S_n(\hat\beta^{(0)}_n)'\, \hat U^{-1}\, S_n(\hat\beta^{(0)}_n)$$
$$S_n(\hat\beta^{(0)}_n) = \sum_{i=1}^n \rho_1'\!\left((y_i - x_i^{(1)\prime} \hat\beta^{(0)}_n)/\hat\sigma_n\right) x_i^{(2)}$$
Scores-type tests

where
$$\hat U = \hat Q_{22} - \hat M_{21} \hat M_{11}^{-1} \hat Q_{12} - \hat Q_{21} \hat M_{11}^{-1} \hat M_{12} + \hat M_{21} \hat M_{11}^{-1} \hat Q_{11} \hat M_{11}^{-1} \hat M_{12},$$
and
$$M = E[\rho_1''(r)\, x x'] = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}, \qquad Q = E[\rho_1'(r)^2\, x x'] = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}.$$
Example with asymmetric outliers

Empirical distribution of the test statistic: 1,000 random samples from
$$y_i = \beta_0 + \beta' x_i + \epsilon_i, \quad i = 1, \ldots, 100$$
$$\beta_0 = 1, \qquad \beta = (1, 1, 0, 0, 0, 0)'$$
$$F_e(x) = 0.70\, \Phi(x) + 0.30\, \Phi\!\left((x - 4)/\sqrt{0.2}\right)$$
[Figure: QQ-plot of the $W_n^2$ quantiles of the scores-type test statistic against chi-squared quantiles.]
Bootstrapping test statistics
(Fisher and Hall, 1990; Hall and Wilson, 1991)

Null data: $\tilde y_i = x_i'\hat\beta^{(0)}_n + r^{(a)}_i$
Bootstrap samples under $H_0$: $\tilde y_i^* = x_i'\hat\beta^{(0)}_n + r^{(a)*}_i$
Let $\tilde\beta^{R(0)}_n$ and $\tilde\beta^{R(a)}_n$ be the Robust Bootstrap re-calculations under $H_0$ and $H_a$, respectively.
$$W_n^{2\,R*} = n^{-1}\, S^{R*}_n(\tilde\beta^{R(0)}_n)'\, [\hat U^{R*}]^{-1}\, S^{R*}_n(\tilde\beta^{R(0)}_n),$$
$$S^{R*}_n(\beta) = \sum_{i=1}^n \rho_1'\!\left((\tilde y_i^* - x_i'\beta)/\tilde\sigma^{(a)*}_n\right) x_i^{(2)},$$
$$\hat p^* = \#\left\{ W_n^{2\,R*} > W_n^2 \right\} / B.$$
Example
$$y = \beta_0 + \beta' x + e, \qquad F_e(u) = 0.8\, \Phi(u) + 0.20\, \Phi\!\left((u - 5)/0.2\right)$$
$$\beta_j = 1,\; j = 0, \ldots, 20; \qquad \beta_j = 0,\; j = 21, \ldots, 40$$
$$n = 5000, \qquad x \sim N(0, I)$$
[Figure: standardized residuals against observation index, $n = 5000$.]
Outliers are well detected.
Example

$H_0: \beta_j = 0,\; j = 21, \ldots, 40$ versus $H_a: \beta_j \neq 0$ for some $j = 21, \ldots, 40$

> library(robustbase)
> m0 <- lmrob(y ~ x0, x=TRUE, y=TRUE)
> ma <- lmrob(y ~ xa, x=TRUE, y=TRUE)
> st <- scores.test(m0, ma)
> st$a.p   # Asymptotic chi^2 approximation
[1] 0.02
> st$b.p   # Robust Bootstrap approximation
[1] 0.44
Simulation results

    n   p_0   p_a     ε    α̂_A   α̂_RB
   20    2     4    0.00   0.12   0.06
                    0.10   0.24   0.06
                    0.20   0.32   0.06
  200    5    10    0.00   0.05   0.05
                    0.10   0.17   0.05
                    0.20   0.39   0.05
 5000   10    20    0.00   0.05   0.05
                    0.10   0.19   0.05
                    0.20   0.47   0.04
        20    40    0.00   0.06   0.06
                    0.10   0.28   0.07
                    0.20   0.67   0.07
Cleaning the data
- Clean the data and then perform MLE... as if nothing had happened
- Objective versus subjective rules
- Subjective rules are intractable (Relles and Rogers, 1977: Monte Carlo!)
- Objective rules (Dupuis and Hamilton, 2000)
Hard-rejection rules
- Fit a robust estimator
- Calculate a robust estimate of the standard deviation of the residuals, $\hat\sigma$
- Fix a number $c > 0$ and drop any observation whose residual is larger than $c\,\hat\sigma$ in absolute value (typically $2 \le c \le 3$)
- Apply classical methods to the remaining data
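A minimal location version of the recipe, with median/MAD as the robust fit and $c = 2.5$ as an assumed cut-off:

```python
# Sketch of a hard-rejection rule for location: median/MAD robust fit,
# then the classical mean on the retained points.
import statistics

def hard_rejection_mean(x, c=2.5):
    med = statistics.median(x)
    # MAD rescaled to estimate sigma at the normal model
    mad = statistics.median([abs(v - med) for v in x]) / 0.6745
    kept = [v for v in x if abs(v - med) <= c * mad]
    return statistics.mean(kept), len(x) - len(kept)

data = [9.8, 10.1, 10.0, 9.9, 10.2, 50.0]  # one gross outlier
est, dropped = hard_rejection_mean(data)
print(round(est, 2), dropped)  # → 10.0 1
```

As the Monte Carlo table that follows shows, naive standard errors computed on the cleaned sample understate the true variability of this two-step procedure.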
Monte Carlo: 100,000 samples

           n    β     HRR estimates    Monte Carlo estimate
 p = 2    20   β_0    0.205 (0.046)    0.256
               β_1    0.227 (0.051)    0.283
          50   β_0    0.133 (0.017)    0.152
               β_1    0.138 (0.017)    0.156
 p = 4    20   β_0    0.182 (0.057)    0.322
               β_1    0.164 (0.051)    0.410
               β_2    0.173 (0.054)    0.478
               β_3    0.177 (0.056)    0.295
          50   β_0    0.135 (0.018)    0.159
               β_1    0.144 (0.019)    0.170
               β_2    0.145 (0.019)    0.171
               β_3    0.132 (0.018)    0.157
Robust confidence intervals
Adrover, S-B, Zamar (2004)
$$\mathcal{H}_\epsilon = \{ F = (1 - \epsilon)\, F_{\mu,\sigma} + \epsilon\, H \}, \qquad F_{\mu,\sigma}(u) = F_0\!\left((u - \mu)/\sigma\right)$$
$(L_n, U_n)$ is globally robust if:
1. Stable:
$$\liminf_n\; \inf_{F \in \mathcal{H}_\epsilon} P_F(L_n < \theta < U_n) \ge 1 - \alpha$$
2. Informative:
$$\limsup_n\; \sup_{F \in \mathcal{H}_\epsilon} (U_n - L_n) < \infty$$
Simple examples that do not work
$$\bar X_n \pm t_{\alpha/2}(n-1)\, S_n/\sqrt{n}, \qquad S_n^2 = \frac{1}{n-1} \sum_{i=1}^n \left(X_i - \bar X_n\right)^2$$
Consider $F_{x_0} = (1 - \epsilon) F_0 + \epsilon\, \delta_{x_0}$. Then $\bar X_n \to \epsilon\, x_0 > 0$ almost surely, and
$$\lim_n\; \inf_{F \in \mathcal{H}_\epsilon} P_F(L_n < 0 < U_n) \le \lim_n P_{F_{x_0}}(L_n < 0 < U_n) = 0$$
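A short seeded Monte Carlo illustrates the instability: under 10% point contamination at $x_0 = 5$ the classical interval for $\mu = 0$ has coverage near zero. The normal quantile 1.96 replaces the $t$ quantile for this large $n$; the sample sizes and replication counts are arbitrary.

```python
# Sketch: coverage of the classical t-type interval for mu = 0 under
# 10% point contamination at x0 = 5.
import random
import statistics

def t_interval_covers_zero(n, eps, x0, rng):
    x = [x0 if rng.random() < eps else rng.gauss(0, 1) for _ in range(n)]
    m, s = statistics.mean(x), statistics.stdev(x)
    half = 1.96 * s / n ** 0.5          # normal quantile for large n
    return m - half < 0 < m + half

rng = random.Random(1)
cov = sum(t_interval_covers_zero(500, 0.10, 5.0, rng) for _ in range(200)) / 200
print(cov)  # far below the nominal 0.95
```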
Now consider $F_{x_0} = (1 - \epsilon) F_0 + \epsilon\, [\delta_{x_0} + \delta_{-x_0}]/2$. Then
$$\limsup_n\; \sup_{F \in \mathcal{H}_\epsilon} (U_n - L_n) \ge \limsup_n\; (U_n - L_n) \quad \text{for every } x_0 \in \mathbb{R}^+,$$
so the supremum is $+\infty$: not informative.
If we replace $\bar X_n$ and $S_n$ by robust $\hat\mu_n$ and $\hat\sigma_n$, the interval $\hat\mu_n \pm 1.96\, \hat\sigma_n/\sqrt{n}$ is informative but not stable.
Let $r_n$ and $l_n$ satisfy
$$P_F\left(-r_n \le \hat\mu_n - \mu \le l_n\right) = 1 - \alpha \quad \Rightarrow \quad (\hat\mu_n - l_n,\; \hat\mu_n + r_n)$$
$$P_F\left(-r_n \le \hat\mu_n - \mu(F) + \mu(F) - \mu \le l_n\right) = P_F\left(\frac{-r_n - b}{v_n} \le \frac{\hat\mu_n - \mu(F)}{v_n} \le \frac{l_n - b}{v_n}\right)$$
where $b = \mu(F) - \mu$.
Assume
$$\sqrt{n}\,(\hat\mu_n - \mu(F)) \xrightarrow{D} N\!\left(0, V^2(F)\right) \quad \text{for all } F \in \mathcal{H}_\epsilon$$
$$|\mu(F) - \mu| < \bar\mu \quad \text{for all } F \in \mathcal{H}_\epsilon$$
(if we knew $b$):
$$\Phi\!\left(\frac{l_n - b}{v_n}\right) + \Phi\!\left(\frac{r_n + b}{v_n}\right) - 1 = 1 - \alpha$$
$$r_n = v_n z_{\alpha/2} - b, \qquad l_n = v_n z_{\alpha/2} + b$$
$$\left(\hat\mu_n - v_n z_{\alpha/2} - b,\; \hat\mu_n + v_n z_{\alpha/2} - b\right)$$
Replacing the unknown $b$ by its bound:
$$\left(\hat\mu_n - v_n z_{\alpha/2} - \bar\mu,\; \hat\mu_n + v_n z_{\alpha/2} + \bar\mu\right)$$
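The widened interval is immediate to compute once $v_n$ and the bias bound $\bar\mu$ are available; the plug-in numbers below are illustrative, not from the slides.

```python
# Sketch: the bias-widened globally robust interval
# (mu_hat - z*v_n - mu_bar, mu_hat + z*v_n + mu_bar).
from statistics import NormalDist

def globally_robust_ci(mu_hat, v_n, mu_bar, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return mu_hat - z * v_n - mu_bar, mu_hat + z * v_n + mu_bar

lo, hi = globally_robust_ci(mu_hat=0.3, v_n=0.1, mu_bar=0.05)
print(round(lo, 3), round(hi, 3))  # → 0.054 0.546
```

Setting `mu_bar=0` recovers the naive robust interval, which is shorter but loses coverage under contamination.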
Alternative: Fraiman et al. (2001)
$$P_F\left(|\hat\mu_n - \mu| \le q_n\right) = 1 - \alpha$$
$$\Phi\!\left(\frac{\hat q_n - b}{v_n}\right) + \Phi\!\left(\frac{\hat q_n + b}{v_n}\right) - 1 = 1 - \alpha$$
$$q_n(b_1) < q_n(b_2) \quad \text{if } b_1 < b_2$$
$$\Phi\!\left(\frac{\bar q_n - \bar\mu}{v_n}\right) + \Phi\!\left(\frac{\bar q_n + \bar\mu}{v_n}\right) - 1 = 1 - \alpha$$
These intervals, $\hat\mu_n \pm \bar q_n$, are shorter: $v_n z_{\alpha/2} + \bar\mu > \bar q_n$.
p-values
$$H_0: \mu \le \mu_0 \quad \text{versus} \quad H_1: \mu > \mu_0$$
Reject $H_0$ if $\hat\mu > \mu_0 + v_n \Phi^{-1}(1 - \alpha) + b(F)$:
$$\sup_{\mu \le \mu_0} P_F\!\left(\hat\mu > \mu_0 + v_n \Phi^{-1}(1 - \alpha) + b(F)\right) = \sup_{\mu \le \mu_0} P_F\!\left(\hat\mu - \mu(F) > \mu_0 - \mu + v_n \Phi^{-1}(1 - \alpha)\right)$$
$$\le P_F\!\left(\hat\mu - \mu(F) > v_n \Phi^{-1}(1 - \alpha)\right) = \alpha, \quad \text{with equality at } \mu = \mu_0.$$
$$\text{p-value} = \inf\left\{ \alpha :\; \hat\mu > \mu_0 + v_n \Phi^{-1}(1 - \alpha) + b(F) \right\} = 1 - \Phi\!\left(\frac{\hat\mu - \mu_0 - b(F)}{v_n}\right) \le 1 - \Phi\!\left(\frac{\hat\mu - \mu_0 - \bar\mu}{v_n}\right)$$
$\bar\mu$ is not the maximum (asymptotic) bias:
$$B_{F_0}(\epsilon) = \sup_{F \in \mathcal{H}_\epsilon(F_0)} |\mu(F) - \mu(F_0)| / \sigma_0$$
$$\bar\mu = k\, \hat\sigma\, B_{F_0}(\epsilon)$$
$$k = \sup_{F \in \mathcal{H}_\epsilon(F_0)} \frac{\sigma(F_0)}{\sigma(F)} = \frac{1}{\sigma_1(\epsilon)}, \qquad \sigma_1(\epsilon) = \inf_{F \in \mathcal{H}_\epsilon(F_0)} \frac{\sigma(F)}{\sigma(F_0)}$$
Finite sample behaviour

Location-scale model:
$$\sum_{i=1}^n \Psi^D_{1.345}\!\left(\frac{Y_i - \hat\mu}{\hat\sigma}\right) = 0$$
$\Psi^D_c$ is length-optimal (Fraiman et al., 2001); $\hat\sigma$ is an S-scale estimator.
            Globally Robust              Naive
  ε     n   x_0 = 1.5    x_0 = 4.0    x_0 = 1.5    x_0 = 4.0
 0.00   20  0.94 (0.88)  0.94 (0.88)  0.94 (0.88)  0.94 (0.88)
        40  0.94 (0.64)  0.94 (0.64)  0.94 (0.64)  0.94 (0.64)
       100  0.94 (0.41)  0.94 (0.41)  0.94 (0.41)  0.94 (0.41)
       500  0.93 (0.18)  0.93 (0.18)  0.93 (0.18)  0.93 (0.18)
 0.01   20  0.93 (0.89)  0.94 (0.88)  0.94 (0.89)  0.94 (0.89)
        40  0.95 (0.64)  0.95 (0.64)  0.95 (0.64)  0.95 (0.63)
       100  0.95 (0.41)  0.95 (0.41)  0.95 (0.41)  0.95 (0.41)
       500  0.94 (0.19)  0.95 (0.19)  0.94 (0.18)  0.94 (0.18)
 0.05   20  0.94 (0.95)  0.93 (0.94)  0.93 (0.91)  0.93 (0.91)
        40  0.95 (0.74)  0.95 (0.73)  0.93 (0.67)  0.93 (0.67)
       100  0.95 (0.54)  0.95 (0.54)  0.88 (0.43)  0.88 (0.44)
       500  0.96 (0.35)  0.96 (0.35)  0.62 (0.20)  0.61 (0.20)
 0.10   20  0.95 (1.15)  0.95 (1.21)  0.91 (0.96)  0.92 (1.01)
        40  0.97 (0.98)  0.97 (1.03)  0.86 (0.70)  0.88 (0.75)
       100  0.98 (0.81)  0.98 (0.86)  0.65 (0.44)  0.69 (0.49)
       500  1.00 (0.63)  1.00 (0.66)  0.05 (0.20)  0.05 (0.22)
Simple linear regression

Asymptotic distribution:
$$\sqrt{n}\,(\hat\beta_n - \beta(F)) \xrightarrow{D} N(0, V(F))$$
Bias bound:
$$\sup_{F \in \mathcal{H}_\epsilon} |\beta(F) - \beta| = \frac{\sigma_e}{\sigma_x}\, B_1(\epsilon)$$
- Computable and small maximum bias
- Normal distribution over the whole contamination neighbourhood
Generalized Median of Slopes
(Brown and Mood, 1951; Adrover, S-B, Zamar, 2004)
$$\sum_{i=1}^n \mathrm{sign}\!\left(y_i - \hat\alpha - \hat\beta\,(x_i - \hat\mu_x)\right) \mathrm{sign}\!\left(x_i - \hat\mu_x\right) = 0$$
$$\sum_{i=1}^n \mathrm{sign}\!\left(y_i - \hat\alpha - \hat\beta\,(x_i - \hat\mu_x)\right) = 0$$
where $\hat\mu_x = \mathrm{median}(x_1, \ldots, x_n)$.
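One way to solve the two sign equations: for each candidate slope take the intercept as the residual median (which enforces the second equation), then bisect on the first. This is a sketch under the assumption that the score is monotone in the slope, not the authors' algorithm.

```python
# Sketch of a Brown-Mood style generalized-median-of-slopes fit.
import statistics

def sign(t):
    return (t > 0) - (t < 0)

def gms(x, y, lo=-10.0, hi=10.0, tol=1e-9):
    mx = statistics.median(x)
    def score(b):
        # intercept = median residual enforces the second sign equation
        a = statistics.median(yi - b * (xi - mx) for xi, yi in zip(x, y))
        return sum(sign(yi - a - b * (xi - mx)) * sign(xi - mx)
                   for xi, yi in zip(x, y))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if score(mid) > 0:   # slope too small: residuals still trend with x
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2
    a = statistics.median(yi - b * (xi - mx) for xi, yi in zip(x, y))
    return a, b

x = [1, 2, 3, 4, 5, 6, 7]
y = [2 * xi + 1 for xi in x]
a, b = gms(x, y)
print(round(b, 3))  # → 2.0
```

On exact data y = 2x + 1 the recovered slope is 2 and the intercept (in the centered parametrization y = a + b(x - median(x))) is 9.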
    ε       MS      GMS    B_1(ε)
  0.010   0.014   0.016    0.016
  0.025   0.039   0.041    0.044
  0.050   0.081   0.088    0.100
  0.100   0.174   0.201    0.261
  0.150   0.282   0.361    0.547
  0.200   0.411   0.639    1.160
  0.240   0.538   1.299    2.787
  0.250   0.574

MS = median(y_i/x_i). Minimax (asymptotic) bias for $Y = \beta X + \epsilon$.
  ε     n     Mild          Medium        Strong
  5%    20    0.99 (2.01)   0.99 (2.01)   0.99 (2.01)
        40    0.97 (1.19)   0.97 (1.19)   0.97 (1.27)
        60    0.96 (0.97)   0.96 (0.98)   0.97 (1.03)
        80    0.94 (0.85)   0.94 (0.85)   0.97 (0.89)
       100    0.94 (0.76)   0.95 (0.77)   0.97 (0.81)
       200    0.95 (0.61)   0.96 (0.61)   0.97 (0.62)
  10%   20    0.99 (2.00)   0.99 (2.02)   0.99 (2.43)
        40    0.98 (1.34)   0.98 (1.36)   0.99 (1.69)
        60    0.95 (1.17)   0.94 (1.19)   0.99 (1.44)
        80    0.93 (1.08)   0.92 (1.11)   0.99 (1.30)
       100    0.93 (1.01)   0.92 (1.05)   0.99 (1.20)
       200    0.93 (0.87)   0.93 (0.92)   0.99 (1.00)

Bootstrap standard deviations. Mild: (3, 1.5); Medium: (5, 2.5); Strong: (5, 15). β = 0.
Example: Motorola returns

Motorola shares versus 30-day US Treasury bills, January 1978 to December 1987.

Model: Motorola_i = α + β Market_i + ε_i

Point estimates:
  Estimator    β̂
  GMS        1.25
  MM         1.34
  LS         0.85
[Figure: Motorola return against market return, with the MM, GMS and LS fitted lines.]
Hypothesis of interest
$$H_0: \beta > 1 \quad \text{versus} \quad H_a: \beta \le 1$$
$\epsilon = 1/120 \approx 0.01$, $\hat\sigma_e/\hat\sigma_x = 1.41$
Bias bound: $\bar\beta = 0.0225$
$$\text{p-value} \approx 1 - \Phi\!\left((0.25 - 0.0225)/0.169\right) = 0.089$$
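The final number can be checked directly from the standard normal cdf, using the plug-in values on the slide:

```python
# Check of the p-value arithmetic: 1 - Phi((0.25 - 0.0225)/0.169).
from statistics import NormalDist

p = 1 - NormalDist().cdf((0.25 - 0.0225) / 0.169)
print(round(p, 3))  # → 0.089
```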
  ε     n     Mild          Medium        Strong
  5%    20    0.90 (1.28)   0.90 (1.30)   0.89 (1.33)
        40    0.92 (0.98)   0.92 (1.00)   0.92 (1.02)
        60    0.91 (0.84)   0.91 (0.85)   0.93 (0.87)
        80    0.91 (0.75)   0.92 (0.77)   0.94 (0.78)
       100    0.90 (0.69)   0.92 (0.71)   0.94 (0.72)
       200    0.93 (0.55)   0.94 (0.56)   0.95 (0.57)
  10%   20    0.90 (1.42)   0.91 (1.46)   0.92 (1.64)
        40    0.91 (1.13)   0.91 (1.18)   0.94 (1.33)
        60    0.91 (1.01)   0.90 (1.09)   0.95 (1.20)
        80    0.89 (0.95)   0.89 (1.01)   0.96 (1.10)
       100    0.89 (0.89)   0.88 (0.96)   0.97 (1.05)
       200    0.89 (0.76)   0.90 (0.84)   0.98 (0.90)

Empirical approximation to the asymptotic variance. Mild: (3, 1.5); Medium: (5, 2.5); Strong: (5, 15). β = 0.
Difficulty of estimating the asymptotic variance
- Need an asymptotic normal distribution for all $F \in \mathcal{H}_\epsilon$
- Need bias bounds; error scale estimation; the correction for scale estimation yields even longer CIs
- Can we avoid using bias bounds?
- Can we avoid requiring a global asymptotic distribution?
- Can we avoid estimating the SD of the estimator?