Bootstrap Methods and their Application
1 Bootstrap Methods and their Application Anthony Davison © 2006 A short course based on the book Bootstrap Methods and their Application, by A. C. Davison and D. V. Hinkley, © Cambridge University Press, 1997 Anthony Davison: Bootstrap Methods and their Application, 1
2 Summary Bootstrap: simulation methods for frequentist inference. Useful when standard assumptions are invalid (n small, data not normal, ...); when a standard problem has a non-standard twist; when a complex problem has no (reliable) theory; or (almost) anywhere else. Aim: to describe basic ideas; confidence intervals; tests; some approaches for regression
3 Motivation Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
4 Motivation AIDS data UK AIDS diagnoses. Reporting delay up to several years! Problem: predict the state of the epidemic at end 1992, with a realistic statement of uncertainty. Simple model: number of reports in row j and column k is Poisson, with mean µ_jk = exp(α_j + β_k). Unreported diagnoses in period j are Poisson, with mean Σ_{k unobs} µ_jk = exp(α_j) Σ_{k unobs} exp(β_k). Estimate the total unreported diagnoses from period j by replacing α_j and β_k by their MLEs. How reliable are these predictions? How sensible is the Poisson model?
5 Motivation Table: AIDS diagnoses by diagnosis period (year and quarter) and reporting-delay interval (quarters), with total reports to the end of each period. (The table entries were lost in transcription.)
6 Motivation AIDS data Figure: Diagnoses against time; data (+), fits of simple model (solid), complex model (dots). Variance formulae could be derived (painful!), but would they be useful? Effects of overdispersion, complex model, ...?
7 Motivation Goal Find a reliable assessment of uncertainty when: the estimator is complex; the data are complex; the sample size is small; the model is non-standard
8 Basic notions Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
9 Basic notions Handedness data Table: Data from a study of handedness; hand is an integer measure of handedness, and dnan a genetic measure. Data due to Dr Gordon Claridge, University of Oxford. (The dnan and hand values were lost in transcription.)
10 Basic notions Handedness data Figure: Scatter plot of handedness data (hand against dnan). The numbers show the multiplicities of the observations.
11 Basic notions Handedness data Is there dependence between dnan and hand for these n = 37 individuals? The sample product-moment correlation coefficient is θ̂ = 0.509. A standard confidence interval (based on a bivariate normal population) gives 95% CI (0.221, 0.715). Data not bivariate normal! What is the status of the interval? Can we do better?
12 Basic notions Frequentist inference Estimator θ̂ for unknown parameter θ. Statistical model: data y_1, ..., y_n iid ~ F, unknown. Handedness data: y = (dnan, hand); F puts probability mass on a subset of R²; θ is the correlation coefficient. Key issue: what is the variability of θ̂ when samples are repeatedly taken from F? Imagine F known: could answer the question by analytical (mathematical) calculation, or by simulation.
13 Basic notions Simulation with F known For r = 1, ..., R: generate random sample y*_1, ..., y*_n iid ~ F; compute θ̂*_r using y*_1, ..., y*_n. Output after R iterations: θ̂*_1, θ̂*_2, ..., θ̂*_R. Use θ̂*_1, ..., θ̂*_R to estimate the sampling distribution of θ̂ (histogram, density estimate, ...). If R → ∞, get a perfect match to the theoretical calculation (if available): Monte Carlo error disappears completely. In practice R is finite, so some error remains.
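A minimal sketch of this algorithm in Python; the choices of F (a known Exp(1) distribution) and of θ̂ (the sample median) are illustrative and not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical known model F: Exponential with mean 1; estimator: sample median.
n, R = 50, 2000

# For r = 1, ..., R: draw y*_1, ..., y*_n from F and compute theta*_r.
theta_r = np.array([np.median(rng.exponential(1.0, size=n)) for _ in range(R)])

# The replicates estimate the sampling distribution of the median;
# its centre should be near the true median of Exp(1), log 2 ~ 0.693.
approx_mean = theta_r.mean()
approx_sd = theta_r.std(ddof=1)
```

With R large the histogram of `theta_r` approaches the true sampling distribution, up to Monte Carlo error.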
14 Basic notions Handedness data: Fitted bivariate normal model Figure: Contours of the bivariate normal distribution fitted to the handedness data (hand against dnan); parameter estimates are µ̂_1 = 28.5, µ̂_2 = 1.7, σ̂_1 = 5.4, σ̂_2 = 1.5, ρ̂ = 0.509. The data are also shown.
15 Basic notions Handedness data: Parametric bootstrap samples Figure: Left: original data (correlation 0.509), with jittered vertical values. Centre and right: two samples generated from the fitted bivariate normal distribution; one has correlation 0.753 (the other correlation value was lost in transcription).
16 Basic notions Handedness data: Correlation coefficient Figure: Bootstrap distributions (probability density against correlation coefficient) with R = 10,000. Left: simulation from the fitted bivariate normal distribution. Right: simulation from the data by bootstrap resampling. The lines show the theoretical probability density function of the correlation coefficient under sampling from a fitted bivariate normal distribution.
17 Basic notions F unknown Replace the unknown F by an estimate F̂ obtained either parametrically, e.g. by maximum likelihood or a robust fit of a distribution F(y) = F(y; ψ) (normal, exponential, bivariate normal, ...), or nonparametrically, using the empirical distribution function (EDF) of the original data y_1, ..., y_n, which puts mass 1/n on each of the y_j. Algorithm: For r = 1, ..., R: generate random sample y*_1, ..., y*_n iid ~ F̂; compute θ̂*_r using y*_1, ..., y*_n. Output after R iterations: θ̂*_1, θ̂*_2, ..., θ̂*_R.
18 Basic notions Nonparametric bootstrap Bootstrap (re)sample y*_1, ..., y*_n iid ~ F̂, where F̂ is the EDF of y_1, ..., y_n. Repetitions will occur! Compute the bootstrap θ̂* using y*_1, ..., y*_n. For the handedness data: take n = 37 pairs y = (dnan, hand), sampled with equal probabilities 1/37 and with replacement from the original pairs (dnan, hand). Repeat this R times, to get θ̂*_1, ..., θ̂*_R. See picture. Results quite different from parametric simulation: why?
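In code, the nonparametric bootstrap of a correlation coefficient looks like this; the (u, x) pairs below are synthetic stand-ins for the 37 (dnan, hand) pairs, which are not reproduced in the slides:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 37 correlated pairs (not the real dnan/hand values).
n = 37
u = rng.normal(28.5, 5.4, size=n)
x = 0.1 * u + rng.normal(0.0, 1.0, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

theta_hat = corr(u, x)

# Resample the n pairs with replacement, R times, recomputing the statistic.
R = 2000
theta_star = np.empty(R)
for r in range(R):
    idx = rng.integers(0, n, size=n)   # indices drawn with replacement
    theta_star[r] = corr(u[idx], x[idx])
```

Resampling whole pairs keeps the dependence between the two variables intact; only the EDF replaces the unknown F.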
19 Basic notions Handedness data: Bootstrap samples Figure: Left: original data (correlation 0.509), with jittered vertical values. Centre and right: two bootstrap samples, with jittered vertical values; one has correlation 0.733 (the other correlation value was lost in transcription).
20 Basic notions Using the θ̂* Bootstrap replicates θ̂*_r are used to estimate properties of θ̂. Write θ = θ(F) to emphasize the dependence on F. The bias of θ̂ as an estimator of θ is β(F) = E(θ̂ | y_1, ..., y_n iid ~ F) − θ(F), estimated by replacing the unknown F by the known estimate F̂: β(F̂) = E(θ̂* | y*_1, ..., y*_n iid ~ F̂) − θ(F̂). Replace the theoretical expectation E(·) by an empirical average: b = θ̄* − θ̂ = R^{-1} Σ_{r=1}^R θ̂*_r − θ̂
21 Basic notions Estimate the variance ν(F) = var(θ̂ | F) by v = (R − 1)^{-1} Σ_{r=1}^R (θ̂*_r − θ̄*)². Estimate quantiles of θ̂ by taking empirical quantiles of θ̂*_1, ..., θ̂*_R. For the handedness data, the 10,000 replicates shown earlier give b = −0.046, v = 0.043.
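The bias, variance and quantile estimates above come directly from the replicates; a sketch using a toy sample with the sample mean as θ̂ (illustrative choices, not the handedness example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and estimator (sample mean), purely for illustration.
y = rng.exponential(1.0, size=30)
theta_hat = y.mean()

R = 5000
theta_star = np.array([rng.choice(y, size=y.size, replace=True).mean()
                       for _ in range(R)])

b = theta_star.mean() - theta_hat          # bootstrap bias estimate
v = theta_star.var(ddof=1)                 # bootstrap variance estimate
q_lo, q_hi = np.quantile(theta_star, [0.025, 0.975])   # empirical quantiles
```

For the sample mean the bootstrap bias is zero in expectation, so `b` here only reflects Monte Carlo noise; for nonlinear statistics it need not vanish.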
22 Basic notions Handedness data Figure: Summaries of the θ̂*. Left: histogram, with a vertical line showing θ̂. Right: normal Q-Q plot of the θ̂*.
23 Basic notions How many bootstraps? Must estimate moments and quantiles of θ̂ and derived quantities. Nowadays often feasible to take R ≥ 5000. Need R ≥ 100 to estimate bias, variance, etc. Need R ≥ 100, prefer R ≥ 1000, to estimate the quantiles needed for 95% confidence intervals. (Figure: normal Q-Q plots of z* against theoretical quantiles, for R = 199 and R = 999.)
24 Basic notions Key points The estimator is an algorithm: applied to the original data y_1, ..., y_n it gives the original θ̂; applied to simulated data y*_1, ..., y*_n it gives θ̂*. θ̂ can be of (almost) any complexity; for the more sophisticated ideas (later) to work, θ̂ must often be a smooth function of the data. The sample is used to estimate F: F̂ ≈ F is a heroic assumption. Simulation replaces theoretical calculation: it removes the need for mathematical skill, but does not remove the need for thought; check code very carefully: garbage in, garbage out! Two sources of error: statistical (F̂ ≠ F), reduced by thought; simulation (R < ∞), reduced by taking R large (enough)
25 Confidence intervals Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
26 Confidence intervals Normal confidence intervals If θ̂ is approximately normal, then θ̂ ~ N(θ + β, ν), where θ̂ has bias β = β(F) and variance ν = ν(F). If β, ν were known, a (1 − 2α) confidence interval for θ would be (D1) θ̂ − β ± z_α ν^{1/2}, where Φ(z_α) = α. Replace β, ν by estimates: β(F̂) = b = θ̄* − θ̂, ν(F̂) = v = (R − 1)^{-1} Σ_r (θ̂*_r − θ̄*)², giving the (1 − 2α) interval θ̂ − b ± z_α v^{1/2}. Handedness data: R = 10,000, b = −0.046, v = 0.043, α = 0.025, |z_α| = 1.96, so the 95% CI is (0.147, 0.963)
27 Confidence intervals Normal confidence intervals Is the normal approximation reliable? Is a transformation needed? Here are plots for ψ = (1/2) log{(1 + θ)/(1 − θ)}: (Figure: density of the transformed correlation coefficient, and normal Q-Q plot of the transformed correlation coefficient.)
28 Confidence intervals Normal confidence intervals Correlation coefficient: try Fisher's z transformation, ψ = ψ(θ) = (1/2) log{(1 + θ)/(1 − θ)}, for which compute b_ψ = R^{-1} Σ_{r=1}^R ψ̂*_r − ψ̂ and v_ψ = (R − 1)^{-1} Σ_{r=1}^R (ψ̂*_r − ψ̄*)². The (1 − 2α) confidence interval for θ is ( ψ^{-1}{ψ̂ − b_ψ − z_{1−α} v_ψ^{1/2}}, ψ^{-1}{ψ̂ − b_ψ − z_α v_ψ^{1/2}} ). For the handedness data, get (0.074, 0.804). But how do we choose a transformation in general?
29 Confidence intervals Pivots Hope that properties of θ̂*_1, ..., θ̂*_R mimic the effect of sampling from the original model. This amounts to faith in the substitution principle: we may replace the unknown F with the known F̂; false in general, but often more nearly true for pivots. A pivot is a combination of data and parameter whose distribution is independent of the underlying model. Canonical example: Y_1, ..., Y_n iid ~ N(µ, σ²). Then Z = (Ȳ − µ)/(S²/n)^{1/2} ~ t_{n−1}, for all µ, σ², so independent of the underlying distribution, provided this is normal. An exact pivot is generally unavailable in the nonparametric case.
30 Confidence intervals Studentized statistic Idea: generalize the Student t statistic to the bootstrap setting. Requires a variance V for θ̂ computed from y_1, ..., y_n. Analogue of the Student t statistic: Z = (θ̂ − θ)/V^{1/2}. If the quantiles z_α of Z were known, then Pr(z_α ≤ Z ≤ z_{1−α}) = Pr{z_α ≤ (θ̂ − θ)/V^{1/2} ≤ z_{1−α}} = 1 − 2α (z_α no longer denotes a normal quantile!) implies that Pr(θ̂ − V^{1/2} z_{1−α} ≤ θ ≤ θ̂ − V^{1/2} z_α) = 1 − 2α, so a (1 − 2α) confidence interval is (θ̂ − V^{1/2} z_{1−α}, θ̂ − V^{1/2} z_α)
31 Confidence intervals A bootstrap sample gives (θ̂*, V*), and hence R bootstrap copies of (θ̂, V): (θ̂*_1, V*_1), (θ̂*_2, V*_2), ..., (θ̂*_R, V*_R), and corresponding z*_1 = (θ̂*_1 − θ̂)/V*_1^{1/2}, z*_2 = (θ̂*_2 − θ̂)/V*_2^{1/2}, ..., z*_R = (θ̂*_R − θ̂)/V*_R^{1/2}. Use z*_1, ..., z*_R to estimate the distribution of Z; for example, the order statistics z*_(1) < ... < z*_(R) are used to estimate quantiles. Get the (1 − 2α) confidence interval (θ̂ − V^{1/2} z*_((1−α)(R+1)), θ̂ − V^{1/2} z*_(α(R+1)))
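A sketch of the studentized interval for a sample mean, where V = s²/n is available in closed form (toy data, not the handedness example):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy skewed data; theta = population mean, V = s^2/n.
y = rng.gamma(2.0, size=40)
n = y.size
theta_hat = y.mean()
V = y.var(ddof=1) / n

R, alpha = 999, 0.025
z_star = np.empty(R)
for r in range(R):
    ys = rng.choice(y, size=n, replace=True)
    Vs = ys.var(ddof=1) / n                       # variance recomputed per resample
    z_star[r] = (ys.mean() - theta_hat) / np.sqrt(Vs)

z_sorted = np.sort(z_star)
# (theta - V^{1/2} z*_((1-alpha)(R+1)), theta - V^{1/2} z*_(alpha(R+1)))
k_hi = int((1 - alpha) * (R + 1)) - 1             # 0-based order-statistic index
k_lo = int(alpha * (R + 1)) - 1
lower = theta_hat - np.sqrt(V) * z_sorted[k_hi]
upper = theta_hat - np.sqrt(V) * z_sorted[k_lo]
```

Note the upper quantile of z* gives the lower endpoint and vice versa, exactly as in the pivotal inversion above.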
32 Confidence intervals Why Studentize? Studentize, so Z → N(0, 1) in distribution as n → ∞. Edgeworth series: Pr(Z ≤ z | F) = Φ(z) + n^{-1/2} a(z)φ(z) + O(n^{-1}); a(·) is an even quadratic polynomial. The corresponding expansion for Z* is Pr(Z* ≤ z | F̂) = Φ(z) + n^{-1/2} â(z)φ(z) + O_p(n^{-1}). Typically â(z) = a(z) + O_p(n^{-1/2}), so Pr(Z* ≤ z | F̂) − Pr(Z ≤ z | F) = O_p(n^{-1})
33 Confidence intervals If we don't studentize, Z = θ̂ − θ ~ N(0, ν) approximately. Then Pr(Z ≤ z | F) = Φ(z/ν^{1/2}) + n^{-1/2} a(z/ν^{1/2}) φ(z/ν^{1/2}) + O(n^{-1}) and Pr(Z* ≤ z | F̂) = Φ(z/ν̂^{1/2}) + n^{-1/2} â(z/ν̂^{1/2}) φ(z/ν̂^{1/2}) + O_p(n^{-1}). Typically ν̂ = ν + O_p(n^{-1/2}), giving Pr(Z* ≤ z | F̂) − Pr(Z ≤ z | F) = O_p(n^{-1/2}). Thus use of the Studentized Z reduces the error from O_p(n^{-1/2}) to O_p(n^{-1}): better than using large-sample asymptotics, for which the error is usually O_p(n^{-1/2})
34 Confidence intervals Other confidence intervals Problem for studentized intervals: must obtain V, and the intervals are not scale-invariant. Simpler approaches: Basic bootstrap interval: treat θ̂ − θ as a pivot, and get (θ̂ − (θ̂*_((R+1)(1−α)) − θ̂), θ̂ − (θ̂*_((R+1)α) − θ̂)). Percentile interval: use empirical quantiles of θ̂*_1, ..., θ̂*_R: (θ̂*_((R+1)α), θ̂*_((R+1)(1−α))). Improved percentile intervals (BC_a, ABC, ...): replace the percentile interval with (θ̂*_((R+1)α̃), θ̂*_((R+1)(1−α̃′))), where α̃, α̃′ are chosen to improve its properties; scale-invariant
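The basic and percentile intervals need only the ordered replicates; a sketch on a toy sample mean (illustrative, not the handedness data):

```python
import numpy as np

rng = np.random.default_rng(3)

y = rng.normal(5.0, 2.0, size=50)
theta_hat = y.mean()

R, alpha = 999, 0.025
theta_star = np.sort([rng.choice(y, size=y.size, replace=True).mean()
                      for _ in range(R)])

k_lo = int((R + 1) * alpha) - 1            # 25th order statistic (0-based)
k_hi = int((R + 1) * (1 - alpha)) - 1      # 975th order statistic

# Percentile interval: empirical quantiles of the replicates
percentile = (theta_star[k_lo], theta_star[k_hi])

# Basic interval: treat theta* - theta-hat as a pivot
basic = (2 * theta_hat - theta_star[k_hi], 2 * theta_hat - theta_star[k_lo])
```

For a symmetric bootstrap distribution the two intervals nearly coincide; they differ when the replicates are skewed.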
35 Confidence intervals Handedness data 95% confidence intervals for the correlation coefficient θ, R = 10,000: Normal; Percentile; Basic; BC_a (α̃ = 0.0485, α̃′ = 0.0085); Student; Basic (transformed); Student (transformed). (The interval endpoints were lost in transcription.) Transformation is essential here!
36 Confidence intervals General comparison Bootstrap confidence intervals are usually too short, which leads to under-coverage. Normal and basic intervals depend on the scale. The percentile interval is often too short, but is scale-invariant. Studentized intervals give the best coverage overall, but depend on the scale and can be sensitive to V; their length can be very variable; best on a transformed scale, where V is approximately constant. Improved percentile intervals have in principle the same error as studentized intervals, but are often shorter, so have lower coverage
37 Confidence intervals Caution Edgeworth theory is OK for smooth statistics; beware rough statistics: must check output. The bootstrap of the median is theoretically OK, but very sensitive to sample values in practice. Role for smoothing? (Figure: normal Q-Q plot of T* − t for medians.)
38 Confidence intervals Key points Numerous procedures are available for automatic construction of confidence intervals. The computer does the work. Need R ≥ 1000 in most cases. Generally such intervals are a bit too short. Must examine output to check whether assumptions (e.g. smoothness of the statistic) are satisfied. May need a variance estimate V: see later
39 Several samples Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
40 Several samples Gravity data Table: Measurements of the acceleration due to gravity, g, given as deviations from 980 cm/s², in units of 10⁻³ cm/s², for eight series. (The measurements were lost in transcription.)
41 Several samples Gravity data Figure: Gravity series boxplots (g against series), showing a reduction in variance, a shift in location, and possible outliers.
42 Several samples Gravity data Eight series of measurements of the gravitational acceleration g, made May 1934 to July 1935 in Washington DC. Data are deviations from 980 cm/s² in units of 10⁻³ cm/s². Goal: estimate g and provide a confidence interval. Weighted combination of series averages and its variance estimate: θ̂ = Σ_{i=1}^8 (ȳ_i n_i/s_i²) / Σ_{i=1}^8 (n_i/s_i²), V = (Σ_{i=1}^8 n_i/s_i²)^{-1}, giving θ̂ = 78.54, V (value lost in transcription), and a 95% confidence interval of θ̂ ± 1.96 V^{1/2} = (77.5, 79.8)
43 Several samples Gravity data: Bootstrap Apply stratified (re)sampling to the series, taking each series as a separate stratum. Compute θ̂*, V* for the simulated data. Confidence interval based on Z = (θ̂ − θ)/V^{1/2}, whose distribution is approximated by the simulations z*_1 = (θ̂*_1 − θ̂)/V*_1^{1/2}, ..., z*_R = (θ̂*_R − θ̂)/V*_R^{1/2}, giving (θ̂ − V^{1/2} z*_((R+1)(1−α)), θ̂ − V^{1/2} z*_((R+1)α)). For 95% limits set α = 0.025, so with R = 999 use z*_(25), z*_(975), giving the interval (77.1, 80.3)
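A sketch of stratified resampling with a studentized pivot; the four synthetic series below (sizes, means, spreads all made up) stand in for the eight gravity series, whose values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-ins for the gravity series.
series = [rng.normal(mu, sd, size=n)
          for mu, sd, n in [(78.0, 3.0, 10), (79.0, 2.0, 12),
                            (78.5, 1.0, 15), (78.6, 0.8, 13)]]

def weighted_est(ss):
    # Weighted combination of series averages, and its variance estimate
    wts = [s.size / s.var(ddof=1) for s in ss]
    theta = sum(w * s.mean() for w, s in zip(wts, ss)) / sum(wts)
    return theta, 1.0 / sum(wts)

theta_hat, V = weighted_est(series)

# Stratified resampling: resample within each series independently, recombine.
R = 999
z_star = np.empty(R)
for r in range(R):
    boot = [rng.choice(s, size=s.size, replace=True) for s in series]
    t, v = weighted_est(boot)
    z_star[r] = (t - theta_hat) / np.sqrt(v)
```

The ordered `z_star` values then feed the studentized interval exactly as on the previous slide.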
44 Several samples Figure: Summary plots for 999 nonparametric bootstrap simulations. Top: normal probability plots of t* and z* = (t* − t)/v*^{1/2}. The line on the top left has intercept t and slope v^{1/2}; the line on the top right has intercept zero and unit slope. Bottom: the smallest t*_r also has the smallest v*, leading to an outlying value of z*.
45 Several samples Key points For several independent samples, implement the bootstrap by stratified sampling: resample independently from each sample. The same basic ideas apply for confidence intervals
46 Variance estimation Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
47 Variance estimation Variance estimation A variance estimate V is needed for certain types of confidence interval (especially the studentized bootstrap). Ways to compute it: double bootstrap; delta method; nonparametric delta method; jackknife
48 Variance estimation Double bootstrap Bootstrap sample y*_1, ..., y*_n and corresponding estimate θ̂*. Take Q second-level bootstrap samples y**_1, ..., y**_n from y*_1, ..., y*_n, giving corresponding bootstrap estimates θ̂**_1, ..., θ̂**_Q. Compute the variance estimate V* as the sample variance of θ̂**_1, ..., θ̂**_Q. Requires a total of R(Q + 1) resamples, so could be expensive. Often reasonable to take Q = 50 for variance estimation, so need O(RQ) resamples: nowadays not infeasible
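A sketch of the double bootstrap for the variance of a sample mean (toy data; modest R and Q chosen to keep the R(Q + 1) cost visible):

```python
import numpy as np

rng = np.random.default_rng(5)

y = rng.exponential(1.0, size=25)
n = y.size

R, Q = 200, 50
theta_star = np.empty(R)
V_star = np.empty(R)
for r in range(R):
    ys = rng.choice(y, size=n, replace=True)        # first-level resample
    theta_star[r] = ys.mean()
    # Q second-level resamples from ys give a variance estimate for theta*_r
    inner = np.array([rng.choice(ys, size=n, replace=True).mean()
                      for _ in range(Q)])
    V_star[r] = inner.var(ddof=1)
```

Each first-level pair (theta_star[r], V_star[r]) is what the studentized interval of the earlier slides consumes.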
49 Variance estimation Delta method Computation of variance formulae for functions of averages and other estimators. Suppose ψ̂ = g(θ̂) estimates ψ = g(θ), and θ̂ ~ N(θ, σ²/n) approximately. Then, provided g′(θ) ≠ 0, we have (D2) E(ψ̂) = g(θ) + O(n^{-1}), var(ψ̂) = σ² g′(θ)²/n + O(n^{-3/2}). Then var(ψ̂) ≈ σ̂² g′(θ̂)²/n = V. Example (D3): θ̂ = Ȳ, ψ̂ = log θ̂. Variance stabilisation (D4): if var(θ̂) = S(θ)²/n, find a transformation h such that var{h(θ̂)} = constant. Extends to multivariate estimators, and to ψ̂ = g(θ̂_1, ..., θ̂_d)
51 Variance estimation Nonparametric delta method Write the parameter θ = t(F) as a functional of the distribution F. General approximation: V = V_L = n^{-2} Σ_{j=1}^n L(Y_j; F)². L(y; F) is the influence function value for θ for an observation at y when the distribution is F: L(y; F) = lim_{ε→0} [t{(1 − ε)F + εH_y} − t(F)]/ε, where H_y puts unit mass at y. Close link to robustness. Empirical versions of L(y; F) and V_L are l_j = L(y_j; F̂) and v_L = n^{-2} Σ l_j², usually obtained by analytic/numerical differentiation
52 Variance estimation Computation of l_j Write θ̂ in weighted form, and differentiate with respect to ε. Sample average: θ̂ = ȳ = n^{-1} Σ y_j = Σ w_j y_j, with w_j ≡ 1/n. Change the weights: w_j = ε + (1 − ε)/n, and w_i = (1 − ε)/n for i ≠ j, so that (D5) ȳ_ε = εy_j + (1 − ε)ȳ = ε(y_j − ȳ) + ȳ, giving l_j = y_j − ȳ and v_L = n^{-2} Σ (y_j − ȳ)² = {(n − 1)/n} s²/n. Interpretation: l_j is the standardized change in ȳ when we increase the mass on y_j
53 Variance estimation Nonparametric delta method: Ratio Population F(u, x) with y = (u, x) and θ = t(F) = ∫x dF(u, x) / ∫u dF(u, x); the sample version is θ̂ = t(F̂) = ∫x dF̂(u, x) / ∫u dF̂(u, x) = x̄/ū. Then, using the chain rule of differentiation (D6), l_j = (x_j − θ̂u_j)/ū, giving v_L = n^{-2} Σ {(x_j − θ̂u_j)/ū}²
54 Variance estimation Handedness data: Correlation coefficient The correlation coefficient may be written as a function of averages: θ̂ = {avg(xu) − avg(x)avg(u)} / [{avg(x²) − avg(x)²}{avg(u²) − avg(u)²}]^{1/2}, where avg(xu) = n^{-1} Σ x_j u_j, etc., from which the empirical influence values l_j can be derived. In this example (and for others involving only averages), the nonparametric delta method is equivalent to the delta method. Get v_L = 0.029, for comparison with v = 0.043 obtained by bootstrapping. v_L typically underestimates var(θ̂), as here!
55 Variance estimation Delta methods: Comments Can be applied to many complex statistics. Delta method variances often underestimate the true variances: v_L < var(θ̂). Can be applied automatically (numerical differentiation) if the algorithm for θ̂ is written in weighted form, e.g. x̄_w = Σ w_j x_j with w_j ≡ 1/n for x̄: vary the weights successively for j = 1, ..., n, perturbing w_j by ε with the weights renormalised so that Σ w_i = 1, for ε = 1/(100n), and use the definition of the derivative
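The weighted-form recipe can be coded directly; a sketch computing numerical influence values for the sample mean, where the exact answer l_j = y_j − ȳ is known and can be checked:

```python
import numpy as np

rng = np.random.default_rng(9)

y = rng.normal(10.0, 2.0, size=30)
n = y.size

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)

eps = 1.0 / (100 * n)
w0 = np.full(n, 1.0 / n)
t0 = weighted_mean(y, w0)

# Perturb the weight on y_j (renormalised so the weights sum to one) and
# apply the definition of the derivative.
l = np.empty(n)
for j in range(n):
    w = (1 - eps) * w0
    w[j] += eps
    l[j] = (weighted_mean(y, w) - t0) / eps

v_L = np.sum(l**2) / n**2      # nonparametric delta method variance estimate
```

Because the mean is linear in the weights, the numerical derivative here is exact; for nonlinear statistics it is an O(ε) approximation.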
56 Variance estimation Jackknife An approximation to the empirical influence values is given by l_j ≈ l_jack,j = (n − 1)(θ̂ − θ̂_{−j}), where θ̂_{−j} is the value of θ̂ computed from the sample y_1, ..., y_{j−1}, y_{j+1}, ..., y_n. The jackknife bias and variance estimates are b_jack = −n^{-1} Σ l_jack,j and v_jack = {n(n − 1)}^{-1} (Σ l²_jack,j − n b²_jack). Requires n + 1 calculations of θ̂. Corresponds to numerical differentiation of θ̂, with ε = −1/(n − 1)
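A jackknife sketch for the sample mean, where l_jack,j reduces exactly to y_j − ȳ and v_jack to s²/n:

```python
import numpy as np

rng = np.random.default_rng(13)

y = rng.normal(0.0, 1.0, size=20)
n = y.size
theta_hat = y.mean()

# Leave-one-out estimates theta_{-j}, one deletion per observation
theta_minus = np.array([np.delete(y, j).mean() for j in range(n)])
l_jack = (n - 1) * (theta_hat - theta_minus)

b_jack = -l_jack.mean()
v_jack = (np.sum(l_jack**2) - n * b_jack**2) / (n * (n - 1))
```

The n + 1 evaluations of θ̂ (full sample plus n deletions) are all the computation required.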
57 Variance estimation Key points Several methods are available for estimation of variances, needed for some types of confidence interval. The most general method is the double bootstrap: can be expensive. Delta methods rely on a linear expansion, and can be applied numerically or analytically. The jackknife gives an approximation to the delta method, but can fail for rough statistics
58 Tests Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
59 Tests Ingredients Ingredients for testing problems: data y_1, ..., y_n; a model M_0 to be tested; a test statistic t = t(y_1, ..., y_n), with large values giving evidence against M_0, and observed value t_obs. The P-value p_obs = Pr(T ≥ t_obs | M_0) measures the evidence against M_0: small p_obs indicates evidence against M_0. Difficulties: p_obs may depend upon nuisance parameters, those of M_0; p_obs is often hard to calculate
60 Tests Examples Balsam-fir seedlings in 5 × 5 quadrats: a Poisson sample? Two-way layout: row-column independence?
61 Tests Estimation of p_obs Estimate p_obs by simulation from the fitted null hypothesis model M̂_0. Algorithm: for r = 1, ..., R: simulate data set y*_1, ..., y*_n from M̂_0; calculate test statistic t*_r from y*_1, ..., y*_n. Calculate the simulation estimate p̂ = (1 + #{t*_r ≥ t_obs}) / (1 + R) of p̂_obs = Pr(T ≥ t_obs | M̂_0). Simulation and statistical errors: p̂ ≈ p̂_obs ≈ p_obs
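A sketch of this Monte Carlo p-value for a Poisson dispersion test (anticipating the fir example later; the counts are simulated stand-ins, not the real seedling data):

```python
import numpy as np

rng = np.random.default_rng(21)

# Simulated stand-in for counts in quadrats; M0: Y_j iid Poisson(lambda).
y = rng.poisson(2.0, size=40)
lam_hat = y.mean()            # fitted null model M0-hat

def dispersion(y):
    return np.sum((y - y.mean())**2) / y.mean()

t_obs = dispersion(y)

# Simulate R data sets from the fitted null model, recomputing T each time.
R = 999
t_star = np.array([dispersion(rng.poisson(lam_hat, size=y.size))
                   for _ in range(R)])

# Simulation estimate of p_obs, with the +1 correction from the slide
p_hat = (1 + np.sum(t_star >= t_obs)) / (1 + R)
```

The +1 in numerator and denominator counts the observed data set itself among the simulations, so p̂ can never be exactly zero.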
62 Tests Handedness data: Test of independence Are dnan and hand positively associated? Take T = θ̂ (correlation coefficient), with t_obs = 0.509; this is large in the case of positive association (one-sided test). Null hypothesis of independence: F(u, x) = F_1(u)F_2(x). Take bootstrap samples independently from F̂_1 (dnan_1, ..., dnan_n) and from F̂_2 (hand_1, ..., hand_n), then put them together to get bootstrap data (dnan*_1, hand*_1), ..., (dnan*_n, hand*_n). With R = 9,999 we get 18 values of θ̂* ≥ θ̂, so p̂ = (1 + 18)/(1 + 9999) = 0.0019: hand and dnan seem to be positively associated. To test positive or negative association (two-sided test), take T = |θ̂|: this gives p̂ = 0.004
63 Tests Handedness data: Bootstrap from M̂_0 (Figure: left, a bootstrap sample of (dnan, hand) generated under independence; right, probability density of the test statistic.)
64 Tests Choice of R Take R big enough to get a small standard error for p̂, typically R ≥ 100, using the binomial calculation: var(p̂) = var[(1 + #{t*_r ≥ t_obs})/(1 + R)] ≈ R^{-2} × R p_obs(1 − p_obs) = p_obs(1 − p_obs)/R, so if p_obs = 0.05 we need R ≥ 1900 for 10% relative error (D7). Can choose R sequentially: e.g. if p̂ = 0.06 and R = 99, can augment R enough to diminish the standard error. Taking R too small lowers the power of the test
65 Tests Duality with confidence interval Often unclear how to impose the null hypothesis on the sampling scheme. General approach based on the duality between the confidence interval I_{1−α} = (θ̂_α, ∞) and a test of the null hypothesis θ = θ_0: reject the null hypothesis at level α in favour of the alternative θ > θ_0 if θ_0 < θ̂_α. Handedness data: θ_0 = 0 ∉ I_0.95, but θ_0 = 0 ∈ I_0.99, so the estimated significance level is 0.01 < p < 0.05: weaker evidence than before. Extends to tests of θ = θ_0 against other alternatives: if θ_0 ∉ I_{1−α} = (−∞, θ̂_{1−α}), we have evidence that θ < θ_0; if θ_0 ∉ I_{1−2α} = (θ̂_α, θ̂_{1−α}), we have evidence that θ ≠ θ_0
66 Tests Pivot tests Equivalent to the use of confidence intervals. Idea: use an (approximate) pivot such as Z = (θ̂ − θ)/V^{1/2} as the statistic to test θ = θ_0. The observed value of the pivot is z_obs = (θ̂ − θ_0)/V^{1/2}. The significance level is Pr{(θ̂ − θ_0)/V^{1/2} ≥ z_obs | M_0} = Pr(Z ≥ z_obs | F) ≈ Pr(Z* ≥ z_obs | F̂). Compare the observed z_obs with the simulated distribution of Z* = (θ̂* − θ̂)/V*^{1/2}, without needing to construct a null hypothesis model M̂_0. Use of an (approximate) pivot is essential for success
67 Tests Example: Handedness data Test zero correlation (θ_0 = 0), not independence; θ̂ = 0.509, V = 0.029: z_obs = (θ̂ − θ_0)/V^{1/2} = 0.509/0.029^{1/2} = 2.99. The observed significance level is p̂ = (1 + #{z*_r ≥ z_obs})/(1 + R). (Figure: probability density of the test statistic; the numerical value of p̂ was lost in transcription.)
68 Tests Exact tests Problem: the bootstrap estimate is p̂_obs = Pr(T ≥ t_obs | M̂_0) ≠ Pr(T ≥ t_obs | M_0) = p_obs, so we estimate the wrong thing. In some cases we can eliminate parameters from the null hypothesis distribution by conditioning on a sufficient statistic, then simulate from the conditional distribution. More generally, can use the Metropolis-Hastings algorithm to simulate from the conditional distribution (below)
69 Tests Example: Fir data Data Y_1, ..., Y_n iid ~ Pois(λ), with λ unknown. The Poisson model has E(Y) = var(Y) = λ: base a test of overdispersion on T = Σ(Y_j − Ȳ)²/Ȳ ~ χ²_{n−1} approximately; the observed value is t_obs = 55.15. Unconditional significance level: Pr(T ≥ t_obs | M_0, λ). Condition on the value w of the sufficient statistic W = Σ Y_j: p_obs = Pr(T ≥ t_obs | M_0, W = w), independent of λ, owing to the sufficiency of W. Exact test: simulate from the multinomial distribution of Y_1, ..., Y_n given W = Σ Y_j = 107
70 Tests Example: Fir data Figure: Simulation results for the dispersion test. Left panel: R = 999 values of the dispersion statistic t* obtained under multinomial sampling; the data value is t_obs = 55.15 and p̂ = 0.25. Right panel: chi-squared plot of the ordered values of t*; the dotted line shows the χ²_49 approximation to the null conditional distribution.
71 Tests Handedness data: Permutation test Are dnan and hand related? Take T = θ̂ (correlation coefficient) again. Impose the null hypothesis of independence, F(u, x) = F_1(u)F_2(x), but condition so that the marginal distributions F̂_1 and F̂_2 are held fixed under the resampling plan: a permutation test. Take resamples of the form (dnan_1, hand_{1*}), ..., (dnan_n, hand_{n*}), where (1*, ..., n*) is a random permutation of (1, ..., n). Doing this with R = 9,999 gives one- and two-sided significance probabilities of 0.002 and 0.003. Typically values of p̂ are very similar to those for the corresponding bootstrap test
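A permutation-test sketch for the correlation coefficient (synthetic pairs again, since the real data are not reproduced): permuting one margin against the other fixes both marginal distributions exactly.

```python
import numpy as np

rng = np.random.default_rng(17)

# Synthetic paired data standing in for (dnan, hand)
n = 37
u = rng.normal(0.0, 1.0, size=n)
x = 0.4 * u + rng.normal(0.0, 1.0, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

t_obs = corr(u, x)

# Permutation resamples: pair each u_j with a randomly permuted x
R = 9999
t_star = np.array([corr(u, rng.permutation(x)) for _ in range(R)])

p_one_sided = (1 + np.sum(t_star >= t_obs)) / (1 + R)
p_two_sided = (1 + np.sum(np.abs(t_star) >= abs(t_obs))) / (1 + R)
```

Unlike the independence bootstrap, no value appears twice within a permuted margin, which is exactly the conditioning described above.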
72 Tests Handedness data: Permutation resample (Figure: left, a permutation resample of (dnan, hand); right, probability density of the test statistic.)
73 Tests Contingency table Are the row and column classifications independent: Pr(row i, column j) = Pr(row i) Pr(column j)? The standard test statistic for independence is T = Σ_{i,j} (y_ij − ŷ_ij)²/ŷ_ij, with ŷ_ij = y_{i·} y_{·j}/y_{··}. Get Pr(χ²_24 ≥ t_obs) = 0.048, but is T ~ χ²_24?
74 Tests Exact tests: Contingency table For an exact test, we need to simulate the distribution of T conditional on the sufficient statistics: the row and column totals. Algorithm (D8) for conditional simulation: 1. choose two rows j_1 < j_2 and two columns k_1 < k_2 at random; 2. generate new values from the hypergeometric distribution of y_{j1 k1} conditional on the margins of the 2 × 2 table with entries y_{j1 k1}, y_{j1 k2}, y_{j2 k1}, y_{j2 k2}; 3. compute the test statistic T* every I = 100 iterations, say. Compare the observed value t_obs = 38.53 with the simulated T*: get p̂ = 0.08
75 Tests Key points Tests can be performed using resampling/simulation. Must take account of the null hypothesis, either by modifying the sampling scheme to satisfy the null hypothesis, or by inverting a confidence interval (pivot test). Can use Monte Carlo simulation to get approximations to exact tests: simulate from the null distribution of the data, conditional on the observed value of the sufficient statistic. Sometimes obtain permutation tests: very similar to bootstrap tests
76 Regression Motivation Basic notions Confidence intervals Several samples Variance estimation Tests Regression
77 Regression Linear regression Independent data (x_1, y_1), ..., (x_n, y_n) with y_j = x_j^T β + ε_j, ε_j ~ (0, σ²). Least squares estimates β̂, leverages h_j, residuals e_j = (y_j − x_j^T β̂)/(1 − h_j)^{1/2} ~ (0, σ²). The design matrix X is experimentally ancillary: it should be held fixed if possible, since if the model y = Xβ + ε is correct, var(β̂) = σ²(X^TX)^{-1}
78 Regression Linear regression: Resampling schemes Two main resampling schemes. Model-based resampling: y*_j = x_j^T β̂ + ε*_j, ε*_j ~ EDF(e_1 − ē, ..., e_n − ē). Fixes the design, but is not robust to model failure; assumes the ε_j are sampled from a population. Case resampling: (x_j, y_j)* ~ EDF{(x_1, y_1), ..., (x_n, y_n)}. Varying design X, but robust; assumes the (x_j, y_j) are sampled from a population. Usually design variation is no problem; it can prove awkward in designed experiments and when the design is sensitive
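The two schemes side by side, as a sketch on simulated straight-line data (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated straight-line data
n = 30
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_hat = ols(X, y)
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)      # leverages h_j
e = (y - X @ beta_hat) / np.sqrt(1 - h)            # modified residuals
e = e - e.mean()                                   # centred, as on the slide

R = 500
beta_model = np.empty((R, 2))
beta_case = np.empty((R, 2))
for r in range(R):
    # Model-based: fixed design, resampled centred residuals
    beta_model[r] = ols(X, X @ beta_hat + rng.choice(e, size=n, replace=True))
    # Case resampling: resample the (x_j, y_j) pairs
    idx = rng.integers(0, n, size=n)
    beta_case[r] = ols(X[idx], y[idx])
```

Comparing the spread of `beta_model` and `beta_case` shows how the two schemes can disagree when the error model is wrong.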
79 Regression Cement data Table: Cement data: y is the heat (calories per gram of cement) evolved while samples of cement set. The covariates are percentages by weight of four constituents: tricalcium aluminate x_1, tricalcium silicate x_2, tetracalcium alumino ferrite x_3 and dicalcium silicate x_4. (The table entries were lost in transcription.)
80 Regression Cement data Fit the linear model y = β_0 + β_1 x_1 + β_2 x_2 + β_3 x_3 + β_4 x_4 + ε and apply case resampling. The covariates are compositional: x_1 + ... + x_4 ≈ 100%, so X is almost collinear; the smallest eigenvalue of X^TX is l_5 (numerical value lost in transcription). A plot of β̂*_1 against the smallest eigenvalue of X*^TX* reveals that var*(β̂*_1) is strongly variable. Relevant subset for case resampling: post-stratification of the output based on l*_5?
81 Regression Cement data (Figure: β̂*_1 and β̂*_2 plotted against the smallest eigenvalue of X*^TX*.)
82 Regression Cement data Table: Standard errors of the linear regression coefficients β̂_0, β̂_1, β̂_2 for the cement data: normal theory; model-based resampling, R = 999; case resampling, all R = 999; case resampling restricted to subsets. Theoretical and error resampling assume homoscedasticity. The resampling results use R = 999 samples, but the last two rows are based only on those samples with the middle 500 and the largest 800 values of the smallest eigenvalue l*_5. (The numerical entries were lost in transcription.)
83 Regression Survival data Data on survival % (y) for rats at different doses (x). Linear model: log(survival) = β_0 + β_1 dose. (Figure: survival % and log survival % against dose; the data values were lost in transcription.)
84 Regression Survival data
Case resampling. Replication of the outlier: none (0), once (1), two or more.
Model-based sampling including the residual would lead to a change in the intercept but not the slope.
Figure: estimated slope plotted against the residual sum of squares for the bootstrap samples.
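The role of the outlier can be made concrete with a small bookkeeping sketch (hypothetical, not from the course): under case resampling each bootstrap sample may omit or replicate any given case, so one can simply count how often a flagged case appears.

```python
import random

def outlier_replication_counts(n, outlier_index, R=999, seed=1):
    """For R case-resampling draws from n cases, count how many times the
    flagged case (e.g. the outlier) appears in each bootstrap sample."""
    rng = random.Random(seed)
    counts = []
    for _ in range(R):
        idx = [rng.randrange(n) for _ in range(n)]
        counts.append(idx.count(outlier_index))
    return counts
```

Each case appears on average once per sample, is absent with probability (1 − 1/n)^n ≈ 0.37, and is replicated two or more times roughly a quarter of the time; grouping the slope replicates by this count reproduces the none / once / two-or-more split described above.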
85 Regression Generalized linear model
Response may be binomial, Poisson, gamma, normal, ...: y_j has mean µ_j and variance φV(µ_j), where g(µ_j) = x_j^T β is the linear predictor and g(·) is the link function.
MLE β̂, fitted values µ̂_j, Pearson residuals r_Pj = (y_j − µ̂_j) / {V(µ̂_j)(1 − h_j)}^{1/2}, approximately with mean 0 and variance φ.
Bootstrapped responses: y*_j = µ̂_j + V(µ̂_j)^{1/2} ε*_j, where ε*_j ~ EDF(r_P1 − r̄_P, ..., r_Pn − r̄_P).
However it is possible that y*_j ∉ {0, 1, 2, ...}, and the r_Pj may not be exchangeable, so stratified resampling may be needed.
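As a hedged sketch of this residual scheme, consider the simplest non-trivial case: a one-parameter Poisson model with constant mean, so V(µ) = µ and every leverage is h_j = 1/n. The function name and data are hypothetical:

```python
import random

def pearson_resample_poisson_mean(y, R=999, seed=1):
    """Nonparametric GLM residual bootstrap, sketched for a one-parameter
    Poisson model: constant mean, V(mu) = mu, leverages h_j = 1/n.
    Returns R bootstrap replicates of the refitted mean."""
    rng = random.Random(seed)
    n = len(y)
    mu = sum(y) / n                                   # MLE of the Poisson mean
    rp = [(yj - mu) / (mu * (1 - 1.0 / n)) ** 0.5 for yj in y]  # Pearson residuals
    rbar = sum(rp) / n
    rc = [r - rbar for r in rp]                       # centred: r_Pj - rbar_P
    reps = []
    for _ in range(R):
        # bootstrapped responses y*_j = mu-hat + V(mu-hat)^(1/2) * eps*_j;
        # note these need not lie in {0, 1, 2, ...}
        ystar = [mu + mu ** 0.5 * rng.choice(rc) for _ in range(n)]
        reps.append(sum(ystar) / n)                   # refit on the y*_j
    return reps
```

With covariates the same recipe applies, except that µ̂_j, V(µ̂_j) and h_j vary case by case and the refit is a full GLM fit.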
86 Regression AIDS data
Log-linear model: the number of reports in row j and column k follows a Poisson distribution with mean µ_jk = exp(α_j + β_k).
Log link and variance function: g(µ_jk) = log µ_jk = α_j + β_k, var(y_jk) = φV(µ_jk) = 1 × µ_jk.
Pearson residuals: r_jk = (y_jk − µ̂_jk) / {µ̂_jk(1 − h_jk)}^{1/2}.
Model-based simulation: Y*_jk = µ̂_jk + µ̂_jk^{1/2} ε*_jk.
87 Regression
Table: UK AIDS diagnoses by diagnosis period (year and quarter) and reporting-delay interval (quarters), with total reports to the end of each period. [Numeric entries not recovered.]
88 Regression AIDS data
Poisson two-way model: deviance 7165 on 413 df indicates strong overdispersion, φ > 1, so the Poisson model is implausible.
Residuals are highly inhomogeneous, so exchangeability is doubtful.
Figure: diagnoses, Pearson residuals r_P, and residual skewness.
89 Regression AIDS data: Prediction intervals
To estimate prediction error: simulate a complete table y*_jk; estimate the parameters from the incomplete y*_jk; obtain the estimated row totals µ̂*_{+,j} = exp(α̂*_j) Σ_{k unobs} exp(β̂*_k) and the corresponding "truth" y*_{+,j} = Σ_{k unobs} y*_jk.
The prediction error (y*_{+,j} − µ̂*_{+,j}) / µ̂*_{+,j}^{1/2} is studentized, so more nearly pivotal.
Form prediction intervals from the R replicates.
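The final step can be sketched as follows, assuming the R studentized replicates z*_r = (y*_{+,j} − µ̂*_{+,j}) / µ̂*_{+,j}^{1/2} have already been computed; the (R+1)-quantile convention and the function name are choices made here, not prescribed by the slides:

```python
def prediction_interval(zstar, mu_hat, alpha=0.05):
    """Prediction interval for a row total from R replicates of the
    studentized prediction error z* = (y*_+ - mu*_+) / sqrt(mu*_+):
    invert y_+ = mu_hat + sqrt(mu_hat) * z using empirical quantiles of z*."""
    z = sorted(zstar)
    R = len(z)
    lo = z[int((R + 1) * alpha / 2) - 1]          # (R+1)*alpha/2-th order statistic
    hi = z[int((R + 1) * (1 - alpha / 2)) - 1]
    return mu_hat + mu_hat ** 0.5 * lo, mu_hat + mu_hat ** 0.5 * hi
```

With R = 999 and alpha = 0.05 this picks the 25th and 975th ordered replicates.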
90 Regression AIDS data: Resampling plans
Resampling schemes:
parametric simulation, fitted Poisson model;
parametric simulation, fitted negative binomial model;
nonparametric resampling of r_P;
stratified nonparametric resampling of r_P.
Stratification is based on the skewness of the residuals, equivalent to stratifying the original data by the values of the fitted means: take strata for which µ̂_jk < 1, 1 ≤ µ̂_jk < 2, and µ̂_jk ≥ 2.
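The stratified scheme can be sketched as follows (illustrative; the function name and data are hypothetical): each residual is replaced by one drawn, with replacement, from its own stratum of fitted means.

```python
import random

def stratified_residual_resample(r, mu, R=999, seed=1):
    """Stratified nonparametric resampling of Pearson residuals: each
    residual is replaced by one drawn (with replacement) from its own
    stratum, with strata mu < 1, 1 <= mu < 2 and mu >= 2 as above."""
    rng = random.Random(seed)
    stratum = [0 if m < 1 else 1 if m < 2 else 2 for m in mu]
    pools = {s: [rj for rj, sj in zip(r, stratum) if sj == s]
             for s in set(stratum)}
    return [[rng.choice(pools[sj]) for sj in stratum] for _ in range(R)]
```

Restricting each draw to its own stratum preserves the inhomogeneity of the residuals instead of assuming they are exchangeable across the whole table.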
91 Regression AIDS data: Results
Deviance/df ratios for the four sampling schemes, R = 999: Poisson variation is inadequate.
Figure: deviance/df histograms for the Poisson, negative binomial, nonparametric and stratified nonparametric schemes, with 95% prediction limits for the diagnoses.
92 Regression AIDS data: Semiparametric model
More realistic: a generalized additive model µ_jk = exp{α(j) + β_k}, where α(j) is a locally-fitted smooth.
Same resampling plans as before. The 95% intervals are now generally narrower and shifted upwards.
Figure: diagnoses with 95% prediction limits under the semiparametric model.
93 Regression Key points
Key assumption: independence of cases.
Two main resampling schemes for regression settings: model-based and case resampling; intermediate schemes are possible.
These can help to reduce dependence on the assumptions needed for the regression model.
The same two basic approaches are also used in more complex settings (time series, ...), where the data are dependent.
94 Regression Summary
Bootstrap: simulation methods for frequentist inference.
Useful when standard assumptions are invalid (n small, data not normal, ...); a standard problem has a non-standard twist; a complex problem has no (reliable) theory; or (almost) anywhere else.
Have described: basic ideas; confidence intervals; tests; some approaches for regression.
95 Regression Books
Chernick (1999) Bootstrap Methods: A Practitioner's Guide. Wiley.
Davison and Hinkley (1997) Bootstrap Methods and their Application. Cambridge University Press.
Efron and Tibshirani (1993) An Introduction to the Bootstrap. Chapman & Hall.
Hall (1992) The Bootstrap and Edgeworth Expansion. Springer.
Lunneborg (2000) Data Analysis by Resampling: Concepts and Applications. Duxbury Press.
Manly (1997) Randomisation, Bootstrap and Monte Carlo Methods in Biology. Chapman & Hall.
Shao and Tu (1995) The Jackknife and Bootstrap. Springer.