A Simple, Graphical Procedure for Comparing Multiple Treatment Effects


Brennan S. Thompson (Department of Economics, Ryerson University; brennan@ryerson.ca)
Matthew D. Webb (Department of Economics, University of Calgary; matthewdwebb@gmail.com)

May 15, 2015

<<< PRELIMINARY AND INCOMPLETE >>>

Abstract: In this paper, we utilize a new graphical procedure to show how multiple treatment effects can be compared while controlling the familywise error rate (the probability of finding one or more spurious differences between the parameters of interest). Monte Carlo simulations suggest that this procedure adequately controls the familywise error rate in finite samples, and has average power nearly identical to a simple max-t procedure. We illustrate our proposed approach using data from a field experiment on different types of performance pay for teachers.

Keywords: multiple comparisons; familywise error rate; treatment effects; bootstrap

1 Introduction

In the case of comparing multiple treatments, the problem at hand is two-fold: (A) we want to know whether or not the effect of each treatment is different from zero, and (B) we want to know whether or not the effect of each treatment is different from that of any of the other treatments. In order to make the discussion of our problem more concrete, consider the following regression model:

$$Y_t = \beta_0 C_t + \sum_{i=1}^{k} \beta_i T_{i,t} + Z_t'\delta + U_t, \quad t = 1, \ldots, n, \qquad (1.1)$$

where C_t equals one if individual t belongs to the control group and zero otherwise, T_{i,t} equals one if individual t belongs to treatment group i ∈ {1, ..., k} and zero otherwise, Z_t is a vector of other characteristics for individual t, and U_t is an idiosyncratic error term for individual t. The (average marginal) treatment effect of the i-th treatment is defined as α_i ≡ β_i − β_0, for i ∈ {1, ..., k}. Thus, the first part of our problem involves testing

$$\alpha_i = 0, \quad \text{for each } i \in \{1, \ldots, k\}, \qquad (1.2)$$

while the second part of our problem involves testing

$$\alpha_i = \alpha_j, \quad \text{for each unique } (i, j) \in \{1, \ldots, k\} \times \{1, \ldots, k\}. \qquad (1.3)$$

Note that, since α_i = 0 is equivalent to β_i = β_0, and α_i = α_j is equivalent to β_i = β_j, our problem boils down to testing

$$\beta_i = \beta_j, \quad \text{for each unique } (i, j) \in K \times K, \qquad (1.4)$$

where K = {0, ..., k}. Thus, our problem can be seen to involve making a total of $\binom{k+1}{2}$ comparisons: the k comparisons implicit in (1.2), plus the $\binom{k}{2}$ comparisons implicit in (1.3). For example, with k = 2 treatments, we must make 3 comparisons: the comparison of the 2 treatment effects to zero, and the comparison between the 2 treatment effects. With k = 3 treatments, we must make 6 comparisons, and so on.

It is well known that, when conducting more than one hypothesis test at a given nominal level simultaneously, the probability of rejecting at least one true hypothesis (i.e., the familywise error rate) is often well in excess of that given nominal level. To illustrate the severity of this issue, we generate, for k ∈ {2, ..., 5}, one million samples of size n = 100(k + 1) from the model in (1.1) as follows. We set β_0 = ⋯ = β_k = 0, and assign 100 observations to the control group (i.e., ∑_{t=1}^{n} C_t = 100) and 100 observations to each of the k treatment groups (i.e., ∑_{t=1}^{n} T_{i,t} = 100 for each i ∈ {1, ..., k}). For each t, Z_t = 0 and U_t is an independent standard normal draw. Within each sample, we independently test (A) each of the k restrictions in (1.2), and (B) each of the $\binom{k}{2}$ restrictions in (1.3), using conventional t-tests at the 5% nominal level.

The rejection frequencies for these tests are shown in Figure 1. Specifically, the dash-dotted line shows the frequency of rejecting at least one of the k restrictions in (1.2), while the dashed line shows the frequency of rejecting at least one of the $\binom{k}{2}$ restrictions in (1.3). The solid line shows the empirical familywise error rate (i.e., the frequency of rejecting at least one of the $\binom{k+1}{2}$ restrictions in (1.4)). Note that, even with k = 2, the empirical familywise error rate is approximately 0.122; with k = 5, it is larger still (see Figure 1).
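To fix ideas, the following sketch (our illustration, not part of the original analysis) reproduces this exercise on a much smaller scale. It assumes Python with NumPy and SciPy, uses 10,000 rather than one million replications, and replaces the regression-based t-tests with equivalent unadjusted two-sample t-tests.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def empirical_fwer(k=2, n_per_group=100, reps=10_000, alpha=0.05, seed=0):
    """Estimate the familywise error rate over all C(k+1, 2) pairwise
    comparisons when every group mean is zero (all nulls true)."""
    rng = np.random.default_rng(seed)
    groups = range(k + 1)  # group 0 is the control, 1..k are treatments
    n_false = 0
    for _ in range(reps):
        samples = [rng.standard_normal(n_per_group) for _ in groups]
        # Reject the family if any unadjusted pairwise t-test rejects at level alpha.
        n_false += any(
            stats.ttest_ind(samples[i], samples[j]).pvalue < alpha
            for i, j in combinations(groups, 2)
        )
    return n_false / reps

for k in range(2, 6):
    print(f"k = {k}: empirical FWER = {empirical_fwer(k=k):.3f}")
```

Even at this reduced scale, the estimated error rates lie well above the 5% nominal level and grow with k, consistent with Figure 1.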

[Figure 1: Empirical Familywise Error Rates for Independent t-tests]

In recognition of this issue, a wide variety of multiple testing procedures, such as max-t procedures (see, e.g., Romano and Wolf, 2005, and Section 3 below), have been developed to control the familywise error rate (or other generalized error rates, such as the false discovery rate; see Benjamini and Hochberg, 1995). For the specific problem of making multiple pairwise comparisons that we are interested in here, Bennett and Thompson (2015), hereafter BT, have recently proposed a graphical procedure that identifies differences between a pair of parameters through the non-overlap of the so-called uncertainty intervals (see Section 2) for those parameters.¹ This graphical method, which can be seen as a resampling-based generalization of Tukey's (1953) method, is appealing because it offers users more than a 0-1 ("Yes"-"No") decision regarding differences between parameters. Indeed, this method allows users to determine both statistical and practical significance of pairwise differences, while also providing them with a measure of uncertainty concerning the locations of the individual parameters.

¹ Note that BT denote the total number of parameters by k, while the total number of parameters we consider is k + 1.

The remainder of this paper is organized as follows. In Section 2, we provide a brief overview of the procedure of BT. Section 3 describes the results of a set of Monte Carlo simulations designed to examine the finite-sample performance of this procedure in the current context. In Section 4, we present the results of an empirical example in which the different treatments are different types of performance pay for teachers, using data from a field experiment in India.

2 The Overlap Procedure

In this section, we briefly summarize how the graphical procedure of BT can be applied to the problem at hand. For complete details, including proofs of the main results, see BT.

The parameters of interest, β_i = β_i(P), i ∈ K, are unknown but are presumed to be consistently estimable from observable data generated from some (unknown) probability mechanism P. That is, for each i ∈ K, we have available to us a √n-consistent estimator β̂_{n,i} of β_i. The overlap procedure presents each parameter estimate β̂_{n,i} together with its corresponding uncertainty interval,

$$C_{n,i}(\gamma) = \left[\hat{\beta}_{n,i} \pm \gamma \, \mathrm{se}\left(\hat{\beta}_{n,i}\right)\right], \qquad (2.1)$$

whose length is determined by the parameter γ (discussed below) and se(β̂_{n,i}), the standard error of β̂_{n,i}. In what follows, we denote the lower and upper endpoints of the interval C_{n,i} by L_{n,i} and U_{n,i}, respectively.

The uncertainty intervals can then be used to make inferences about the ordering of the parameters of interest as follows: We infer that β_i < β_j if β̂_{n,i} < β̂_{n,j} and the uncertainty intervals for β_i and β_j are non-overlapping.

Note that, as a function of γ, the probability of declaring at least one significant difference between parameters, when all k parameters are equal, is as follows:

$$Q_n(\gamma; P) = \mathrm{Prob}_P\left[\max_{i,j \in K}\left\{L_{n,i}(\gamma) - U_{n,j}(\gamma)\right\} > 0\right]. \qquad (2.2)$$

Thus, in order to control the familywise error rate at nominal level α, the ideal choice of γ when all k parameters are equal is

$$\gamma_n(\alpha) = \inf\left\{\gamma : Q_n(\gamma; P) \leq \alpha\right\}. \qquad (2.3)$$

Unfortunately, because P is unknown, we cannot compute Q_n(γ; P), and γ_n(α) is thus infeasible. We therefore turn our attention to estimating γ_n(α) so as to achieve at least asymptotic control of the familywise error rate.

Constructing a feasible counterpart to γ_n(α) of course requires that we first formulate an empirical estimator of Q_n(γ; P). Towards this end, let P̂_n (typically the empirical distribution) be an estimate of P. The idea is to estimate Q_n(γ; P) using its bootstrap analogue

$$Q_n(\gamma; \hat{P}_n) = \mathrm{Prob}_{\hat{P}_n}\left[\max_{i,j \in K}\left\{L^*_{n,i}(\gamma) - U^*_{n,j}(\gamma)\right\} > 0\right], \qquad (2.4)$$

where L*_{n,i}(γ) and U*_{n,i}(γ) are, respectively, the lower and upper endpoints of

$$C^*_{n,i}(\gamma) = \left[\beta^*_{n,i} \pm \gamma \, \mathrm{se}\left(\beta^*_{n,i}\right)\right], \qquad (2.5)$$

with β*_{n,i} being the estimate of β_i obtained by resampling from (1.1) with the restriction imposed that β_0 = ⋯ = β_k. This, in turn, leads naturally to an estimator of γ_n(α):

$$\gamma^*_n(\alpha) = \inf\left\{\gamma : Q_n(\gamma; \hat{P}_n) \leq \alpha\right\}. \qquad (2.6)$$

Plugging γ*_n(α) into C_{n,i}(γ) gives rise to a simple graphical device for visualizing statistically and practically significant differences. BT show that, under quite general conditions, this method (i) controls the familywise error rate asymptotically, and (ii) is consistent, in the sense that any (true) differences between parameters are inferred with probability one asymptotically.
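To make the calibration step concrete, the following is a minimal sketch (ours, not BT's implementation) of how the bootstrap quantities in (2.4)–(2.6) might be computed. It assumes Python with NumPy and that arrays `boot_est` and `boot_se`, each of shape (B, k+1), hold the restricted-bootstrap estimates β*_{n,i} and their standard errors for each of B bootstrap draws; the resampling step that produces them is not shown.

```python
import numpy as np

def q_hat(gamma, boot_est, boot_se):
    """Bootstrap analogue of Q_n(gamma; P): the proportion of bootstrap draws
    in which at least one pair of uncertainty intervals fails to overlap."""
    lower = boot_est - gamma * boot_se   # L*_{n,i}(gamma), shape (B, k+1)
    upper = boot_est + gamma * boot_se   # U*_{n,i}(gamma)
    # Non-overlap for the ordered pair (i, j) means L*_{n,i} - U*_{n,j} > 0.
    diff = lower[:, :, None] - upper[:, None, :]      # shape (B, k+1, k+1)
    any_nonoverlap = (diff > 0).any(axis=(1, 2))
    return any_nonoverlap.mean()

def gamma_star(boot_est, boot_se, alpha=0.05, grid=np.linspace(0.0, 5.0, 501)):
    """Smallest gamma on the grid whose estimated Q_n is at most alpha, cf. (2.6)."""
    for g in grid:
        if q_hat(g, boot_est, boot_se) <= alpha:
            return g
    return grid[-1]
```

Because Q_n(γ; P̂_n) is non-increasing in γ, the first grid point satisfying the constraint attains the infimum up to the grid's resolution; a bisection search could be used instead of the fixed grid shown here.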

3 Simulation Evidence

We now examine the finite-sample performance of the overlap procedure described above by way of several Monte Carlo experiments. As in BT, we consider a max-t procedure designed to control the familywise error rate at the nominal level α as a benchmark. Specifically, this procedure rejects the restriction that β_i = β_j whenever

$$T_{n,(i,j)} = \frac{\left|\hat{\beta}_{n,i} - \hat{\beta}_{n,j}\right|}{\sqrt{\left[\mathrm{se}\left(\hat{\beta}_{n,i}\right)\right]^2 + \left[\mathrm{se}\left(\hat{\beta}_{n,j}\right)\right]^2}}$$

is greater than the 1 − α quantile of max_{i ≠ j} T*_{n,(i,j)}, where

$$T^*_{n,(i,j)} = \frac{\left|\beta^*_{n,i} - \beta^*_{n,j}\right|}{\sqrt{\left[\mathrm{se}\left(\beta^*_{n,i}\right)\right]^2 + \left[\mathrm{se}\left(\beta^*_{n,j}\right)\right]^2}},$$

and, as above, β*_{n,i} is the estimate of β_i obtained by resampling from (1.1) with the restriction imposed that β_0 = ⋯ = β_k.
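For comparison, here is an analogous sketch (again ours, with the same assumed bootstrap arrays as in the previous sketch) of the max-t benchmark just described.

```python
import numpy as np
from itertools import combinations

def max_t_rejections(est, se, boot_est, boot_se, alpha=0.05):
    """Reject beta_i = beta_j whenever the observed pairwise t-statistic exceeds
    the (1 - alpha) quantile of the bootstrap distribution of the maximal statistic."""
    k1 = len(est)  # number of parameters, k + 1
    pairs = list(combinations(range(k1), 2))
    t_obs = {
        (i, j): abs(est[i] - est[j]) / np.sqrt(se[i] ** 2 + se[j] ** 2)
        for i, j in pairs
    }
    # Maximal pairwise statistic within each bootstrap draw.
    t_max_boot = np.array([
        max(abs(be[i] - be[j]) / np.sqrt(bs[i] ** 2 + bs[j] ** 2) for i, j in pairs)
        for be, bs in zip(boot_est, boot_se)
    ])
    crit = np.quantile(t_max_boot, 1 - alpha)
    return {pair: t > crit for pair, t in t_obs.items()}
```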

In what follows, we use 199 i.i.d. bootstrap replications for both the max-t procedure and the overlap procedure.

The design of our simulations is the same as the one described in Section 1, but with several variations. First, we fix k = 5 and generate 10,000 samples of size n, with n ∈ {300, 600, 1200} (i.e., when n = 300, there are 50 observations in the control group and 50 observations in each of the 5 treatment groups, and so on). Second, we set β_i = θ(i + 1), with θ ∈ {0, 0.01, 0.02, ..., 1}, which allows us to examine both control of the familywise error rate (when θ = 0) and power (when θ > 0). Finally, we consider two different specifications for the error term distribution: a homoskedastic case in which all of the errors are drawn from the standard normal distribution, and a heteroskedastic case in which the errors for observations assigned to the control group are standard normal, while the errors for observations assigned to treatment group i ∈ {1, ..., k} are normal with mean zero and variance i.²

[Table 1: Empirical familywise error rates for the overlap and max-t procedures under homoskedasticity and heteroskedasticity, by sample size n.]

Table 1 shows that control of the familywise error rate for both the overlap procedure and the max-t procedure is quite close to the nominal level α = 0.05 at all of the sample sizes considered, in both the homoskedastic and heteroskedastic cases. Interestingly, although the differences are quite small, the rejection rates for the overlap procedure are uniformly smaller than the rejection rates for the max-t procedure.

² In both cases, we estimate the parameters using ordinary least squares and obtain heteroskedasticity-consistent standard errors using the method of White (1980).
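The simulation design described above can be sketched as follows (our illustration; the group-i error variance is taken to be i, exactly as stated in the text).

```python
import numpy as np

def simulate_sample(theta, n_per_group=50, k=5, heteroskedastic=False, seed=0):
    """Generate one sample from the power-simulation design: group means
    beta_i = theta * (i + 1) for each group i in K = {0, ..., k}, with either
    homoskedastic N(0, 1) errors or group-i errors of variance i."""
    rng = np.random.default_rng(seed)
    y, group = [], []
    for i in range(k + 1):
        beta_i = theta * (i + 1)
        sd = np.sqrt(i) if (heteroskedastic and i > 0) else 1.0
        y.append(beta_i + sd * rng.standard_normal(n_per_group))
        group.append(np.full(n_per_group, i))
    return np.concatenate(y), np.concatenate(group)

# Example: one heteroskedastic sample of size n = 300 with theta = 0.5.
y, g = simulate_sample(theta=0.5, heteroskedastic=True)
```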

[Figure 2: Average power for overlap and max-t procedures. (a) Homoskedastic case; (b) Heteroskedastic case.]

In order to compare the power of the two procedures, we follow BT in examining average power, which is the proportion of false restrictions that are rejected. Figures 2a and 2b display average power as a function of θ in the homoskedastic and heteroskedastic cases, respectively. Within these figures, black lines correspond to the overlap procedure and red lines correspond to the max-t procedure, while lines that are solid, dashed, and dotted correspond to n = 300, n = 600, and n = 1200, respectively. Evidently, both procedures have nearly identical average power over θ and at all of the sample sizes considered, in both the homoskedastic and heteroskedastic cases.

4 Empirical Example

As an illustration of the procedure discussed above, we revisit a field experiment on teacher performance pay in India conducted by Muralidharan and Sundararaman (2011), hereafter MS.

Specifically, MS run two sets of experiments in which they offer teachers incentive pay conditioned on their students' scores. In the analysis, they have outcomes from three separate groups of schools: a control group (Control), a group in which teachers were paid based on the scores of their own students (Individual Incentive), and a group in which teachers were paid based on the performance of all students at their school (Group Incentive). The experiment is quite well designed, and readers can find additional details in MS.

Among other things, MS test the impact of the two interventions on combined math and language (Telugu) scores. The experiment ran for two years. In the majority of the analysis, year-two scores are analyzed, often controlling for the year-zero, or pre-treatment, scores. While most of the paper is about the average impact of the interventions, in Table 8 the authors compare the impact of the group incentive to the impact of the individual incentive. To make these comparisons, they first estimate the following model:

$$\mathit{Score2}_t = \alpha_0 + \alpha_1 \mathit{Individual}_t + \alpha_2 \mathit{Group}_t + \sum_{j=1}^{49} \delta_j \mathit{County}_{j,t} + \delta_{50} \mathit{Score0}_t + U_t, \qquad (4.1)$$

where Score2 and Score0 are, respectively, the combined math and language score in year 2 and year 0, Group is an indicator variable for being in the group incentive, Individual is an indicator for being in the individual incentive, and {County_j}_{j=1}^{49} is a set of indicator variables for all but one of the 50 counties (Mandals) in which the experiment was conducted. The standard errors are clustered by school. Next, MS independently test the following three restrictions:

MS1: α_1 = 0
MS2: α_2 = 0

MS3: α_1 = α_2

The t-statistics obtained for the tests of these three restrictions are 4.84, 2.71, and −1.91, respectively. Thus, MS conclude that both incentives have significantly positive treatment effects, and that the two treatment effects are statistically different from one another.

We now consider an application of the overlap procedure discussed above. To do so, we first need to modify the estimating equation so that the mean of the control group is estimated as a parameter, rather than being absorbed into the intercept. Specifically, we estimate the following equation:

$$\mathit{Score2}_t = \beta_0 \mathit{Control}_t + \beta_1 \mathit{Individual}_t + \beta_2 \mathit{Group}_t + \sum_{j=1}^{49} \delta_j \mathit{County}_{j,t} + \delta_{50} \mathit{Score0}_t + U_t, \qquad (4.2)$$

where Control is an indicator variable for membership in the control group. Notice that α_0 = β_0 and α_i = β_i − β_0, for i ∈ {1, 2}.

Using 199 i.i.d. bootstrap replications, we obtain γ*_n(0.05) = 0.505, and produce Figure 3, which can be interpreted as follows:

- Since C_{n,0} and C_{n,1} do not overlap, we can reject β_0 = β_1, or equivalently, α_1 = 0 (MS1).
- Since C_{n,0} and C_{n,2} overlap, we cannot reject β_0 = β_2, or equivalently, α_2 = 0 (MS2).
- Since C_{n,1} and C_{n,2} overlap, we cannot reject β_1 = β_2, or equivalently, α_1 = α_2 (MS3).

Thus, while our conclusion on the first restriction (MS1) is the same as that of MS, our conclusions on the second and third restrictions (MS2 and MS3) are different.
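To illustrate the mechanics behind this reading of Figure 3, the sketch below applies the non-overlap rule with the calibrated γ*_n(0.05) = 0.505. The point estimates and standard errors shown are hypothetical placeholders chosen only to mimic the qualitative pattern just described; they are not the estimates from MS.

```python
import numpy as np
from itertools import combinations

def overlap_report(names, est, se, gamma):
    """For each pair of parameters, report whether their uncertainty intervals
    C_{n,i}(gamma) = [est_i - gamma*se_i, est_i + gamma*se_i] overlap."""
    lower = np.asarray(est) - gamma * np.asarray(se)
    upper = np.asarray(est) + gamma * np.asarray(se)
    for i, j in combinations(range(len(names)), 2):
        overlap = (lower[i] <= upper[j]) and (lower[j] <= upper[i])
        verdict = "overlap -> do not reject equality" if overlap else "no overlap -> reject equality"
        print(f"{names[i]} vs {names[j]}: {verdict}")

# Hypothetical inputs (NOT the MS estimates): Control, Individual Incentive,
# and Group Incentive coefficients with their clustered standard errors.
overlap_report(
    names=["Control", "Individual", "Group"],
    est=[0.00, 0.28, 0.16],   # placeholder point estimates
    se=[0.04, 0.05, 0.30],    # placeholder standard errors
    gamma=0.505,
)
```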

[Figure 3: Uncertainty Intervals for Empirical Example]

We also consider an alternative method of visualizing these results in Figure 4, where all figures are expressed in marginal terms. That is, we subtract β̂_{n,0} from the point estimates and the endpoints of the uncertainty intervals. In addition, we no longer display a point estimate or uncertainty interval for β_0, but instead include a dotted horizontal line at the value of U_{n,0} − β̂_{n,0}, the upper endpoint of the re-centered version of C_{n,0}.
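The re-centering underlying Figure 4 is a simple transformation of the same objects; a short sketch (ours, again using placeholder values rather than the MS estimates) follows.

```python
import numpy as np

# Placeholder estimates and standard errors (as in the previous sketch).
est = np.array([0.00, 0.28, 0.16])
se = np.array([0.04, 0.05, 0.30])
gamma = 0.505

recentered_est = est - est[0]                   # estimates in marginal terms
recentered_lower = est - gamma * se - est[0]    # re-centered L_{n,i}
recentered_upper = est + gamma * se - est[0]    # re-centered U_{n,i}
reference_line = recentered_upper[0]            # dotted line at re-centered U_{n,0}
```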

[Figure 4: Re-Centered Uncertainty Intervals for Empirical Example]

References

Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57(1): 289–300.

Bennett, C. J. and Thompson, B. S. (2015). Graphical procedures for multiple comparisons under general dependence. Unpublished manuscript.

Muralidharan, K. and Sundararaman, V. (2011). Teacher performance pay: Experimental evidence from India. Journal of Political Economy, 119(1): 39–77.

Romano, J. P. and Wolf, M. (2005). Exact and approximate stepdown methods for multiple hypothesis testing. Journal of the American Statistical Association, 100(469): 94–108.

Tukey, J. W. (1953). The problem of multiple comparisons. In The Collected Works of John W. Tukey VIII. Multiple Comparisons: 1948–1983. Chapman and Hall, New York.

White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48(4): 817–838.
