Exact Nonparametric Inference for a Binary Endogenous Regressor


Exact Nonparametric Inference for a Binary Endogenous Regressor

Brigham R. Frandsen

December 5, 2013

Abstract

This paper describes a randomization-based estimation and inference procedure for the distribution or quantiles of potential outcomes with a binary treatment and instrument. The method imposes no parametric model for the treatment effect, and remains valid for small n, a weak instrument, or inference on tail quantiles, when conventional large-sample methods break down. The method is illustrated using simulations and data from a randomized trial of college student incentives and services.

1 Introduction

Instrumental variables estimation and inference can be unreliable, particularly when instruments are weak, samples are small, or when analyzing effects on the tails of outcome distributions. In these settings, the asymptotic approximations undergirding estimation and inference break down, and estimates can be substantially biased and confidence intervals misleading. This paper develops an exact, finite-sample approach to instrumental variables estimation and inference that remains valid for weak instruments, small samples, tails of the outcome distribution, and other settings where large-sample approximations are poor.

The approach imposes no parametric model for causal effects, and makes no distributional assumptions on the outcome variable. The paper considers a treatment effects setting where the goal is to estimate the effects of a possibly endogenous binary regressor using an instrumental variable. This setting, while specific, is widespread in empirical work and has received much attention in the theoretical literature (Ashenfelter, 1978; Angrist, 1990; Imbens and Angrist, 1994; Hahn, Todd, and van der Klaauw, 2001; Huber and Mellace, 2011). The approach adopts the standard treatment effects assumptions made in Angrist, Imbens, and Rubin's (1996) large-sample framework, and derives randomization-based estimation and inference procedures that remain valid when large-sample approximations break down.

The heart of the estimation and inference approach, and the main theoretical result of this paper, is the exact joint distribution of observed order statistics in an instrumental variables treatment effects setting. The distribution depends on the underlying quantiles of potential outcomes, which are the estimands, and thus serves as the basis for maximum likelihood estimation of the quantiles of potential outcomes. The distribution also implies critical values for the observed statistics that can be used to test hypotheses and construct exact confidence intervals for the quantiles of potential outcomes. The quantiles or, equivalently, the distribution of potential outcomes in turn can be used to characterize other parameters of interest, such as quantile treatment effects or local average treatment effects.

In addition to the theoretical results, the paper illustrates the proposed method using simulations and an application based on a randomized trial of college student services and incentives analyzed by Angrist, Lang, and Oreopoulos (2009). The simulations compare the performance of conventional large-sample instrumental variables methods to the proposed exact methods and show that even in scenarios where conventional point estimates are substantially biased and conventional confidence intervals have severe under-coverage, the exact methods perform reliably.

This paper complements previous work on instrumental variables estimation and inference in settings where conventional asymptotic approximations may be poor. Bound, Jaeger, and Baker (1995) and a large subsequent literature established that weak instruments can lead to bias and incorrect inference, and proposed estimators and inference procedures based on alternative large-sample approximations. Imbens and Rosenbaum (2005) considered a finite-sample approach to weak instruments, proposing permutation inference and estimation procedures based on tests of constant treatment effects models in an instrumental variables randomization inference framework similar to Rosenbaum (2002) and Greevy, Silber, Cnaan, and Rosenbaum (2004). The current paper builds on these strands of the literature by developing exact procedures but without imposing parametric models for the effects of treatment.

This paper also relates to previous work on tail quantile estimation and quantile estimation under endogeneity. Chernozhukov (2005) and Chernozhukov and Fernández-Val (2011) showed that conventional asymptotic approximations are poor when estimating tail quantiles, and proposed alternative large-sample inference procedures for the effects of an exogenous regressor. The inference procedure in this paper is also valid for estimating extreme quantiles and builds on that work by allowing for the regressor to be endogenous and remaining valid for small samples. Chernozhukov and Hansen (2005) introduced an instrumental variables model for quantile regression, and Chernozhukov, Hansen, and Jansson (2009) developed finite-sample inference procedures for the model under a restriction that a unit's rank is invariant across potential outcomes. Frölich and Melly (2013) develop estimation and inference procedures for treatment effects under endogeneity that do not require this assumption, but in a large-sample setting. The approach here builds on these papers by not requiring rank invariance, similar to Frölich and Melly (2013), but developing exact finite-sample results, as in Chernozhukov, Hansen, and Jansson (2009). The approach described in this paper can be viewed as an alternative method, or as a check on the assumptions invoked in large-sample approaches.

The focus in this paper on quantiles or distributions captures potentially interesting heterogeneity across the distribution of outcomes, and may lead to inference that is more robust with respect to outliers than inference on averages (Koenker and Bassett, 1978; Koenker, 2005). Since the approach makes no parametric assumptions on the treatment effect, it may be used as a tool to motivate specific functional form assumptions, which would then allow more powerful inference if those assumptions hold.

2 Econometric Framework

Consider a setting where the goal is to make inferences about the effects of a binary treatment on a scalar outcome or response variable. The treatment may be endogenous in that treatment status is not under the control of the researcher, and may therefore be correlated with unobserved determinants of the outcome. A binary instrumental variable is available, however, which will allow inference on the effects of treatment for the subgroup of units whose treatment status is determined by the instrument. The econometric framework adopted here closely follows Angrist, Imbens, and Rubin (1996), and the assumptions are equivalent to those described there. The key difference is that the estimation and inference procedures proposed here hold exactly in finite samples.

Let n be the number of observed units, and let Z be the random n-vector of observations on the binary instrumental variable. Following the standard treatment effects terminology borrowed from clinical trials, Z will be referred to as treatment assignment (to be distinguished from treatment status). Let d(z) be the n-vector of treatment states that would be observed if the assignment vector were Z = z. If some units do not comply with their treatment assignment, d(z) will differ from z. Let y(z, d) be the vector of responses that would be observed if the assignment vector were z and the treatment states d. Observed data therefore consist of the vectors Z, D = d(Z), and Y = y(Z, d(Z)). In this framework, the functions y(z, d) and d(z) are fixed characteristics of the n units in the experiment, and all variability in the observed data is due to the random vector of treatment assignments Z.
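To emphasize that the assignment vector is the only source of randomness, the toy sketch below (hypothetical Python code, not from the paper; all names are illustrative) enumerates every possible assignment of m out of n units for a tiny example and tabulates the exact randomization distribution of one observed statistic. The statistic computed here, the number of units assigned to treatment that do not take it, plays the role of P in Theorem 1 below.

```python
from itertools import combinations
from collections import Counter

# Fixed unit characteristics for a tiny example: n = 4 units, m = 2 assigned to treatment.
# d0/d1 are treatment states under assignment 0/1; units 0-2 are compliers, unit 3 is a never-taker.
d0 = [0, 0, 0, 0]
d1 = [1, 1, 1, 0]
n, m = 4, 2

dist = Counter()
for treated in combinations(range(n), m):          # all (n choose m) equally likely assignments
    Z = [1 if i in treated else 0 for i in range(n)]
    D = [d1[i] if Z[i] else d0[i] for i in range(n)]
    # Number of units assigned to treatment that do not take the treatment.
    stat = sum(Z[i] * (1 - D[i]) for i in range(n))
    dist[stat] += 1

total = sum(dist.values())
print({k: v / total for k, v in sorted(dist.items())})   # {0: 0.5, 1: 0.5}
```

The never-taker is included in exactly half of the six possible assignments, so the statistic equals one with probability one half; outcomes are handled the same way, with Y = y(Z, d(Z)) computed from the fixed potential outcomes for each enumerated assignment.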

Let z_i denote the i-th element of the vector z, and likewise for other vectors. This paper makes the following assumptions:

A1: Random Assignment of the Instrument. Z is randomly assigned; that is, Pr(Z = z) = Pr(Z = z̃) for all z and z̃ such that ι′z = ι′z̃, where ι is an n-vector of ones.

A2: Stable Unit Treatment Value Assumption (SUTVA). a. If z_i = z̃_i, then d_i(z) = d_i(z̃). b. If z_i = z̃_i and d_i = d̃_i, then y_i(z, d) = y_i(z̃, d̃).

Assumption A1 means treatment assignment is random and does not depend on potential outcomes. This assumption is identical to the random assignment assumption in Angrist, Imbens, and Rubin (1996), and is equivalent to the standard independence assumption in conventional instrumental variables frameworks. Assumption A2 means unit i's response depends only on its own treatment assignment and treatment status; changing other units' assignment and status has no effect. This assumption is identical to assumption 1 in Angrist, Imbens, and Rubin (1996).

Under SUTVA we can write y_i(z, d) and d_i(z) as y_i(z_i, d_i) and d_i(z_i), respectively. Were treatment assignment and treatment status equivalent, we could further simplify y_i(z_i, d_i) = y_i(d_i), and inference on treatment effects would proceed with no further assumptions. In the case of imperfect compliance, however, three additional assumptions are needed:

A3: Exclusion Restriction. y(z, d) = y(z̃, d) for all z, z̃ and for all d.

Assumption A3 means the treatment assignment has no effect on outcomes other than through treatment status. This assumption is identical to the exclusion restriction in Angrist, Imbens, and Rubin (1996). Under this assumption, we can write potential responses in terms of d only: y(z, d) = y(d), and if we then combine it with SUTVA (assumption A2), we can write y_i(d) = y_i(d_i). Assumption A3 restricts how treatment assignment affects outcomes. The next assumption restricts how treatment assignment affects treatment status:

A4: Monotonicity. d_i(1) ≥ d_i(0) for all i ∈ {1, ..., n}.

Assumption A4 means treatment assignment may affect treatment status for some units and not others, but where it does have an effect, it is only in one direction. In other words, treatment assignment never induces a unit not to take the treatment. This assumption is identical to the monotonicity assumption in Angrist, Imbens, and Rubin (1996). In some designs this assumption holds automatically. For example, if non-compliance is one-sided, where the treatment is not available to units assigned to the control group, or where units in the treatment group can be forced to be exposed to the treatment, A4 automatically holds.

The monotonicity assumption allows us to categorize each of the n units into one of three groups, always-takers, never-takers, and compliers, depending on potential treatment status, d_i(z). Define the categories as:

at = {i : d_i(1) = d_i(0) = 1},
nt = {i : d_i(1) = d_i(0) = 0},
c = {i : d_i(1) > d_i(0)}.

Always-takers are those units who take the treatment no matter how they are assigned. Never-takers do not take the treatment no matter how they are assigned. Compliers take the treatment if they are assigned to it, but not otherwise. Let the number of units in these groups be n_at, n_nt, and n_c, with n_at + n_nt + n_c = n. The final assumption says that the number of compliers is greater than zero:

A5: Non-zero compliance. n_c > 0.

Assumption A5 means the instrument affects treatment status for at least one unit, and it is necessary for the parameters of interest (described below) to be well defined. This assumption is equivalent to assumption 4 in Angrist, Imbens, and Rubin (1996). Assumption A5 is a testable assumption, and the inference procedure proposed in this paper includes a test of this assumption as an intermediate step.
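As a concrete illustration (hypothetical code, not part of the paper; the function name and labels are mine), the categorization can be written directly in terms of the potential treatment states, which of course are only jointly visible in simulated data:

```python
import numpy as np

def classify_units(d0: np.ndarray, d1: np.ndarray) -> np.ndarray:
    """Label each unit as always-taker ('at'), never-taker ('nt'), or complier ('c').

    Assumes monotonicity (A4): d1 >= d0 elementwise, so the defier case d0 > d1
    cannot occur. Both potential treatment states are visible only in a simulation.
    """
    return np.where((d0 == 1) & (d1 == 1), "at",
                    np.where((d0 == 0) & (d1 == 0), "nt", "c"))

# Example: one-sided noncompliance with 23 compliers and 7 never-takers.
d0 = np.zeros(30, dtype=int)
d1 = np.concatenate([np.ones(23, dtype=int), np.zeros(7, dtype=int)])
vals, counts = np.unique(classify_units(d0, d1), return_counts=True)
print({str(v): int(c) for v, c in zip(vals, counts)})   # {'c': 23, 'nt': 7}
```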

The parameters of interest in this paper are the quantiles of compliers' potential outcomes:

\[
Q_c^0(\tau) = \inf\{x : F_c^0(x) \ge \tau\}, \qquad Q_c^1(\tau) = \inf\{x : F_c^1(x) \ge \tau\},
\]

where

\[
F_c^d(x) = \frac{1}{n_c} \sum_{i=1}^{n} c_i \, 1\{y_i(d) \le x\}, \qquad d \in \{0, 1\},
\]

denotes the cdf of compliers' potential outcomes, and c_i is an indicator for unit i being a complier. The focus is on compliers' outcomes because they are the units whose treatment status is actually affected by the instrument, and without making further assumptions, inferences can be drawn on the effects of treatment only for them, a point made by Imbens and Angrist (1994) and Angrist, Imbens, and Rubin (1996) in a large-sample instrumental variables framework. The quantiles of potential outcomes, or alternatively the cdf, summarize what can be learned from the data about how the treatment affects units' responses. Distribution functions and quantiles are often of interest in their own right, but they are also the basis for inference on other objects, such as quantile treatment effects (i.e., the difference between, say, the τ-quantiles of potential outcomes), average treatment effects, or tests of stochastic dominance.

3 Estimation and Inference

In the treatment effects framework under the assumptions described in the previous section, the distribution of compliers' potential outcomes can be expressed as a function of directly estimable quantities, as shown by Imbens and Rubin (1997):

\[
F_c^1(x) = \frac{n_{at} + n_c}{n_c} F_{at \cup c}^1(x) - \frac{n_{at}}{n_c} F_{at}^1(x), \tag{1a}
\]
\[
F_c^0(x) = \frac{n_{nt} + n_c}{n_c} F_{nt \cup c}^0(x) - \frac{n_{nt}}{n_c} F_{nt}^0(x), \tag{1b}
\]

where, for example, F_{at∪c}^1 is the cdf of y_i(1) among the union of always-takers and compliers, and F_at^1, F_{nt∪c}^0, and F_nt^0 are defined similarly. Under the assumptions, the observed data are directly informative about the quantities on the right-hand side of equations (1). For example, the units in the observed cell {Z_i = 1, D_i = 1} are a random sample from at ∪ c, and thus the number of units in this cell is informative about n_at + n_c and the distribution of observed outcomes in this cell is informative about F_{at∪c}^1. Likewise, the cell {Z_i = 0, D_i = 1} is a random sample from at, the cell {Z_i = 0, D_i = 0} is a random sample from nt ∪ c, and the cell {Z_i = 1, D_i = 0} is a random sample from nt, and the observed distributions of outcomes in those cells are informative about F_at^1, F_{nt∪c}^0, and F_nt^0. This observation drives the identification of compliers' distributions in Imbens and Rubin's (1997) large-sample setting, and it also underlies the exact, finite-sample results here. In a nutshell, the exact joint distribution of order statistics or, inversely, cumulative frequencies in the observed (Z_i, D_i) cells can be written in terms of the compliers' distribution of potential outcomes, and thus can be used for maximum likelihood estimation and exact finite-sample inference. The following theorem establishes this formally for y_i(1).

Theorem 1 Suppose assumptions A1 through A5 hold. Then the joint distribution of the observed frequencies P ≡ Z′(1 − D), Q ≡ (1 − Z)′D, I(x) ≡ Σ_i (1 − Z_i) D_i 1(Y_i ≤ x), and J(x) ≡ Σ_i Z_i D_i 1(Y_i ≤ x), conditional on Σ_i Z_i = m, is given by

\[
\Pr\{P = p, Q = q, I(x) = i, J(x) = j\} =
\frac{\binom{n - n_{at} - n_c}{p}
\binom{k_{at}^1(x)}{k_{at}^1(x) - i}
\binom{n_{at} - k_{at}^1(x)}{n_{at} - k_{at}^1(x) - (q - i)}
\binom{k_c^1(x)}{j - (k_{at}^1(x) - i)}
\binom{n_c - k_c^1(x)}{m - p - j - (n_{at} - k_{at}^1(x) - (q - i))}}
{\binom{n}{m}},
\]

where k_at^1 ≡ n_at F_at^1 and k_c^1 ≡ n_c F_c^1 are the cumulative frequencies of y_i(1) among always-takers and compliers.
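To illustrate the form of this distribution, the sketch below (hypothetical Python code, not from the paper; the function name and argument names are mine) evaluates the probability mass function directly from binomial coefficients for given group sizes and cumulative frequencies. The proof of the theorem is given next.

```python
from math import comb

def pmf_PQIJ(p, q, i, j, n, m, n_at, n_c, k1_at, k1_c):
    """Multivariate hypergeometric pmf of (P, Q, I(x), J(x)) from Theorem 1 (sketch).

    k1_at and k1_c are the cumulative frequencies k_at^1(x) and k_c^1(x) at the
    chosen threshold x; the n_nt = n - n_at - n_c never-takers are implied.
    Returns 0 for configurations outside the support.
    """
    cells = [
        (n - n_at - n_c, p),                                 # never-takers drawn
        (k1_at, k1_at - i),                                  # always-takers with y(1) <= x drawn
        (n_at - k1_at, n_at - k1_at - (q - i)),              # always-takers with y(1) > x drawn
        (k1_c, j - (k1_at - i)),                             # compliers with y(1) <= x drawn
        (n_c - k1_c, m - p - j - (n_at - k1_at - (q - i))),  # compliers with y(1) > x drawn
    ]
    if any(k < 0 or k > total for total, k in cells):
        return 0.0
    num = 1
    for total, k in cells:
        num *= comb(total, k)
    return num / comb(n, m)

# Sanity check: the pmf sums to one over the full support in a small example.
n, m, n_at, n_c, k1_at, k1_c = 10, 5, 2, 5, 1, 3
total = sum(pmf_PQIJ(p, q, i, j, n, m, n_at, n_c, k1_at, k1_c)
            for p in range(m + 1) for q in range(n - m + 1)
            for i in range(q + 1) for j in range(m + 1))
print(round(total, 10))   # 1.0
```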

Proof. Consider an urn containing n total balls of five distinct types: never-takers, of which there are n − n_at − n_c by assumption A4; always-takers with y_i(1) ≤ x, of which there are k_at^1(x); always-takers with y_i(1) > x, of which there are n_at − k_at^1(x); compliers with y_i(1) ≤ x, of which there are k_c^1(x); and compliers with y_i(1) > x, of which there are n_c − k_c^1(x), noting that by assumptions A2 and A3, the potential outcomes {y_i(1)} are well defined. Assigning m units to Z_i = 1 is equivalent to drawing m balls at random out of the urn without replacement, and by assumption A1 each of the (n choose m) ways of doing so has equal probability. Note that by assumption A4, P is the number of never-takers drawn, Q is the total number of always-takers (with y_i(1) above or below x) not drawn, I(x) is the number of always-takers with y_i(1) ≤ x not drawn, and J(x) is the combined number of always-takers and compliers with y_i(1) ≤ x drawn. The event {P = p, Q = q, I(x) = i, J(x) = j} is therefore equivalent to drawing p never-takers, k_at^1(x) − i always-takers with y_i(1) ≤ x, n_at − k_at^1(x) − (q − i) always-takers with y_i(1) > x, j − (k_at^1(x) − i) compliers with y_i(1) ≤ x, and m − p − j − (n_at − k_at^1(x) − (q − i)) compliers with y_i(1) > x. The probability of this event is therefore given by the multivariate hypergeometric probability function in the result.

The theorem provides the basis for estimation and inference on the compliers' quantiles of y_i(1). The analogous result for y_i(0) can be obtained by substituting 1 − Z for Z, 1 − D for D, and n − m for m, and exchanging always-takers and never-takers. The estimation and inference procedures below will be described for y_i(1), but the corresponding procedures for y_i(0) are analogous.

3.1 Maximum Likelihood Estimation

The τ-quantile of y_i(1) among compliers, Q_c^1(τ), is by definition the ⌈n_c τ⌉-th order statistic of {y_i(1)}_{i ∈ c} and therefore satisfies k_c^1(Q_c^1(τ)) = ⌈n_c τ⌉, where k_c^1(·) ≡ n_c F_c^1(·) is the cumulative frequency for compliers' potential outcomes.

The maximum likelihood estimate for Q_c^1(τ) is therefore the smallest value x̂ that solves the following maximization problem:

\[
\max_{x, n_c, n_{at}, k_{at}^1}
\left\{
\binom{n - n_{at} - n_c}{P}
\binom{k_{at}^1}{k_{at}^1 - I(x)}
\binom{n_{at} - k_{at}^1}{n_{at} - k_{at}^1 - (Q - I(x))}
\binom{\lceil n_c \tau \rceil}{J(x) - (k_{at}^1 - I(x))}
\binom{n_c - \lceil n_c \tau \rceil}{m - P - J(x) - (n_{at} - k_{at}^1 - (Q - I(x)))}
\right\}.
\]

The appendix gives an algorithm for computing this maximum, which is aided by the fact that the objective function changes only at observed outcome values in the {Z_i = 1} cell, and that the nuisance parameters (n_c, n_at, k_at^1) are integers bounded above and below by functions of the observed data.

3.2 Exact Inference

The theorem provides the basis for exact hypothesis tests and confidence intervals for the quantiles of compliers' potential outcomes. Under the joint hypothesis H_0: Q_c^1(τ) = x_0, n_c = n_c0, n_at = n_at0, k_at^1(x_0) = k_at0^1, the theorem implies that the exact probability mass function of the joint test statistic (P, Q, I(x_0), J(x_0)) is

\[
\Pr\{P = p, Q = q, I(x_0) = i, J(x_0) = j\} =
\frac{\binom{n - n_{at0} - n_{c0}}{p}
\binom{k_{at0}^1}{k_{at0}^1 - i}
\binom{n_{at0} - k_{at0}^1}{n_{at0} - k_{at0}^1 - (q - i)}
\binom{\lceil n_{c0} \tau \rceil}{j - (k_{at0}^1 - i)}
\binom{n_{c0} - \lceil n_{c0} \tau \rceil}{m - p - j - (n_{at0} - k_{at0}^1 - (q - i))}}
{\binom{n}{m}}.
\]

A test of significance level α can be performed by comparing the observed values of (P, Q, I(x_0), J(x_0)) to critical values chosen so that the rejection region has probability at most α under this distribution. The appendix gives an algorithm for choosing critical values from this distribution. The joint hypothesis test will have a known size no matter how small the sample, and so in this sense is exact. Tests about Q_c^1(τ) alone, however, involve projecting over the nuisance parameters (n_c, n_at, k_at^1), and may be conservative.

A test of the simple hypothesis H_simple: Q_c^1(τ) = x_0 may be performed by comparing the observed values of (P, Q, I(x_0), J(x_0)) to critical values chosen so that the maximum probability of the rejection region is at most α, where the maximum is taken over joint hypotheses of the form H_joint: Q_c^1(τ) = x_0, n_c = n_c0, n_at = n_at0, k_at^1(x_0) = k_at0^1 for all feasible values of (n_c0, n_at0, k_at0^1). This test's size is guaranteed to be at most α, but it may be smaller.

Confidence sets may be formed by inverting hypothesis tests. That is, a confidence set with coverage of at least 1 − α can be constructed as all values that are not rejected at significance level α. Joint confidence sets for (Q_c^1(τ), n_c, n_at, k_at^1(x_0)) will have known coverage, but confidence sets for Q_c^1(τ) alone may have coverage higher than 1 − α because of projecting over the nuisance parameters.

4 Simulations

This section illustrates the performance of the randomization inference procedure and compares it to conventional large-sample methods. The setting is a simulated experiment with n = 30 units, out of whom m = 15 are assigned to treatment. The compliance rate in this simulated setup is about 75 percent, with n_c = 23 compliers, n_nt = 7 never-takers, and no always-takers. The setting corresponds to a modest-sized experiment where no unit would be able to obtain treatment if assigned to the control, but a few would fail to take the treatment even if assigned to the treatment arm. Many clinical trials and field experiments exhibit this kind of one-sided noncompliance, including the example in the next section. In the simulated experiment, the treatment has no effect, so y_i(0) = y_i(1) for all i = 1, ..., n. Outcomes were drawn independently from a standard normal distribution for compliers and from N(−3, 1) for never-takers. The simulation consists of 5,000 draws of the assignment vector Z, an n-vector with m ones and n − m zeros. The nominal coverage rate for group size inference is chosen to be at least .9375, and the nominal coverage rate for inference on quantiles given group sizes is chosen to be at least .75/.9375, for an overall nominal coverage rate of at least .75.
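A minimal sketch of this simulation design (hypothetical Python code; the paper does not provide an implementation, and all names are illustrative) draws the fixed unit characteristics once and then redraws only the assignment vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_c, n_nt = 30, 15, 23, 7

# Fixed unit characteristics, drawn once and held fixed across assignment draws.
is_complier = np.arange(n) < n_c                 # first n_c units comply; the rest never take
y0 = np.where(is_complier, rng.normal(0, 1, n), rng.normal(-3, 1, n))
y1 = y0.copy()                                   # the treatment has no effect
d0 = np.zeros(n, dtype=int)                      # one-sided noncompliance: no always-takers
d1 = is_complier.astype(int)

def draw_assignment():
    """One draw of the experiment: only the assignment vector Z is random."""
    Z = np.zeros(n, dtype=int)
    Z[rng.choice(n, size=m, replace=False)] = 1
    D = np.where(Z == 1, d1, d0)                 # observed treatment status
    Y = np.where(D == 1, y1, y0)                 # observed outcome
    return Z, D, Y

draws = [draw_assignment() for _ in range(5000)]
```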

In this example there is strong selection out of treatment: units who fail to take the treatment have lower potential outcomes on average than those who take the treatment. Thus simple comparisons by treatment status do not have a causal interpretation, underscoring the importance of using the randomized assignment Z as the basis for inference. The simulation setup corresponds roughly to Imbens and Rosenbaum's (2005) parametric setup with significant correlation between potential outcomes and potential treatment status, which they use to illustrate a situation in which conventional large-sample based methods perform poorly, but randomization-based methods work well. The simulation results below show that the same results hold in the nonparametric setting considered here.

Figures 1 and 2 illustrate the output of the randomization inference procedure for one draw from the simulation model. The first-step group size inference is shown in Figure 1, which plots a confidence region (nominal coverage at least 93.75 percent) for the fraction of compliers, always-takers, and never-takers. (The confidence region is shown as a region of a standard simplex where the indicated group has a fraction ranging from one on the nearest vertex to zero on the opposite side.) The group size confidence region incorporates the information that no controls are able to obtain the treatment, which is reflected in the confidence region containing only points with n_at = 0, centered over the fraction of compliers (75%) and never-takers (25%). The final result of inference on the quantiles of compliers' potential outcomes is shown in Figure 2, which plots pointwise confidence intervals with coverage at least 75 percent for the quantiles of y(0) and y(1) among compliers. The confidence intervals for the quantiles of y(1) are much tighter than for y(0) since, with no always-takers, every unit observed in treatment is a complier.

For each simulated draw, nominal 75-percent confidence intervals are constructed using (a) the randomization inference procedure described in this paper; (b) Frölich and Melly's (2013) instrumental variables quantile treatment effects estimator (IVQTE); and (c) Wald estimators for means:

\[
\hat\delta = \frac{\hat E[Y_i \mid Z_i = 1] - \hat E[Y_i \mid Z_i = 0]}{\hat E[D_i \mid Z_i = 1] - \hat E[D_i \mid Z_i = 0]},
\]
\[
\hat y(0) = \frac{\hat E[Y_i (1 - D_i) \mid Z_i = 1] - \hat E[Y_i (1 - D_i) \mid Z_i = 0]}{\hat E[1 - D_i \mid Z_i = 1] - \hat E[1 - D_i \mid Z_i = 0]},
\qquad
\hat y(1) = \frac{\hat E[Y_i D_i \mid Z_i = 1] - \hat E[Y_i D_i \mid Z_i = 0]}{\hat E[D_i \mid Z_i = 1] - \hat E[D_i \mid Z_i = 0]},
\]

where δ̂ is an estimator for the average treatment effect among compliers, ŷ(0) and ŷ(1) are estimators for compliers' average potential outcomes, and Ê[· | Z_i = z] denotes the observed mean in the Z_i = z cell. For randomization inference and IVQTE, confidence intervals are constructed for the .05-, .5-, and .95-quantiles of y(0) and y(1), and the .05-, .5-, and .95-quantile treatment effects (defined as the difference between corresponding quantiles of the potential outcomes). For the Wald estimators, inference is done only on the means of y(0), y(1), and y(1) − y(0). IVQTE confidence intervals are based on 100 bootstrap iterations, and Wald confidence intervals are based on the delta method and normal approximations.
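A minimal sketch of these Wald-type estimators (hypothetical Python code; the function and variable names are mine) computes each ratio directly from the observed (Z, D, Y):

```python
import numpy as np

def wald_estimators(Z: np.ndarray, D: np.ndarray, Y: np.ndarray):
    """Wald-type estimators of the complier average treatment effect and of
    compliers' average potential outcomes, computed from cell means by Z."""
    def mean_by_z(v, z):
        return v[Z == z].mean()

    # delta_hat: difference in outcome means scaled by the first stage.
    delta_hat = ((mean_by_z(Y, 1) - mean_by_z(Y, 0))
                 / (mean_by_z(D, 1) - mean_by_z(D, 0)))

    # y0_hat and y1_hat: ratio estimators for compliers' mean potential outcomes.
    y0_hat = ((mean_by_z(Y * (1 - D), 1) - mean_by_z(Y * (1 - D), 0))
              / (mean_by_z(1 - D, 1) - mean_by_z(1 - D, 0)))
    y1_hat = ((mean_by_z(Y * D, 1) - mean_by_z(Y * D, 0))
              / (mean_by_z(D, 1) - mean_by_z(D, 0)))
    return delta_hat, y0_hat, y1_hat
```

Note that δ̂ equals ŷ(1) − ŷ(0) numerically, since Ê[Y_i | Z_i] = Ê[Y_i D_i | Z_i] + Ê[Y_i (1 − D_i) | Z_i] and Ê[1 − D_i | Z_i = 1] − Ê[1 − D_i | Z_i = 0] is the negative of the first-stage difference.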

The simulation shows that randomization inference, though conservative, maintains at least the specified coverage for all quantiles, while the other methods suffer from below-nominal coverage, bias, or both. Table 1 reports the results from the simulations. The first three columns of Panel A show randomization inference rejection rates ranging from zero to about 4 percent, with minuscule rejection rates for the quantile treatment effects (which are all zero). Panel B shows that IVQTE, on the other hand, rejects the truth at rates much higher than the nominal size for the tail quantiles, with rejection rates ranging from around 47 percent to 70 percent for the .05- and .95-quantiles. IVQTE rejection rates for the median, however, are conservative relative to the nominal size, ranging from 2 to 11 percent. Columns 4 through 6 of Panel B show a downward bias in all of the estimates of the quantiles of potential outcomes, a bias that is especially large for the lower-tail quantiles. The estimates of the quantile treatment effects are downward biased for the lower tail, and slightly upward biased for the median treatment effect. The bias of the .95-quantile treatment effect is negligible, as the biases on the marginal quantiles cancel. Panel C shows that the Wald estimator's rejection rates are mostly within the nominal rate (though slightly exceeding nominal for the treatment effect). However, the estimates of the means of potential outcomes are substantially downward biased.

While the coverage rates of randomization-based confidence intervals are always at least the specified rate, the power of randomization inference depends on the number of units and the degree of compliance. To illustrate how the performance of the inference procedure depends on the number of units and the compliance rate, I apply the procedure to three simulated experiments using a setup similar to the one above, but with a positive, constant treatment effect equal to one and with two-sided non-compliance. The first experiment represents a favorable design with a relatively large n = 160 and a high compliance rate of about 95 percent. The second experiment has a lower compliance rate of about 50 percent, but the same n. The third experiment has moderate compliance (75 percent) and a smaller n = 50.

The intermediate inference on group sizes produces confidence regions that are centered on the truth and tighter for larger n. Figure 3 plots confidence regions for group sizes in each of the three setups described in the previous paragraph. The large-n, high-compliance example in panel (a) yields a tight confidence region concentrated around very high compliance rates. The group size confidence regions in the large-n, low-compliance example (panel b) and the small-n, moderate-compliance example (panel c) are more spread out, but centered on the true compliance rates of 50 and 75 percent.

The final confidence intervals for potential outcome quantiles among compliers are tight enough to detect a treatment effect in the large-n, high-compliance case, but become substantially wider when the compliance rate is lower or n is smaller. Figure 4 plots the confidence intervals for each of the three designs described above.

Panel (a) shows that for n = 160 and a compliance rate of 95 percent, the confidence intervals for y(0) and y(1) are non-overlapping everywhere but at the lower tail. A test of the null hypothesis of no quantile treatment effects based on these confidence intervals would therefore reject for almost all quantiles (although a Bonferroni-type correction to the significance level would have to be made). However, the confidence intervals for y(0) and y(1) in panels (b) and (c) overlap for all the quantiles plotted.

5 Example: College student services and incentives

This section applies the procedure to data from the STAR demonstration project, a randomized trial analyzed by Angrist, Lang, and Oreopoulos (2009). The STAR demonstration was carried out at a satellite campus of a large Canadian university, and randomly assigned incoming first-year undergraduates to either control or one of three treatment arms. The first treatment arm gave students access to a peer-advising and study-group service known as the Student Support Program (SSP). The second treatment arm gave students the chance to participate in a merit-based scholarship program known as the Student Fellowship Program (SFP), which gave awards ranging from $1,000 to $5,000 for meeting GPA targets. The third treatment arm offered both SSP services and the opportunity to participate in SFP. In this example I focus on this third treatment arm combining SSP and SFP. Students who were assigned to treatment groups were asked to give consent by signing up. Students who failed to sign up and students in the control group were ineligible for the services and programs that were part of the study. While most students who were assigned to treatment signed up, the take-up rate was less than 100 percent. The outcomes measured include credits earned, academic standing, and GPA as of the first and second years. In this example I focus on second-year GPA. See Angrist, Lang, and Oreopoulos (2009) for more details on the study design and treatments.

The setting corresponds closely to the simulations in the previous section, with one-sided compliance where there are never-takers but no always-takers.

In this case the assignment vector Z is a vector of indicators for being assigned to the SSP+SFP treatment arm (only students assigned to control or the third treatment arm are included in the analysis). The treatment status vector D contains indicators for whether students actually signed up for the treatment if they were assigned. The outcome vector Y contains students' second-year GPA. The analysis in this example focuses on a subgroup for whom the treatment might be expected to have a particularly large effect: women who are first-generation university enrollees (see, e.g., Jacob, 2002, for a study of the higher-education gender gap).

Table 2 contains summary statistics on the n = 203 women who are first-generation university enrollees and were assigned to either control or the SSP+SFP treatment arm. The first row of the table shows that out of the 29 subjects assigned to treatment (Z = 1), 23 actually signed up to participate in the programs and 6 did not. The number of controls in this group is 174. The remaining rows in the table show that average second-year GPA for all subjects in this group was 1.93 with a standard deviation of .86, but was 2.43 (st. dev. = .73) among subjects assigned to the treatment and only 1.85 (st. dev. = .86) for controls. The difference in the mean outcome for treatment and control suggests a positive treatment effect on average, and indeed the p-value (not reported) for a rank sum test of the null hypothesis of no treatment effect for this experiment is small.

Inference on group sizes in this experiment suggests a high compliance rate. Figure 5 plots the confidence region for the fraction of compliers, always-takers, and never-takers, showing that compliers make up between about 70 and 90 percent of the subjects, with the remainder composed of never-takers. The construction of the confidence region took into account that the experimental design ruled out always-takers.

Inference on the quantiles of potential GPAs is consistent with a positive effect that is roughly equal across the distribution for the subgroup of female first-generation university enrollees. Figure 6 plots pointwise confidence intervals for the .05- through .95-quantiles of compliers' potential second-year GPAs.

Over most of the range, the quantiles under treatment appear to be above the quantiles under no treatment, consistent with a positive effect on outcomes. A test based on comparing confidence intervals would distinguish between the two distributions through the middle and upper quantiles, but not at the tails, where the confidence intervals overlap. The figure is consistent with a model in which the treatment shifted the distribution of outcomes for this group of subjects, and thus could motivate a parametric analysis of a location-shift model, which would allow for more powerful inference.

6 Conclusion

Traditional large-sample inference methods are known to be problematic in experimental settings with imperfect compliance when the number of units is small, compliance rates are low (i.e., weak instruments), or inference on tail quantiles is of interest. The randomization-based inference procedure for the distribution or quantiles of potential outcomes described in this paper remains valid even for experiments with few units, low compliance rates, or tail quantiles, and requires no parametric model for the treatment effect. The method can be used as an alternative to large-sample or parametric model-based methods, or as a tool to motivate using those methods. While the method was developed in the context of a randomized controlled experiment, it should be useful for observational studies as well, since instrumental variables research designs with small samples or weak instruments are susceptible to the same challenges as randomized trials with imperfect compliance. If the instrument vector is Z, then after conditioning on ι′Z = m, the standard instrumental variables assumptions in Angrist, Imbens, and Rubin (1996) satisfy the assumptions here, and analysis proceeds exactly as outlined in this paper. Extending this method to more complicated research designs, including multi-valued treatments and assignments, is left for future work.

Appendix

Confidence intervals for the distribution function

Consider first inference on the cdf, specifically a confidence interval for F_c^1(x). A confidence interval consists of a random interval [A, B] such that the minimum coverage of [A, B] over the nuisance parameters (n_c, n_at, k_at^1(x)) is at least 1 − α. To compute the coverage of [A, B] for a given set of nuisance parameters, note that

\[
\{A \le F_c^1(x) \le B\} = \{n_c A \le k_c^1(x) \le n_c B\}.
\]

To express A and B in terms of observed statistics, define the random variable

\[
\hat F_c^1(x) \equiv \frac{(n - m)\, J(x) - m\, I(x)}{m(n - m) - m Q - (n - m) P},
\]

and define A ≡ F̂_c^1(x) − l and B ≡ F̂_c^1(x) + u, so that l is the fixed distance between this naive estimator and the lower bound of the confidence interval, and u is the fixed distance between the naive estimator and the upper bound.

In terms of k_c^1(x), the event of interest becomes

\[
\{n_c A \le k_c^1(x) \le n_c B\}
= \{n_c A \le k_c^1(x)\} \cap \{k_c^1(x) \le n_c B\}
= \left\{\frac{k_c^1(x)}{n_c} - u \le \hat F_c^1(x) \le \frac{k_c^1(x)}{n_c} + l\right\}.
\]

The coverage of [A, B] for a given combination of nuisance parameters can therefore be computed by cycling through the support of (P, Q, I(x), J(x)), computing F̂_c^1(x) for each element, checking whether it lies between k_c^1(x)/n_c − u and k_c^1(x)/n_c + l, and, if so, adding the corresponding probability from the distribution in Theorem 1. The resulting sum is the coverage for that combination of nuisance parameters. The bounds l and u can then be incremented gradually until the minimum coverage over the nuisance parameters reaches 1 − α, which yields a confidence interval for F_c^1(x). A confidence interval for the τ-quantile consists of all values x such that τ lies in the confidence interval for F_c^1(x). This inversion has correct coverage because the null hypothesis that the τ-quantile equals x is equivalent to F_c^1(x) = τ, and the probability of rejecting that hypothesis is controlled by construction.
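A sketch of this coverage calculation (hypothetical Python code; for convenience it enumerates the five draw counts of the urn types directly rather than (P, Q, I(x), J(x)), and the function name is mine) is:

```python
from math import comb
from itertools import product

def coverage(n, m, n_at, n_c, k1_at, k1_c, l, u):
    """Coverage of [Fhat - l, Fhat + u] for F_c^1(x) at one value of the nuisance
    parameters (sketch of the appendix calculation under stated assumptions)."""
    n_nt = n - n_at - n_c
    denom_all = comb(n, m)
    cov = 0.0
    # a1 never-takers drawn, a2 always-takers with y(1)<=x drawn, a3 always-takers
    # with y(1)>x drawn, a4 compliers with y(1)<=x drawn; a5 compliers with y(1)>x drawn.
    for a1, a2, a3, a4 in product(range(n_nt + 1), range(k1_at + 1),
                                  range(n_at - k1_at + 1), range(k1_c + 1)):
        a5 = m - a1 - a2 - a3 - a4
        if a5 < 0 or a5 > n_c - k1_c:
            continue
        prob = (comb(n_nt, a1) * comb(k1_at, a2) * comb(n_at - k1_at, a3)
                * comb(k1_c, a4) * comb(n_c - k1_c, a5)) / denom_all
        # Observed statistics implied by this draw.
        P, Q = a1, n_at - a2 - a3
        I, J = k1_at - a2, a2 + a4
        denom = m * (n - m) - m * Q - (n - m) * P
        if denom <= 0:
            continue           # estimated complier share is zero; treat as non-coverage
        f_hat = ((n - m) * J - m * I) / denom
        if k1_c / n_c - u <= f_hat <= k1_c / n_c + l:
            cov += prob
    return cov
```

In the procedure described above, l and u would then be increased until the minimum of this coverage over the feasible nuisance parameter values reaches 1 − α.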

Computing maximum likelihood estimates

The maximum likelihood estimate for Q_c^1(τ) is the smallest value x̂ that solves the maximization problem

\[
\max_{x, n_c, n_{at}, k_{at}^1}
\left\{
\binom{n - n_{at} - n_c}{P}
\binom{k_{at}^1}{k_{at}^1 - I(x)}
\binom{n_{at} - k_{at}^1}{n_{at} - k_{at}^1 - (Q - I(x))}
\binom{\lceil n_c \tau \rceil}{J(x) - (k_{at}^1 - I(x))}
\binom{n_c - \lceil n_c \tau \rceil}{m - P - J(x) - (n_{at} - k_{at}^1 - (Q - I(x)))}
\right\}
\equiv \max_{x, n_c, n_{at}, k_{at}^1} L\left(x, n_c, n_{at}, k_{at}^1 \mid P, Q, I(x), J(x)\right).
\]

Computation of the maximum is aided by two features of the objective function. First, the objective function changes only at observed outcome values in the {Z_i = 1} cell, so the estimation algorithm need only search over x ∈ {Y_i}_{i : Z_i = 1}. Second, the other parameters of the objective function are integer valued and bounded as follows:

\[
n_c \in S_1 \equiv \{1, \ldots, n - Q - P\},
\]
\[
n_{at} \in S_2(n_c) \equiv \{\max(m - P + Q - n_c, \; Q), \ldots, \min(n - n_c - P, \; Q + m - P)\},
\]
\[
k_{at}^1 \in S_3(x, n_c, n_{at}) \equiv \left\{\max\left(I(x), \; J(x) - (\lceil n_c \tau \rceil - I(x))\right), \ldots, \min\left(n_{at} - (Q - I(x)), \; I(x) + J(x), \; n_c + n_{at} - \lceil n_c \tau \rceil - (Q - I(x)) - (m - P - J(x))\right)\right\}.
\]

Finding the solution is straightforward via a sequential optimization algorithm that searches over values x ∈ {Y_i}_{i : Z_i = 1}, and within each x searches over an integer-valued restricted grid of values for (n_c, n_at, k_at^1):

\[
\max_{x \in \{Y_i\}_{i : Z_i = 1}} \left\{ \max_{(n_c, n_{at}, k_{at}^1) \in G(x)} L\left(x, n_c, n_{at}, k_{at}^1 \mid P, Q, I(x), J(x)\right) \right\},
\]

where the grid of values for (n_c, n_at, k_at^1) is

\[
G(x) = \bigcup_{n_c \in S_1} \; \bigcup_{n_{at} \in S_2(n_c)} \; \bigcup_{k_{at}^1 \in S_3(x, n_c, n_{at})} \left\{ (n_c, n_{at}, k_{at}^1) \right\}.
\]
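The algorithm can be sketched as follows (hypothetical Python code; the function name is mine, and for clarity it loops over loose integer ranges and discards infeasible combinations rather than coding the sharper bounds S_1 through S_3 explicitly, so it is suitable only for small n):

```python
from math import comb, ceil
import numpy as np

def ml_quantile(Z, D, Y, tau):
    """Maximum likelihood estimate of Q_c^1(tau): a brute-force sketch of the
    appendix algorithm, iterating x in increasing order so ties resolve to the
    smallest maximizing value."""
    Z, D, Y = map(np.asarray, (Z, D, Y))
    n, m = len(Z), int(Z.sum())
    P = int(((Z == 1) & (D == 0)).sum())          # never-takers observed in the Z=1 cell
    Q = int(((Z == 0) & (D == 1)).sum())          # always-takers observed in the Z=0 cell

    def bin_or_none(a, b):
        return comb(a, b) if 0 <= b <= a else None

    best_like, best_x = -1.0, None
    for x in sorted(Y[Z == 1]):                    # objective changes only at these points
        I = int(((Z == 0) & (D == 1) & (Y <= x)).sum())
        J = int(((Z == 1) & (D == 1) & (Y <= x)).sum())
        for n_c in range(1, n + 1):
            k1_c = ceil(n_c * tau)
            for n_at in range(0, n - n_c + 1):
                for k1_at in range(0, n_at + 1):
                    terms = [bin_or_none(n - n_at - n_c, P),
                             bin_or_none(k1_at, k1_at - I),
                             bin_or_none(n_at - k1_at, n_at - k1_at - (Q - I)),
                             bin_or_none(k1_c, J - (k1_at - I)),
                             bin_or_none(n_c - k1_c,
                                         m - P - J - (n_at - k1_at - (Q - I)))]
                    if any(t is None for t in terms):
                        continue                   # infeasible configuration
                    like = float(np.prod(terms, dtype=float))
                    if like > best_like:
                        best_like, best_x = like, x
    return best_x
```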

Choosing critical values for hypothesis tests

Critical values for a test of significance level α from the distribution of (P, Q, I(x), J(x)) can be chosen by finding a set of upper and lower bounds for each element, (p_l, p_u, q_l, q_u, i_l, i_u, j_l, j_u), such that

\[
\sum_{p = p_l}^{p_u} \sum_{q = q_l}^{q_u} \sum_{i = i_l}^{i_u} \sum_{j = j_l}^{j_u} \Pr\{P = p, Q = q, I(x) = i, J(x) = j\} \ge 1 - \alpha.
\]

Suitable bounds can be found by setting them initially to the extremes of the support under the hypothesis being tested, and then in turn incrementing each lower bound and decrementing each upper bound, cycling through the bounds, and stopping as soon as an additional increment or decrement would reduce the probability mass within the bounds below 1 − α. The support of (P, Q, I(x), J(x)) is given by the following:

\[
P \in \{\max(0, n_{nt} + m - n), \ldots, \min(m, n_{nt})\},
\]
\[
Q \in \{\max(0, n_{at} - m), \ldots, \min(n - m, n_{at})\},
\]
\[
I(x) \in \left\{\max\left(0, Q - (n_{at} - k_{at}^1(x))\right), \ldots, \min\left(Q, k_{at}^1(x)\right)\right\},
\]
\[
J(x) \in \left\{j_{\min}(P, Q), \ldots, \min\left(k_c^1(x) + k_{at}^1(x), \; m - P - (n_{at} - k_{at}^1(x)) + Q\right)\right\},
\]
where
\[
j_{\min}(p, q) = \max\left\{ k_{at}^1(x) - \min\left(k_{at}^1(x), q\right), \; m - p - n_{at} + k_{at}^1(x) + q - \min\left(k_{at}^1(x), q\right) - \left(n_c - k_c^1(x)\right) \right\}.
\]

A test based on critical values chosen this way rejects if any element of the observed (P, Q, I(x), J(x)) lies strictly outside of the bounds.

References

Angrist, Joshua, Daniel Lang, and Philip Oreopoulos (2009): "Incentives and Services for College Achievement: Evidence from a Randomized Trial," American Economic Journal: Applied Economics, 1.

Angrist, Joshua D. (1990): "Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from Social Security Administrative Records," American Economic Review, 80.

Angrist, Joshua D., Guido Imbens, and Donald B. Rubin (1996): "Identification of Causal Effects Using Instrumental Variables," Journal of the American Statistical Association, 91.

Ashenfelter, Orley A. (1978): "Estimating the Effect of Training Programs on Earnings," Review of Economics and Statistics, 60.

Bound, John, David Jaeger, and Regina Baker (1995): "Problems with Instrumental Variables Estimation when the Correlation between the Instruments and the Endogenous Variables is Weak," Journal of the American Statistical Association, 90.

Chernozhukov, Victor (2005): "Extremal Quantile Regression," The Annals of Statistics, 33.

Chernozhukov, Victor, and Iván Fernández-Val (2011): "Inference for extremal conditional quantile models, with an application to market and birthweight risks," The Review of Economic Studies, 78.

Chernozhukov, Victor, and Christian Hansen (2005): "An IV Model of Quantile Treatment Effects," Econometrica, 73.

Chernozhukov, Victor, Christian Hansen, and Michael Jansson (2009): "Finite sample inference for quantile regression models," Journal of Econometrics, 152.

Frölich, Markus, and Blaise Melly (2013): "Unconditional Quantile Treatment Effects Under Endogeneity," Journal of Business & Economic Statistics, 31.

Greevy, Robert, Jeffrey H. Silber, Avital Cnaan, and Paul R. Rosenbaum (2004): "Randomization Inference with Imperfect Compliance in the ACE-Inhibitor after Anthracycline Randomized Trial," Journal of the American Statistical Association, 99.

Hahn, Jinyong, Petra Todd, and Wilbert van der Klaauw (2001): "Identification and Estimation of Treatment Effects with a Regression-Discontinuity Design," Econometrica, 69.

Huber, Martin, and Giovanni Mellace (2011): "Testing instrument validity for LATE identification based on inequality moment constraints," Economics Working Paper Series 1143, University of St. Gallen, School of Economics and Political Science.

Imbens, Guido W., and Joshua D. Angrist (1994): "Identification and Estimation of Local Average Treatment Effects," Econometrica, 62.

Imbens, Guido W., and Paul R. Rosenbaum (2005): "Robust, Accurate Confidence Intervals with a Weak Instrument: Quarter of Birth and Education," Journal of the Royal Statistical Society, Series A (Statistics in Society), 168.

Imbens, Guido W., and Donald B. Rubin (1997): "Estimating Outcome Distributions for Compliers in Instrumental Variables Models," The Review of Economic Studies, 64.

Jacob, Brian A. (2002): "Where the boys aren't: non-cognitive skills, returns to school and the gender gap in higher education," Economics of Education Review, 21.

Koenker, Roger (2005): Quantile Regression. Cambridge University Press.

Koenker, Roger, and Gilbert Bassett, Jr. (1978): "Regression Quantiles," Econometrica, 46.

Rosenbaum, Paul R. (2002): Observational Studies, 2nd edn. Springer-Verlag, New York.

Figure 1: Confidence region for the fraction of compliers, always-takers, and never-takers, n = 30.

Figure 2: 75 percent confidence intervals for the quantiles of compliers' potential outcomes, n = 30.

Table 1: Simulated rejection rates and mean bias. Columns report rejection rates and mean bias for the .05-quantile, median/mean, and .95-quantile; rows report y(0), y(1), and y(1) − y(0) under (A) randomization inference, (B) IV quantile treatment effects, and (C) Wald estimators.

Notes: Rejection rates and mean estimates from 5,000 simulated experiments with n = 50 using the inference methods in each panel heading. The nominal size of the tests is .25. The true treatment effect is zero for all quantiles and the mean.

Table 2: Summary statistics by assignment and treatment status. Columns: Overall; Treatment (All, Signed up, Not signed up); Control. Rows: Count, E[Y], std(Y).

Notes: Counts, means, and standard deviations of 2nd year GPA for the subgroup of women who are first-generation university enrollees (neither parent attended a university).

Figure 3: Confidence regions for the fraction of compliers, always-takers, and never-takers: (a) n = 160, Pr(C) = .95; (b) n = 160, Pr(C) = .5; (c) n = 50, Pr(C) = .75.

Figure 4: 75 percent confidence intervals for the quantiles of y(0) and y(1) among compliers: (a) n = 160, Pr(C) = .95; (b) n = 160, Pr(C) = .5; (c) n = 50, Pr(C) = .75.

Figure 5: Confidence region for the fraction of always-takers, never-takers, and compliers among female first-generation university enrollees in the STAR demonstration experiment (n = 203).

Figure 6: 75 percent confidence intervals for the quantiles of compliers' potential second-year GPAs among female first-generation university enrollees in the STAR demonstration experiment.


More information

Jinyong Hahn. Department of Economics Tel: (310) Bunche Hall Fax: (310) Professional Positions

Jinyong Hahn. Department of Economics Tel: (310) Bunche Hall Fax: (310) Professional Positions Jinyong Hahn Department of Economics Tel: (310) 825-2523 8283 Bunche Hall Fax: (310) 825-9528 Mail Stop: 147703 E-mail: hahn@econ.ucla.edu Los Angeles, CA 90095 Education Harvard University, Ph.D. Economics,

More information

Potential Outcomes and Causal Inference I

Potential Outcomes and Causal Inference I Potential Outcomes and Causal Inference I Jonathan Wand Polisci 350C Stanford University May 3, 2006 Example A: Get-out-the-Vote (GOTV) Question: Is it possible to increase the likelihood of an individuals

More information

Quantile methods. Class Notes Manuel Arellano December 1, Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be

Quantile methods. Class Notes Manuel Arellano December 1, Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be Quantile methods Class Notes Manuel Arellano December 1, 2009 1 Unconditional quantiles Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be Q τ (Y ) q τ F 1 (τ) =inf{r : F

More information

leebounds: Lee s (2009) treatment effects bounds for non-random sample selection for Stata

leebounds: Lee s (2009) treatment effects bounds for non-random sample selection for Stata leebounds: Lee s (2009) treatment effects bounds for non-random sample selection for Stata Harald Tauchmann (RWI & CINCH) Rheinisch-Westfälisches Institut für Wirtschaftsforschung (RWI) & CINCH Health

More information

Applied Microeconometrics. Maximilian Kasy

Applied Microeconometrics. Maximilian Kasy Applied Microeconometrics Maximilian Kasy 7) Distributional Effects, quantile regression (cf. Mostly Harmless Econometrics, chapter 7) Sir Francis Galton (Natural Inheritance, 1889): It is difficult to

More information

Independent and conditionally independent counterfactual distributions

Independent and conditionally independent counterfactual distributions Independent and conditionally independent counterfactual distributions Marcin Wolski European Investment Bank M.Wolski@eib.org Society for Nonlinear Dynamics and Econometrics Tokyo March 19, 2018 Views

More information

Testing for Rank Invariance or Similarity in Program Evaluation

Testing for Rank Invariance or Similarity in Program Evaluation Testing for Rank Invariance or Similarity in Program Evaluation Yingying Dong University of California, Irvine Shu Shen University of California, Davis First version, February 2015; this version, October

More information

The problem of causality in microeconometrics.

The problem of causality in microeconometrics. The problem of causality in microeconometrics. Andrea Ichino University of Bologna and Cepr June 11, 2007 Contents 1 The Problem of Causality 1 1.1 A formal framework to think about causality....................................

More information

IV Estimation WS 2014/15 SS Alexander Spermann. IV Estimation

IV Estimation WS 2014/15 SS Alexander Spermann. IV Estimation SS 2010 WS 2014/15 Alexander Spermann Evaluation With Non-Experimental Approaches Selection on Unobservables Natural Experiment (exogenous variation in a variable) DiD Example: Card/Krueger (1994) Minimum

More information

Principles Underlying Evaluation Estimators

Principles Underlying Evaluation Estimators The Principles Underlying Evaluation Estimators James J. University of Chicago Econ 350, Winter 2019 The Basic Principles Underlying the Identification of the Main Econometric Evaluation Estimators Two

More information

Testing for Rank Invariance or Similarity in Program Evaluation: The Effect of Training on Earnings Revisited

Testing for Rank Invariance or Similarity in Program Evaluation: The Effect of Training on Earnings Revisited Testing for Rank Invariance or Similarity in Program Evaluation: The Effect of Training on Earnings Revisited Yingying Dong University of California, Irvine Shu Shen University of California, Davis First

More information

Instrumental Variables

Instrumental Variables Instrumental Variables Kosuke Imai Harvard University STAT186/GOV2002 CAUSAL INFERENCE Fall 2018 Kosuke Imai (Harvard) Noncompliance in Experiments Stat186/Gov2002 Fall 2018 1 / 18 Instrumental Variables

More information

11. Bootstrap Methods

11. Bootstrap Methods 11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods

More information

Estimating the Dynamic Effects of a Job Training Program with M. Program with Multiple Alternatives

Estimating the Dynamic Effects of a Job Training Program with M. Program with Multiple Alternatives Estimating the Dynamic Effects of a Job Training Program with Multiple Alternatives Kai Liu 1, Antonio Dalla-Zuanna 2 1 University of Cambridge 2 Norwegian School of Economics June 19, 2018 Introduction

More information

Recitation Notes 5. Konrad Menzel. October 13, 2006

Recitation Notes 5. Konrad Menzel. October 13, 2006 ecitation otes 5 Konrad Menzel October 13, 2006 1 Instrumental Variables (continued) 11 Omitted Variables and the Wald Estimator Consider a Wald estimator for the Angrist (1991) approach to estimating

More information

Small-sample cluster-robust variance estimators for two-stage least squares models

Small-sample cluster-robust variance estimators for two-stage least squares models Small-sample cluster-robust variance estimators for two-stage least squares models ames E. Pustejovsky The University of Texas at Austin Context In randomized field trials of educational interventions,

More information

IsoLATEing: Identifying Heterogeneous Effects of Multiple Treatments

IsoLATEing: Identifying Heterogeneous Effects of Multiple Treatments IsoLATEing: Identifying Heterogeneous Effects of Multiple Treatments Peter Hull December 2014 PRELIMINARY: Please do not cite or distribute without permission. Please see www.mit.edu/~hull/research.html

More information

Causal Inference Lecture Notes: Causal Inference with Repeated Measures in Observational Studies

Causal Inference Lecture Notes: Causal Inference with Repeated Measures in Observational Studies Causal Inference Lecture Notes: Causal Inference with Repeated Measures in Observational Studies Kosuke Imai Department of Politics Princeton University November 13, 2013 So far, we have essentially assumed

More information

Discussion of Identifiability and Estimation of Causal Effects in Randomized. Trials with Noncompliance and Completely Non-ignorable Missing Data

Discussion of Identifiability and Estimation of Causal Effects in Randomized. Trials with Noncompliance and Completely Non-ignorable Missing Data Biometrics 000, 000 000 DOI: 000 000 0000 Discussion of Identifiability and Estimation of Causal Effects in Randomized Trials with Noncompliance and Completely Non-ignorable Missing Data Dylan S. Small

More information

Econometrics -- Final Exam (Sample)

Econometrics -- Final Exam (Sample) Econometrics -- Final Exam (Sample) 1) The sample regression line estimated by OLS A) has an intercept that is equal to zero. B) is the same as the population regression line. C) cannot have negative and

More information

1 Motivation for Instrumental Variable (IV) Regression

1 Motivation for Instrumental Variable (IV) Regression ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data

More information

Partial Identification of Average Treatment Effects in Program Evaluation: Theory and Applications

Partial Identification of Average Treatment Effects in Program Evaluation: Theory and Applications University of Miami Scholarly Repository Open Access Dissertations Electronic Theses and Dissertations 2013-07-11 Partial Identification of Average Treatment Effects in Program Evaluation: Theory and Applications

More information

Methods to Estimate Causal Effects Theory and Applications. Prof. Dr. Sascha O. Becker U Stirling, Ifo, CESifo and IZA

Methods to Estimate Causal Effects Theory and Applications. Prof. Dr. Sascha O. Becker U Stirling, Ifo, CESifo and IZA Methods to Estimate Causal Effects Theory and Applications Prof. Dr. Sascha O. Becker U Stirling, Ifo, CESifo and IZA last update: 21 August 2009 Preliminaries Address Prof. Dr. Sascha O. Becker Stirling

More information

Sharp IV bounds on average treatment effects on the treated and other populations under endogeneity and noncompliance

Sharp IV bounds on average treatment effects on the treated and other populations under endogeneity and noncompliance Sharp IV bounds on average treatment effects on the treated and other populations under endogeneity and noncompliance Martin Huber 1, Lukas Laffers 2, and Giovanni Mellace 3 1 University of Fribourg, Dept.

More information

Supplemental Appendix to "Alternative Assumptions to Identify LATE in Fuzzy Regression Discontinuity Designs"

Supplemental Appendix to Alternative Assumptions to Identify LATE in Fuzzy Regression Discontinuity Designs Supplemental Appendix to "Alternative Assumptions to Identify LATE in Fuzzy Regression Discontinuity Designs" Yingying Dong University of California Irvine February 2018 Abstract This document provides

More information

arxiv: v1 [stat.me] 8 Jun 2016

arxiv: v1 [stat.me] 8 Jun 2016 Principal Score Methods: Assumptions and Extensions Avi Feller UC Berkeley Fabrizia Mealli Università di Firenze Luke Miratrix Harvard GSE arxiv:1606.02682v1 [stat.me] 8 Jun 2016 June 9, 2016 Abstract

More information

Chapter 8. Quantile Regression and Quantile Treatment Effects

Chapter 8. Quantile Regression and Quantile Treatment Effects Chapter 8. Quantile Regression and Quantile Treatment Effects By Joan Llull Quantitative & Statistical Methods II Barcelona GSE. Winter 2018 I. Introduction A. Motivation As in most of the economics literature,

More information

Predicting the Treatment Status

Predicting the Treatment Status Predicting the Treatment Status Nikolay Doudchenko 1 Introduction Many studies in social sciences deal with treatment effect models. 1 Usually there is a treatment variable which determines whether a particular

More information

EMERGING MARKETS - Lecture 2: Methodology refresher

EMERGING MARKETS - Lecture 2: Methodology refresher EMERGING MARKETS - Lecture 2: Methodology refresher Maria Perrotta April 4, 2013 SITE http://www.hhs.se/site/pages/default.aspx My contact: maria.perrotta@hhs.se Aim of this class There are many different

More information

Statistical Models for Causal Analysis

Statistical Models for Causal Analysis Statistical Models for Causal Analysis Teppei Yamamoto Keio University Introduction to Causal Inference Spring 2016 Three Modes of Statistical Inference 1. Descriptive Inference: summarizing and exploring

More information

Research Note: A more powerful test statistic for reasoning about interference between units

Research Note: A more powerful test statistic for reasoning about interference between units Research Note: A more powerful test statistic for reasoning about interference between units Jake Bowers Mark Fredrickson Peter M. Aronow August 26, 2015 Abstract Bowers, Fredrickson and Panagopoulos (2012)

More information

Comments on: Panel Data Analysis Advantages and Challenges. Manuel Arellano CEMFI, Madrid November 2006

Comments on: Panel Data Analysis Advantages and Challenges. Manuel Arellano CEMFI, Madrid November 2006 Comments on: Panel Data Analysis Advantages and Challenges Manuel Arellano CEMFI, Madrid November 2006 This paper provides an impressive, yet compact and easily accessible review of the econometric literature

More information

Statistical Analysis of Randomized Experiments with Nonignorable Missing Binary Outcomes

Statistical Analysis of Randomized Experiments with Nonignorable Missing Binary Outcomes Statistical Analysis of Randomized Experiments with Nonignorable Missing Binary Outcomes Kosuke Imai Department of Politics Princeton University July 31 2007 Kosuke Imai (Princeton University) Nonignorable

More information

Online Appendix for Targeting Policies: Multiple Testing and Distributional Treatment Effects

Online Appendix for Targeting Policies: Multiple Testing and Distributional Treatment Effects Online Appendix for Targeting Policies: Multiple Testing and Distributional Treatment Effects Steven F Lehrer Queen s University, NYU Shanghai, and NBER R Vincent Pohl University of Georgia November 2016

More information

Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix)

Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix) Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix Flavio Cunha The University of Pennsylvania James Heckman The University

More information

WORKSHOP ON PRINCIPAL STRATIFICATION STANFORD UNIVERSITY, Luke W. Miratrix (Harvard University) Lindsay C. Page (University of Pittsburgh)

WORKSHOP ON PRINCIPAL STRATIFICATION STANFORD UNIVERSITY, Luke W. Miratrix (Harvard University) Lindsay C. Page (University of Pittsburgh) WORKSHOP ON PRINCIPAL STRATIFICATION STANFORD UNIVERSITY, 2016 Luke W. Miratrix (Harvard University) Lindsay C. Page (University of Pittsburgh) Our team! 2 Avi Feller (Berkeley) Jane Furey (Abt Associates)

More information

Quantitative Economics for the Evaluation of the European Policy

Quantitative Economics for the Evaluation of the European Policy Quantitative Economics for the Evaluation of the European Policy Dipartimento di Economia e Management Irene Brunetti Davide Fiaschi Angela Parenti 1 25th of September, 2017 1 ireneb@ec.unipi.it, davide.fiaschi@unipi.it,

More information

Econometrics of Policy Evaluation (Geneva summer school)

Econometrics of Policy Evaluation (Geneva summer school) Michael Lechner, Slide 1 Econometrics of Policy Evaluation (Geneva summer school) Michael Lechner Swiss Institute for Empirical Economic Research (SEW) University of St. Gallen Switzerland June 2016 Overview

More information

Randomization Inference with An Instrumental Variable: Two Examples and Some Theory

Randomization Inference with An Instrumental Variable: Two Examples and Some Theory Randomization Inference with An Instrumental Variable: Two Examples and Some Theory Paul R. Rosenbaum, Department of Statistics, Wharton School University of Pennsylvania, Philadelphia, PA 19104-6340 US

More information

ECONOMETRICS II (ECO 2401) Victor Aguirregabiria. Spring 2018 TOPIC 4: INTRODUCTION TO THE EVALUATION OF TREATMENT EFFECTS

ECONOMETRICS II (ECO 2401) Victor Aguirregabiria. Spring 2018 TOPIC 4: INTRODUCTION TO THE EVALUATION OF TREATMENT EFFECTS ECONOMETRICS II (ECO 2401) Victor Aguirregabiria Spring 2018 TOPIC 4: INTRODUCTION TO THE EVALUATION OF TREATMENT EFFECTS 1. Introduction and Notation 2. Randomized treatment 3. Conditional independence

More information

Empirical approaches in public economics

Empirical approaches in public economics Empirical approaches in public economics ECON4624 Empirical Public Economics Fall 2016 Gaute Torsvik Outline for today The canonical problem Basic concepts of causal inference Randomized experiments Non-experimental

More information

Simulation-based robust IV inference for lifetime data

Simulation-based robust IV inference for lifetime data Simulation-based robust IV inference for lifetime data Anand Acharya 1 Lynda Khalaf 1 Marcel Voia 1 Myra Yazbeck 2 David Wensley 3 1 Department of Economics Carleton University 2 Department of Economics

More information

University of Toronto Department of Economics. Testing Local Average Treatment Effect Assumptions

University of Toronto Department of Economics. Testing Local Average Treatment Effect Assumptions University of Toronto Department of Economics Working Paper 514 Testing Local Average Treatment Effect Assumptions By Ismael Mourifie and Yuanyuan Wan July 7, 214 TESTING LATE ASSUMPTIONS ISMAEL MOURIFIÉ

More information

Groupe de lecture. Instrumental Variables Estimates of the Effect of Subsidized Training on the Quantiles of Trainee Earnings. Abadie, Angrist, Imbens

Groupe de lecture. Instrumental Variables Estimates of the Effect of Subsidized Training on the Quantiles of Trainee Earnings. Abadie, Angrist, Imbens Groupe de lecture Instrumental Variables Estimates of the Effect of Subsidized Training on the Quantiles of Trainee Earnings Abadie, Angrist, Imbens Econometrica (2002) 02 décembre 2010 Objectives Using

More information

IV Quantile Regression for Group-level Treatments, with an Application to the Distributional Effects of Trade

IV Quantile Regression for Group-level Treatments, with an Application to the Distributional Effects of Trade IV Quantile Regression for Group-level Treatments, with an Application to the Distributional Effects of Trade Denis Chetverikov Brad Larsen Christopher Palmer UCLA, Stanford and NBER, UC Berkeley September

More information

Differences-in-differences, differences of quantiles and quantiles of differences

Differences-in-differences, differences of quantiles and quantiles of differences Differences-in-differences, differences of quantiles and quantiles of differences Franco Peracchi October 13, 2006 1 A MOTIVATING EXAMPLE 1 A motivating example Economists have largely debated on the causes

More information

Causal Inference with Big Data Sets

Causal Inference with Big Data Sets Causal Inference with Big Data Sets Marcelo Coca Perraillon University of Colorado AMC November 2016 1 / 1 Outlone Outline Big data Causal inference in economics and statistics Regression discontinuity

More information

Bounds on Causal Effects in Three-Arm Trials with Non-compliance. Jing Cheng Dylan Small

Bounds on Causal Effects in Three-Arm Trials with Non-compliance. Jing Cheng Dylan Small Bounds on Causal Effects in Three-Arm Trials with Non-compliance Jing Cheng Dylan Small Department of Biostatistics and Department of Statistics University of Pennsylvania June 20, 2005 A Three-Arm Randomized

More information

Least Absolute Value vs. Least Squares Estimation and Inference Procedures in Regression Models with Asymmetric Error Distributions

Least Absolute Value vs. Least Squares Estimation and Inference Procedures in Regression Models with Asymmetric Error Distributions Journal of Modern Applied Statistical Methods Volume 8 Issue 1 Article 13 5-1-2009 Least Absolute Value vs. Least Squares Estimation and Inference Procedures in Regression Models with Asymmetric Error

More information

Instrumental Variables in Action

Instrumental Variables in Action Instrumental Variables in Action Remarks in honor of P.G. Wright s 150th birthday Joshua D. Angrist MIT and NBER October 2011 What is Econometrics Anyway? What s the difference between statistics and econometrics?

More information

Potential Outcomes Model (POM)

Potential Outcomes Model (POM) Potential Outcomes Model (POM) Relationship Between Counterfactual States Causality Empirical Strategies in Labor Economics, Angrist Krueger (1999): The most challenging empirical questions in economics

More information

Econometric Analysis of Cross Section and Panel Data

Econometric Analysis of Cross Section and Panel Data Econometric Analysis of Cross Section and Panel Data Jeffrey M. Wooldridge / The MIT Press Cambridge, Massachusetts London, England Contents Preface Acknowledgments xvii xxiii I INTRODUCTION AND BACKGROUND

More information

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Vadim Marmer University of British Columbia Artyom Shneyerov CIRANO, CIREQ, and Concordia University August 30, 2010 Abstract

More information

Empirical Methods in Applied Microeconomics

Empirical Methods in Applied Microeconomics Empirical Methods in Applied Microeconomics Jörn-Ste en Pischke LSE November 2007 1 Nonlinearity and Heterogeneity We have so far concentrated on the estimation of treatment e ects when the treatment e

More information

Partial Identification of the Distribution of Treatment Effects

Partial Identification of the Distribution of Treatment Effects Partial Identification of the Distribution of Treatment Effects Brigham R. Frandsen Lars J. Lefgren October 5, 2016 Abstract This article develops bounds on the distribution of treatment effects under

More information

Research Statement. Zhongwen Liang

Research Statement. Zhongwen Liang Research Statement Zhongwen Liang My research is concentrated on theoretical and empirical econometrics, with the focus of developing statistical methods and tools to do the quantitative analysis of empirical

More information

A nonparametric test for seasonal unit roots

A nonparametric test for seasonal unit roots Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna To be presented in Innsbruck November 7, 2007 Abstract We consider a nonparametric test for the

More information