Experiments and Quasi-Experiments


Experiments and Quasi-Experiments (SW Chapter 13)

Outline
1. Potential Outcomes, Causal Effects, and Idealized Experiments
2. Threats to Validity of Experiments
3. Application: The Tennessee STAR Experiment
4. Quasi-Experiments: Differences-in-Differences, IV Estimation, Regression Discontinuity Design, and Other Approaches
5. Threats to Validity of Quasi-Experiments
6. Heterogeneous Causal Effects

SW Ch. 13 1/68

Why study experiments?

- Ideal randomized controlled experiments provide a conceptual benchmark for assessing observational studies.
- Actual experiments are rare ($$$) but influential.
- Experiments can overcome the threats to internal validity of observational studies, but they have their own threats to internal and external validity.
- Thinking about experiments helps us to understand quasi-experiments, or "natural experiments," in which natural variation induces "as if" random assignment.

SW Ch. 13 2/68

Terminology: experiments and quasi-experiments

- An experiment is designed and implemented consciously by human researchers. An experiment randomly assigns subjects to treatment and control groups (think of clinical drug trials).
- A quasi-experiment, or natural experiment, has a source of randomization that is "as if" randomly assigned, but this variation was not the result of an explicit randomized treatment and control design.
- Program evaluation is the field of statistics aimed at evaluating the effect of a program or policy, for example, an ad campaign to cut smoking, or a job training program.

SW Ch. 13 3/68

Different Types of Experiments: Three Examples

- Clinical drug trial: does a proposed drug lower cholesterol?
  o Y = cholesterol level
  o X = treatment or control group (or dose of drug)
- Job training program (Job Training Partnership Act)
  o Y = has a job, or not (or Y = wage income)
  o X = went through experimental program, or not
- Class size effect (Tennessee class size experiment)
  o Y = test score (Stanford Achievement Test)
  o X = class size treatment group (regular, regular + aide, small)

SW Ch. 13 4/68

Potential Outcomes, Causal Effects, and Idealized Experiments (SW Section 13.1)

- A treatment has a causal effect for a given individual: give the individual the treatment and something happens, which is (possibly) different from what happens if you don't get the treatment.
- A potential outcome is the outcome for an individual under a potential treatment or potential non-treatment.
- For an individual, the causal effect is the difference in potential outcomes if you do or don't get the treatment.
- An individual's causal effect cannot be observed because you can give the subject the treatment, or not, but not both!

SW Ch. 13 5/68

Average Treatment Effect

- In general, different people have different treatment effects.
- For people drawn from a population, the average treatment effect is the population mean value of the individual treatment effects.
- For now, consider the case of a single treatment effect, that is, everyone's treatment effect is the same in the population under study.

SW Ch. 13 6/68

Estimating the average treatment effect in an ideal randomized controlled experiment

- An ideal randomized controlled experiment randomly assigns subjects to treatment and control groups.
- Let X be the treatment variable and Y the outcome variable of interest.
- If X is randomly assigned (for example by computer), then X is independent of all individual characteristics.
- Thus, in the regression model,

    Y_i = β_0 + β_1X_i + u_i,

  if X_i is randomly assigned, then X_i is independent of u_i, so E(u_i|X_i) = 0, so OLS yields an unbiased estimator of β_1.
- The causal effect is the population value of β_1 in an ideal randomized controlled experiment.

SW Ch. 13 7/68

Estimating the average treatment effect, ctd.

    Y_i = β_0 + β_1X_i + u_i

- When the treatment is binary, β̂_1 is just the difference in mean outcome (Y) in the treatment vs. control group (Ȳ_treated − Ȳ_control).
- This difference in means is sometimes called the differences estimator.

SW Ch. 13 8/68
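As a quick numerical check of this equivalence, here is a minimal simulation (numpy only; the sample size and the true effect of 2.0 are made-up illustrative numbers, not from the chapter). For a binary randomized treatment, the OLS slope and the difference in group means coincide exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Randomized binary treatment X; outcome Y with a (hypothetical) true effect of 2.0
X = rng.integers(0, 2, n)
Y = 5.0 + 2.0 * X + rng.normal(0, 1, n)

# Differences estimator: mean(Y | treated) - mean(Y | control)
diff_means = Y[X == 1].mean() - Y[X == 0].mean()

# OLS slope from regressing Y on a constant and X
beta = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0]

print(diff_means, beta[1])  # numerically identical; both near the true effect 2.0
```

With a binary regressor and an intercept, the OLS slope is algebraically the difference in group means, so the two numbers agree to machine precision.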

From potential outcomes to regression: the math

Consider subject i drawn at random from a population and let:
- X_i = 1 if subject i treated, 0 if not (binary treatment)
- Y_i(0) = potential outcome for subject i if untreated
- Y_i(1) = potential outcome for subject i if treated

We observe (Y_i, X_i), where Y_i is the observed outcome:

    Y_i = Y_i(1)X_i + Y_i(0)(1 − X_i)   (think about it)
        = Y_i(0) + [Y_i(1) − Y_i(0)]X_i   (algebra)
        = E[Y_i(0)] + [Y_i(1) − Y_i(0)]X_i + [Y_i(0) − E(Y_i(0))]   (more algebra)

where the expectation is over the population distribution.

SW Ch. 13 9/68

Potential outcomes to regression, ctd.

Thus

    Y_i = E[Y_i(0)] + [Y_i(1) − Y_i(0)]X_i + [Y_i(0) − E(Y_i(0))]
        = β_0 + β_1i X_i + u_i

where
- β_0 = E[Y_i(0)]
- β_1i = Y_i(1) − Y_i(0) = individual causal effect
- u_i = Y_i(0) − E(Y_i(0)), so E(u_i) = 0.

When there is a single treatment effect (the case we consider here), β_1i = β_1, so we obtain the usual regression model,

    Y_i = β_0 + β_1X_i + u_i

where β_1 is the treatment (causal) effect.

SW Ch. 13 10/68

Additional regressors

Let X_i = treatment variable and W_i = control variable(s).

    Y_i = β_0 + β_1X_i + β_2W_i + u_i

Two reasons to include W in a regression analysis of the effect of a randomly assigned treatment:
1. If X_i is randomly assigned, then X_i is uncorrelated with W_i, so omitting W_i doesn't result in omitted variable bias. But including W_i reduces the error variance and can result in smaller standard errors.
2. If the probability of assignment depends on W_i, so that X_i is randomly assigned given W_i, then omitting W_i can lead to OV bias, but including it eliminates that OV bias. This situation is called randomization based on covariates.

SW Ch. 13 11/68

Randomization based on covariates

Example: men (W_i = 0) and women (W_i = 1) are randomly assigned to a course on table manners (X_i), but women are assigned with a higher probability than men. Suppose women have better table manners than men prior to the course. Then even if the course has no effect, the treatment group will have better post-course table manners than the control group, because the treatment group has a higher fraction of women than the control group.

That is, the OLS estimator of β_1 in the regression

    Y_i = β_0 + β_1X_i + u_i

has omitted variable bias, which is eliminated by the regression

    Y_i = β_0 + β_1X_i + β_2W_i + u_i

SW Ch. 13 12/68

Randomization based on covariates, ctd.

    Y_i = β_0 + β_1X_i + β_2W_i + u_i

- In this example, X_i is randomly assigned, given W_i, so E(u_i|X_i, W_i) = E(u_i|W_i).
  o In words, among women, treatment is randomly assigned, so among women, the error term is independent of X_i; so, among women, its mean doesn't depend on X_i. The same is true among men.
- Thus if randomization is based on covariates, conditional mean independence holds, so that once W_i is included in the regression, the OLS estimator is unbiased, as was discussed in Ch. 7.

SW Ch. 13 13/68
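The table-manners logic can be made concrete with a small simulation. All numbers below are made up for illustration: the course has a true effect of zero, women (W = 1) start with better manners and are assigned to the course with higher probability. The short regression that omits W is biased upward; adding W removes the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical setup: W = 1 for women, who are treated with higher probability
W = rng.integers(0, 2, n)
p = np.where(W == 1, 0.8, 0.2)           # assignment probability depends on W
X = (rng.random(n) < p).astype(float)

# True model: course effect is ZERO; women start 3 points better
Y = 10.0 + 0.0 * X + 3.0 * W + rng.normal(0, 1, n)

def ols(cols, y):
    """OLS coefficients for the given regressor columns."""
    return np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]

b_short = ols([np.ones(n), X], Y)        # omits W -> omitted variable bias
b_long = ols([np.ones(n), X, W], Y)      # includes W -> unbiased

print(b_short[1], b_long[1])  # short slope far from 0, long slope near 0
```

The short slope picks up the women's head start (here roughly 3 × cov(X, W)/var(X)), while the long regression recovers the true zero effect, exactly as conditional mean independence predicts.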

Estimating causal effects that depend on observables

The causal effect in the previous example might depend on observables, perhaps β_1,men > β_1,women (men could benefit more from the course than women). We already know how to estimate different coefficients for different groups: use interactions. In the table manners example, we would simply estimate the interactions model,

    Y_i = β_0 + β_1X_i + β_2(X_i×W_i) + β_3W_i + u_i

Because interactions were covered in Ch. 8, for simplicity in Ch. 13 we ignore differences in causal effects that depend on observable W's. We return to differences in the β_1i's (unobserved heterogeneity, in contrast to heterogeneity that depends on observable variables like gender) below.

SW Ch. 13 14/68

Threats to Validity of Experiments (SW Section 13.2)

Threats to Internal Validity

1. Failure to randomize (or imperfect randomization)
- For example, openings in a job treatment program are filled on a first-come, first-served basis; latecomers are controls.
- The result is correlation between X and u.

SW Ch. 13 15/68

Threats to internal validity, ctd.

2. Failure to follow treatment protocol (or "partial compliance")
- Some controls get the treatment; some of those who should be treated aren't.
- If you observe whether the subject actually receives treatment (X), if you know whether the individual was initially assigned to a treatment group (Z), and if initial assignment was random, then you can estimate the causal effect using initial assignment as an instrument for actual treatment.

SW Ch. 13 16/68
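Here is a sketch of that IV fix under partial compliance, using made-up numbers. Unobserved ability drives both the outcome and who ignores their assignment (high-ability "always-takers" get treated regardless; low-ability "never-takers" refuse), so the naive differences estimator on actual treatment is biased, while the Wald/IV estimator using random initial assignment Z recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical setup: unobserved ability affects outcomes AND compliance
ability = rng.normal(0, 1, n)
Z = rng.integers(0, 2, n)                 # random initial assignment
always = ability > 1.0                    # take treatment no matter what
never = ability < -1.0                    # refuse treatment no matter what
X = np.where(always, 1, np.where(never, 0, Z)).astype(float)  # actual treatment

Y = 1.0 + 2.0 * X + ability + rng.normal(0, 1, n)  # true effect = 2.0

# Naive differences estimator on actual treatment: biased, corr(X, u) != 0
naive = Y[X == 1].mean() - Y[X == 0].mean()

# IV (Wald) estimator: reduced form over first stage, Z as instrument for X
wald = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (X[Z == 1].mean() - X[Z == 0].mean())

print(naive, wald)  # naive is biased up; wald is close to 2.0
```

Because Z was randomized, it is uncorrelated with ability, so scaling the assignment effect on Y by the assignment effect on X isolates the causal effect for the compliers.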

Threats to internal validity, ctd.

3. Attrition (some subjects drop out)
- Suppose the controls who get jobs move out of town; then corr(X, u) ≠ 0.
- This is a reincarnation of sample selection bias from Ch. 9 (the sample is selected in a way related to the outcome variable).

SW Ch. 13 17/68

Threats to internal validity, ctd.

4. Experimental effects
- Experimenter bias (conscious or subconscious): treatment X is associated with extra effort or extra care, so corr(X, u) ≠ 0.
- Subject behavior might be affected by being in an experiment, so corr(X, u) ≠ 0 (Hawthorne effect).

Just as in regression analysis with observational data, threats to the internal validity of regression with experimental data imply that corr(X, u) ≠ 0, so OLS (the differences estimator) is biased.

SW Ch. 13 18/68

Threats to External Validity

1. Nonrepresentative sample
2. Nonrepresentative treatment (that is, program or policy)
3. General equilibrium effects (the effect of a program can depend on its scale; "admissions counseling")

SW Ch. 13 19/68

Experimental Estimates of the Effect of Class Size Reductions (SW Section 13.3)

Project STAR (Student-Teacher Achievement Ratio)
- 4-year study, $12 million
- Upon entering the school system, a student was randomly assigned to one of three groups:
  o regular class (22-25 students)
  o regular class + aide
  o small class (13-17 students)
- Regular class students were re-randomized after the first year to regular or regular + aide.
- Y = Stanford Achievement Test scores

SW Ch. 13 20/68

Deviations from experimental design

- Partial compliance:
  o 10% of students switched treatment groups because of "incompatibility" and "behavior problems" (how much of this was because of parental pressure?)
  o Newcomers: incomplete receipt of treatment for those who moved into the district after grade 1
- Attrition:
  o Students move out of the district.
  o Students leave for private/religious schools.
  o This is only a problem if their departure is related to Y_i; for example, if high-achieving kids leave because they are assigned to a large class, then large classes will spuriously appear to do relatively worse (corr(u_i, X_i) > 0).

SW Ch. 13 21/68

Regression analysis

The differences regression model:

    Y_i = β_0 + β_1SmallClass_i + β_2RegAide_i + u_i

where
- SmallClass_i = 1 if in a small class
- RegAide_i = 1 if in a regular class with an aide

Additional regressors (W's):
- teacher experience
- free lunch eligibility
- gender, race

SW Ch. 13 22/68

Differences estimates (no W's)

[Table of estimates not reproduced in this transcription.]

SW Ch. 13 23/68


How big are these estimated effects?

- Put on the same basis by dividing by the standard deviation of Y.
- Units are now standard deviations of test scores.

SW Ch. 13 25/68

How do these estimates compare to those from the California and Massachusetts observational studies? (Ch. 4-9)

SW Ch. 13 26/68

A conditional mean independence example from STAR: What is the effect on Y of X = teacher's years of experience?

More on the design of Project STAR:
- Teachers were randomly assigned to small/regular/regular + aide classrooms within their normal school; teachers didn't change schools as part of the experiment.
- Because teacher experience differed systematically across schools (more experienced teachers in more affluent school districts), a regression of test scores on teacher experience would have omitted variable bias, and the estimated effect on test scores of teacher experience would be biased up (overstated).

SW Ch. 13 27/68

However, the design implies conditional mean independence:
- W = full set of school binary indicators
- Given W (school), X is randomly assigned (teachers are randomly assigned to classrooms and students).
- W is plausibly correlated with u (nonzero school fixed effects: some schools are richer than others).
- Thus E(u|X) ≠ 0 but E(u|X, W) = E(u|W) (conditional mean independence).
- The key is that teacher randomization is stratified by school: X is randomly assigned given W.
- The coefficient on the school identity (W) is not a causal effect (think about it).

SW Ch. 13 28/68


Example: effect of teacher experience, ctd.

- Without school fixed effects (2), the estimated effect of an additional year of experience is 1.47 (SE = 0.17).
- Controlling for the school in (3), the estimated effect of an additional year of experience is 0.74 (SE = 0.17).
- Does the difference between (2) and (3) make sense? The OLS estimator of the coefficient on years of experience is biased up without school effects; with school effects, OLS is an unbiased estimator of the causal effect of X.

SW Ch. 13 30/68

Another example of a well-done randomized controlled experiment: Program Evaluation of Teach for America Full report is available (follow links) at: http://www.teachforamerica.org/mathematica.html SW Ch. 13 31/68

Summary: The Tennessee Class Size Experiment

- Remaining threats to internal validity: partial compliance/incomplete treatment
  o Can use TSLS with Z = initial assignment.
  o It turns out that TSLS and OLS estimates are similar (Krueger (1999)), so this bias seems not to be large.
- Main findings:
  o The effects are small quantitatively (the same size as the gender difference).
  o The effect is sustained but not cumulative or increasing; the biggest effect is at the youngest grades.

SW Ch. 13 32/68

Quasi-Experiments (SW Section 13.4) A quasi-experiment or natural experiment has a source of randomization that is as if randomly assigned, but this variation was not the result of an explicit randomized treatment and control design. SW Ch. 13 33/68

Two types of quasi-experiments

a) Treatment (X) is "as if" randomly assigned (perhaps conditional on some control variables W).
   Example: Effect of marginal tax rates on labor supply. X = marginal tax rate (the tax rate changes in one state, not another; the state is "as if" randomly assigned).

b) A variable (Z) which influences receipt of treatment (X) is "as if" randomly assigned, so we can run IV and use Z as an instrument for X.
   Example: Effect on survival of cardiac catheterization. X = cardiac catheterization; Z = differential distance to a CC hospital.

SW Ch. 13 34/68

Econometric methods

(a) Treatment (X) is "as if" randomly assigned: OLS

The differences-in-differences estimator uses two measurements of Y, pre- and post-treatment, and estimates the treatment effect as the difference between the pre- and post-treatment values of Y for the treatment and control groups. Let:

    Y_i^before = value of Y for subject i before the experiment
    Y_i^after = value of Y for subject i after the experiment

SW Ch. 13 35/68

The differences-in-differences estimator, ctd.

    β̂_1^(diffs-in-diffs) = (Ȳ^treat,after − Ȳ^treat,before) − (Ȳ^control,after − Ȳ^control,before)

SW Ch. 13 36/68

The differences-in-differences estimator, ctd.

Differences regression formulation:

    ΔY_i = β_0 + β_1X_i + u_i

where
- ΔY_i = Y_i^after − Y_i^before
- X_i = 1 if treated, = 0 otherwise
- β̂_1 is the diffs-in-diffs estimator.

The differences-in-differences estimator allows for systematic differences in pre-treatment characteristics, which can happen in a quasi-experiment because treatment is not randomly assigned.

SW Ch. 13 37/68
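The four-means formula and the differences regression are the same estimator, which a short simulation can confirm (all numbers are made up: the treated group starts at a lower level, both groups share a common trend of 0.7, and the true treatment effect is 1.5):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Hypothetical quasi-experiment: treated group starts lower, trends are parallel
X = rng.integers(0, 2, n)                        # treatment group indicator
base = 10.0 - 2.0 * X + rng.normal(0, 1, n)      # pre-existing level difference
Y_before = base + rng.normal(0, 0.5, n)
Y_after = base + 0.7 + 1.5 * X + rng.normal(0, 0.5, n)  # common trend + effect 1.5

# Four-means version of diffs-in-diffs
did = ((Y_after[X == 1].mean() - Y_before[X == 1].mean())
       - (Y_after[X == 0].mean() - Y_before[X == 0].mean()))

# Differences-regression version: regress delta-Y on a constant and X
dY = Y_after - Y_before
beta = np.linalg.lstsq(np.column_stack([np.ones(n), X]), dY, rcond=None)[0]

print(did, beta[1])  # identical; both recover ~1.5 despite the level difference
```

Note that a simple post-period comparison of means would be badly biased here (the treated group starts 2 points lower); differencing out each group's pre-treatment level is what removes that systematic difference.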

Differences-in-differences with control variables

    ΔY_it = β_0 + β_1X_it + β_2W_1it + … + β_(1+r)W_rit + u_it,

where X_it = 1 if the treatment is received, = 0 otherwise.

Why include control variables? For the usual reason: if the treatment (X) is "as if" randomly assigned, given W, then u is conditionally mean independent of X:

    E(u|X, W) = E(u|W)

and including W results in the OLS estimator of β_1 being unbiased.

SW Ch. 13 38/68

Differences-in-differences with multiple time periods

The drunk driving law analysis of Ch. 10 can be thought of as a quasi-experiment panel data design: if (given the control variables) the beer tax is "as if" randomly assigned, then the causal effect of the beer tax (the elasticity) can be estimated by panel data regression. The tools of Ch. 10 apply.

Ignoring W's, the differences-in-differences estimator obtains from including individual fixed effects and time effects:

    Y_it = α_i + δ_t + β_1X_it + u_it.

If T = 2 (2 periods) and the treatment is in the second period, then ΔX_it = X_it (why?) and the fixed effects/time effects regression becomes

    ΔY_it = β_0 + β_1ΔX_it + u_it

SW Ch. 13 39/68

IV estimation

- If a variable (Z) that influences treatment (X) is "as if" randomly assigned, conditional on W, then Z can be used as an instrumental variable for X in an IV regression that includes the control variables W.
- We encountered this in Ch. 12 (IV regression). The concept of "as if" randomization has proven to be a fruitful way to think of instrumental variables. One example is the location of a heart attack in the cardiac catheterization study (the location being "as if" randomly assigned).

SW Ch. 13 40/68

Regression Discontinuity Estimators

- If treatment occurs when some continuous variable W crosses a threshold w_0, then you can estimate the treatment effect by comparing individuals with W just below the threshold (treated) to those with W just above the threshold (untreated).
- If the direct effect on Y of W is continuous, the effect of treatment should show up as a jump in the outcome. The magnitude of this jump estimates the treatment effect.
- In a sharp regression discontinuity design, everyone above (or below) the threshold w_0 gets treatment.
- In a fuzzy regression discontinuity design, crossing the threshold w_0 influences the probability of treatment, but that probability is between 0 and 1.

SW Ch. 13 41/68

Sharp Regression Discontinuity

Everyone with W < w_0 gets treated, so X_i = 1 if W_i < w_0 and X_i = 0 otherwise. The treatment effect, β_1, can be estimated by OLS:

    Y_i = β_0 + β_1X_i + β_2W_i + u_i

If crossing the threshold affects Y_i only through the treatment, then E(u_i|X_i, W_i) = E(u_i|W_i), so β̂_1 is unbiased.

SW Ch. 13 42/68
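A minimal sharp-RD sketch (made-up numbers: cutoff w_0 = 0, Y depends linearly on W, true jump at the cutoff = 2.0). The OLS regression of Y on the treatment dummy and the running variable recovers the jump:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical sharp RD: everyone below the cutoff w0 = 0 is treated
W = rng.uniform(-1, 1, n)                    # running variable
X = (W < 0.0).astype(float)                  # sharp assignment rule
Y = 3.0 + 2.0 * X + 1.0 * W + rng.normal(0, 1, n)  # smooth in W, jump of 2.0

# OLS of Y on a constant, the treatment dummy, and W estimates the jump
beta = np.linalg.lstsq(np.column_stack([np.ones(n), X, W]), Y, rcond=None)[0]
print(beta[1])  # close to the true jump 2.0
```

In practice the direct effect of W need not be linear; applied work typically fits flexible functions of W on each side of the cutoff, or restricts the sample to a narrow window around w_0, but the idea of estimating the jump is the same.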

Sharp regression discontinuity design in a picture: Treatment occurs for everyone with W < w 0, and the treatment effect is the jump or discontinuity. SW Ch. 13 43/68

Fuzzy Regression Discontinuity

Let:
- X_i = binary treatment variable
- Z_i = 1 if W_i < w_0, and Z_i = 0 otherwise.

If crossing the threshold has no direct effect on Y_i, so it affects Y_i only by influencing the probability of treatment, then E(u_i|Z_i, W_i) = 0. Thus Z_i is an exogenous instrument for X_i.

Example: Matsudaira, Jordan D. (2008). "Mandatory Summer School and Student Achievement." Journal of Econometrics 142: 829-850. This paper studies the effect of mandatory summer school by comparing the subsequent performance of students who fell just below, and just above, the grade cutoff at which summer school was required.

SW Ch. 13 44/68

Potential Problems with Quasi-Experiments (SW Section 13.5)

The threats to the internal validity of a quasi-experiment are the same as for a true experiment, with one addition.

1. Failure to randomize (imperfect randomization)
   a. Is the "as if" randomization really random, so that X (or Z) is uncorrelated with u?
2. Failure to follow treatment protocol
3. Attrition (n.a.)
4. Experimental effects (n.a.)
5. Instrument invalidity (relevance + exogeneity). (Maybe healthier patients do live closer to CC hospitals; they might have better access to care in general.)

SW Ch. 13 45/68

The threats to the external validity of a quasi-experiment are the same as for an observational study.

1. Nonrepresentative sample
2. Nonrepresentative treatment (that is, program or policy)

Example: Cardiac catheterization. The CC study has better external validity than controlled clinical trials because the CC study uses observational data based on real-world implementation of cardiac catheterization. However, they used data from the early '90s; do the findings apply to CC usage today?

SW Ch. 13 46/68

Experimental and Quasi-Experimental Estimates in Heterogeneous Populations (SW Section 13.6)

- By a heterogeneous population we mean a population in which the treatment effect differs from one person to the next.
- In the potential outcome terminology, each individual's treatment effect is β_1i = Y_i(1) − Y_i(0), where Y_i(1) is individual i's potential outcome if treated and Y_i(0) is i's potential outcome if untreated.
- In general, β_1i differs across people in unobservable ways:
  o The effect of a job training program probably depends on motivation.
  o The effect of a cholesterol-lowering drug could depend on unobserved health factors.

SW Ch. 13 47/68

Population heterogeneity, ctd.

    β_1i = Y_i(1) − Y_i(0) = person i's treatment effect

If this variation depends on observed variables, then this is a job for interaction variables! We know how to do this already. What if the source of variation is unobserved? This raises two questions:
1. What do we want to estimate?
2. What do our usual tools (OLS, IV) deliver when there is population heterogeneity? Do OLS and IV give us what we want, or something different?

SW Ch. 13 48/68

Population heterogeneity, ctd.

1. What do we want to estimate?
- Most commonly, we want to estimate the average treatment effect in the population:

    β_1 = E(β_1i) = E[Y_i(1) − Y_i(0)]

- This is the mean outcome we would get if everyone in the population were treated (all high-cholesterol patients took the drug; all qualified unemployed took the job training program).
- Another possibility is the effect of treatment on the treated, that is, the treatment effect for those who get treated: E[β_1i | X_i = 1].
- We will focus on the average treatment effect.

SW Ch. 13 49/68

2. What do our usual tools (OLS, IV) deliver when there is population heterogeneity?

(a) OLS with heterogeneity and random assignment

Recall that, starting with the definitions of potential outcomes, we derived the regression model,

    Y_i = β_0 + β_1i X_i + u_i   (from above)
        = β_0 + β_1X_i + (β_1i − β_1)X_i + u_i   (algebra)
        = β_0 + β_1X_i + v_i   (redefinition)

where
- v_i = (β_1i − β_1)X_i + u_i
- β_1 = E(β_1i) = average treatment effect.

If E(v_i|X_i) = 0, then β̂_1 is an unbiased estimator of the average treatment effect, β_1.

SW Ch. 13 50/68

OLS with heterogeneity and random assignment, ctd.

    Y_i = β_0 + β_1X_i + v_i, where v_i = (β_1i − β_1)X_i + u_i

Check:

    E(v_i|X_i) = E[(β_1i − β_1)X_i + u_i | X_i]
               = E[(β_1i − β_1)X_i | X_i] + E(u_i|X_i)
               = E[(β_1i − β_1) | X_i]X_i + E(u_i|X_i)
               = 0

because E[(β_1i − β_1) | X_i] = 0 and E(u_i|X_i) = 0, which both follow from X_i being randomly assigned and therefore independent of all individual characteristics. Thus E(β̂_1) = β_1, so OLS does what we want it to!

SW Ch. 13 51/68
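A simulation makes this concrete (illustrative numbers: individual effects β_1i drawn with mean 2.0, so the ATE is 2.0, and X randomized independently of β_1i). OLS on the randomized treatment recovers the average treatment effect even though no individual need have an effect of exactly 2.0:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Heterogeneous effects: each i has its own beta_1i; ATE = E[beta_1i] = 2.0
beta_1i = rng.normal(2.0, 1.0, n)
X = rng.integers(0, 2, n).astype(float)   # randomized, independent of beta_1i
u = rng.normal(0, 1, n)
Y = 1.0 + beta_1i * X + u

# OLS of Y on a constant and X
b = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0]
print(b[1])  # close to the ATE of 2.0
```

Randomization is doing the work here: because X is independent of β_1i, the composite error v_i = (β_1i − β_1)X_i + u_i has conditional mean zero given X_i.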

(b) IV with heterogeneity and random assignment

Suppose the treatment effect is heterogeneous and the effect of the instrument on X is heterogeneous:

    Y_i = β_0 + β_1i X_i + u_i    (equation of interest)
    X_i = π_0 + π_1i Z_i + v_i    (first stage of TSLS)

In general, TSLS estimates the causal effect for those whose value of X (probability of treatment) is most influenced by the instrument, which is called the Local Average Treatment Effect (LATE).

SW Ch. 13 52/68

IV with heterogeneous causal effects, ctd.

    Y_i = β_0 + β_1i X_i + u_i    (equation of interest)
    X_i = π_0 + π_1i Z_i + v_i    (first stage of TSLS)

Intuition:
- If for some people π_1i = 0, then their predicted value of X_i wouldn't depend on Z, so the IV estimator would ignore them.
- The IV estimator puts most of the weight on individuals for whom Z has a large influence on X.
- TSLS measures the treatment effect for those whose probability of treatment is most influenced by Z.

SW Ch. 13 53/68

The math

    Y_i = β_0 + β_1i X_i + u_i    (equation of interest)
    X_i = π_0 + π_1i Z_i + v_i    (first stage of TSLS)

To simplify things, suppose:
- β_1i and π_1i are distributed independently of (u_i, v_i, Z_i)
- E(u_i|Z_i) = 0 and E(v_i|Z_i) = 0
- E(π_1i) ≠ 0

Then

    β̂_1^TSLS →p E(β_1i π_1i) / E(π_1i)

(derived in SW App. 13.4). TSLS estimates the causal effect for those individuals for whom Z is most influential (those with large π_1i).

SW Ch. 13 54/68

The Local Average Treatment Effect (LATE):

    β̂_1^TSLS →p E(β_1i π_1i) / E(π_1i)

- TSLS estimates the causal effect for those individuals for whom Z is most influential (those with large π_1i).
- E(β_1i π_1i) / E(π_1i) is called the local average treatment effect (LATE): it is the average treatment effect for those in a "local" region who are most heavily influenced by Z.
- In general, LATE is neither the average treatment effect nor the effect of treatment on the treated.

SW Ch. 13 55/68
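The probability limit E(β_1i π_1i)/E(π_1i) can be verified by simulation (made-up numbers: β_1i has mean 2.0, so the ATE is 2.0, and π_1i is chosen to be positively correlated with β_1i, which pushes the LATE above the ATE):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

# Heterogeneous treatment effects and heterogeneous first-stage coefficients,
# positively correlated -> LATE > ATE. All numbers are illustrative.
beta_1i = rng.normal(2.0, 1.0, n)            # ATE = E[beta_1i] = 2.0
pi_1i = 0.5 + 0.3 * (beta_1i - 2.0)          # cov(pi_1i, beta_1i) = 0.3
Z = rng.normal(0, 1, n)
X = 1.0 + pi_1i * Z + rng.normal(0, 1, n)    # first stage
Y = 0.5 + beta_1i * X + rng.normal(0, 1, n)  # equation of interest

iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]      # IV with a single instrument
late = np.mean(beta_1i * pi_1i) / np.mean(pi_1i)  # E[beta*pi]/E[pi]

print(iv, late)  # IV converges to the LATE, not to the ATE of 2.0
```

Here the LATE works out to ATE + cov(β_1i, π_1i)/E(π_1i) = 2.0 + 0.3/0.5 = 2.6, and the IV estimate lands near 2.6 rather than 2.0: the individuals the instrument moves the most also happen to have the largest treatment effects.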

LATE, ctd.

    β̂_1^TSLS →p E(β_1i π_1i) / E(π_1i) = LATE

Recall the covariance fact,

    E(β_1i π_1i) = E(β_1i)E(π_1i) + cov(β_1i, π_1i)

so

    LATE = E(β_1i π_1i) / E(π_1i)
         = [E(β_1i)E(π_1i) + cov(β_1i, π_1i)] / E(π_1i)
         = E(β_1i) + cov(β_1i, π_1i) / E(π_1i)

or

    LATE = ATE + cov(β_1i, π_1i) / E(π_1i)

SW Ch. 13 56/68

LATE, ctd.

    LATE = ATE + cov(β_1i, π_1i) / E(π_1i)

- If the treatment effect is large for individuals for whom the effect of the instrument is also large, then cov(β_1i, π_1i) > 0 and LATE > ATE (if E(π_1i) > 0).
- In the binary case, LATE is the treatment effect for those whose probability of receipt of treatment is most heavily influenced by Z.
  o If you always (or never) get treated, you don't show up in the limit of the IV estimator (in LATE).

SW Ch. 13 57/68

LATE, ctd.

- When there are heterogeneous causal effects, what TSLS estimates depends on the choice of instruments! With different instruments, TSLS estimates different weighted averages!
- Suppose you have two valid instruments, Z_1 and Z_2.
  o In general these instruments will be influential for different members of the population.
  o Using Z_1, TSLS will estimate the treatment effect for those people whose probability of treatment (X) is most influenced by Z_1.
  o The LATE for Z_1 might differ from the LATE for Z_2.
  o If so, the J-statistic will tend to reject even if both Z_1 and Z_2 are exogenous! (Why?)

SW Ch. 13 58/68

When does TSLS estimate the average causal effect?

    Y_i = β_0 + β_1i X_i + u_i    (equation of interest)
    X_i = π_0 + π_1i Z_i + v_i    (first stage of TSLS)

    β̂_1^TSLS →p LATE = ATE + cov(β_1i, π_1i) / E(π_1i)

LATE = the average treatment effect (a.k.a. average causal effect), that is, β̂_1^TSLS →p β_1 = E(β_1i), if:
(a) β_1i = β_1 (no heterogeneity in the equation of interest); or
(b) π_1i = π_1 (no heterogeneity in the first stage equation); or
(c) β_1i and π_1i vary but are independently distributed.

Otherwise, β̂_1^TSLS does not estimate E(β_1i). Whether this is important depends on the application.

SW Ch. 13 59/68

LATE, ctd: Example #1: Cigarette elasticity

$Y_i$ = cigarette consumption in state i
$X_i$ = price per pack in state i
$Z_i$ = cigarette tax in state i
$\beta_{1i}$ = price elasticity in state i
$\pi_{1i}$ = effect of cigarette tax on price in state i

LATE = the average causal effect if:
(a) $\beta_{1i} = \beta_1$ (no heterogeneity in the equation of interest); or
(b) $\pi_{1i} = \pi_1$ (no heterogeneity in the first-stage equation); or
(c) $\beta_{1i}$ and $\pi_{1i}$ vary but are independently distributed.
How close is LATE to the average causal effect in this example?

LATE, ctd: Example #2: Cardiac catheterization

$Y_i$ = survival time (days) for AMI patients
$X_i$ = received cardiac catheterization (or not)
$Z_i$ = differential distance to CC hospital

Equation of interest: $SurvivalDays_i = \beta_0 + \beta_{1i} CardCath_i + u_i$
First stage (linear probability model): $CardCath_i = \pi_0 + \pi_{1i} Distance_i + v_i$

For whom does distance have the greatest effect on the probability of treatment? For those patients, what is their causal effect $\beta_{1i}$?

LATE and cardiac catheterization example, ctd.

Equation of interest: $SurvivalDays_i = \beta_{0i} + \beta_{1i} CardCath_i + u_i$
First stage (linear probability model): $CardCath_i = \pi_{0i} + \pi_{1i} Distance_i + v_i$

LATE = the average causal effect if:
(a) $\beta_{1i} = \beta_1$ (no heterogeneity in the equation of interest); or
(b) $\pi_{1i} = \pi_1$ (no heterogeneity in the first-stage equation); or
(c) $\beta_{1i}$ and $\pi_{1i}$ vary but are independently distributed.
How close is LATE to the average causal effect in this example?

LATE and cardiac catheterization example, ctd.

TSLS estimates the causal effect for those whose value of $X_i$ is most heavily influenced by $Z_i$: here, those patients for whom distance most influences the probability of treatment. What is their causal effect?
o If, in the expert judgment of the EMT, CC wouldn't have substantial benefits relative to the cost of making a longer trip, then the patient should just go to the closest hospital. These are the patients who are most heavily influenced by relative distance, so it is their LATE that is being estimated. This is a plausible explanation of why the TSLS estimate is smaller than the clinical trial OLS estimate.

Heterogeneous Causal Effects: Summary

Heterogeneous causal effects means that the causal (or treatment) effect varies across individuals. The average treatment effect is its average value in the population, $E(\beta_{1i})$.
When these differences depend on observable variables, heterogeneous causal effects can be estimated using interactions (nothing new here).
When differences in $\beta_{1i}$ are unobservable, the behavior of OLS and IV can change:
o If $X_i$ is randomly assigned, then OLS is consistent.
o If $Z_i$ is (as-if) randomly assigned, then IV estimates the LATE, which depends on the instrument.
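The "estimated using interactions" case can be sketched as follows (assumed DGP, for illustration): when the causal effect varies with an observable $W_i$, say $\beta_{1i} = b_0 + b_1 W_i$, regressing on $X_i$, $W_i$, and the interaction $X_i W_i$ recovers the heterogeneity.

```python
# Sketch only: effect of randomly assigned X depends on observable W,
# so an OLS regression with an X*W interaction recovers both pieces.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

w = rng.standard_normal(n)                  # observable characteristic
x = rng.standard_normal(n)                  # randomly assigned treatment
beta_i = 1.0 + 0.5 * w                      # effect varies with W
y = beta_i * x + w + rng.standard_normal(n)

# OLS of Y on a constant, X, W, and X*W
X = np.column_stack([np.ones(n), x, w, x * w])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"X coefficient:   {coef[1]:.2f}")   # close to 1.0 (= b_0)
print(f"X*W interaction: {coef[3]:.2f}")   # close to 0.5 (= b_1)
```

This is the "nothing new here" case: ordinary multiple regression with interaction terms handles observable heterogeneity directly.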

Summary: Experiments and Quasi-Experiments (SW Section 13.7)

Ideal experiments and potential outcomes
The average treatment effect is the population mean of the individual treatment effect, which is the difference in potential outcomes when treated and not treated.
The treatment effect estimated in an ideal randomized controlled experiment is unbiased for the average treatment effect.

Summary, ctd.

Actual experiments
Actual experiments face threats to internal validity. Depending on the threat, these can be addressed by:
o panel data regression (differences-in-differences);
o multiple regression (including control variables); and
o IV (using initial assignment as an instrument, possibly with control variables).
External validity also can be an important threat to the validity of experiments.

Summary, ctd.

Quasi-experiments
Quasi-experiments have an as-if randomly assigned source of variation. This as-if random variation can generate:
o $X_i$ which plausibly satisfies $E(u_i|X_i) = 0$ (so estimation proceeds using OLS); or
o instrumental variable(s) which plausibly satisfy $E(u_i|Z_i) = 0$ (so estimation proceeds using TSLS).
Quasi-experiments also have threats to internal validity.

Summary, ctd.

Population heterogeneity
As used here, population heterogeneity means differences in individual causal effects that are unrelated to observables.
If $X_i$ is randomly assigned, then OLS estimates the average causal effect.
If $E(u_i|Z_i) = 0$, then TSLS estimates the local average treatment effect (LATE), which is the average effect of treatment on those most influenced by $Z_i$.