Potential Outcomes and Causal Inference


1 Potential Outcomes and Causal Inference PUBL0050 Week 1 Jack Blumenau Department of Political Science UCL 1 / 47

2 Lecture outline Causality and Causal Inference Course Outline and Logistics The Potential Outcomes Framework (Intuition) The Potential Outcomes Framework (Detail) 2 / 47

3 Causality and Causal Inference

4 Knowledge and causality We do not have knowledge of a thing until we have grasped its why, that is to say, its cause. Aristotle, Physics 3 / 47

5 What are we doing in this course? We will be learning to make inferences about causal relationships: Causality The relationship between events where one set of events (the effects) is a direct consequence of another set of events (the causes). Causal inference The process by which one can use data to make claims about causal relationships. 4 / 47

6 What are we doing in this course? What does this mean in practice? Measuring the effect of some treatment on some outcome What is the effect of canvassing on vote intention? What is the effect of class size on test scores? What is the effect of peace-keeping missions on peace? What is the effect of austerity on support for Brexit? What is the effect of trade on support for Trump? What is the effect of the minimum wage on employment? What is the effect of the number of children on the risk of divorce? etc 5 / 47

7 Causal inference is hard I would rather discover one true cause than gain the Kingdom of Persia Democritus, BC 6 / 47

8 Causal inference and regression A reasonable question is: how is this different from regression, which also measures the effect of X on Y? Regression can play an important role in answering causal questions, but not always. We will see when regression serves as a useful descriptive tool, and when β k can be given a causal interpretation. 7 / 47

9 Course Outline and Logistics

10 Course outline 1: Potential Outcomes and Causal Inference 2: Randomized Experiments 3: Selection on Observables (I) 4: Selection on Observables (II) 5: Panel Data and Difference-in-Differences 6: Synthetic Control 7: Instrumental Variables (I) 8: Instrumental Variables (II) 9: Regression Discontinuity Designs 10: Overview and Review 8 / 47

11 Teaching staff Jack Blumenau Office: B.13, 29/30 Tavistock Square Office Hours: Friday 10:00-12:00 (by appointment) Philipp Broniecki Office: SPP Seminar room G.03, 29/30 Tavistock Square Office Hours: Wednesday 15:00-17:00 (by appointment) Book office hours via Moodle SPPBook 9 / 47

12 Prerequisites An introductory course in statistics/quantitative methods You should be familiar with (at least) t-tests and linear regression You should know what this is (and know how to interpret it): Y_i = α + β_1 X_1 + β_2 X_2 + ε_i If you are unsure, you could take a look at this website, and see how much of the material you recognise You can also speak to me 10 / 47
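Since the course uses R, a minimal sketch of fitting and interpreting a regression of this form may be a useful refresher. All variable names and numbers below are made up for illustration, not taken from the course materials:

```r
# Simulate data consistent with Y_i = alpha + beta_1 X_1 + beta_2 X_2 + e_i
set.seed(50)
n  <- 500
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n)

fit <- lm(y ~ x1 + x2)   # ordinary least squares
summary(fit)             # coefficient estimates, standard errors, t-tests
```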

13 Assessment and key deadlines 100% of the assessment for this course will be via a 3000 word research paper in which you will apply the methods you have learnt to your own research question. I will provide guidance on the structure of these papers I will provide feedback on a one-page summary during the term This is very good preparation for your dissertations Key deadlines: December 7th: One page summary with a) clear research question; b) outline of methodological design; c) identification of data source(s) January 9th: Final project due 11 / 47

14 Learning objectives This course will provide you with: An understanding of the assumptions required to support causal claims made with quantitative data An overview of the most commonly used statistical methods which aim to estimate causal effects The practical skills required to implement these methods using R 12 / 47

15 Course Texts (I) 13 / 47

16 Course Texts (II) 14 / 47

17 Online course resources Course Website: Lecture summaries Seminar tasks Homeworks (set in class, solutions released on Monday) Moodle: (Key: regression) Mostly just links to the website Piazza: piazza.com/ucl.ac.uk/fall2018/publ0050/home All course-material related questions should be posted on Piazza; email is for administrative questions. 15 / 47

18 Seminars Seminar structure 1. Lecture review 2. Seminar tasks 3. Homework Seminar times: 1. Seminar 1: Friday, 10-11am, 26 Bedford Way, G11 2. Seminar 2: Friday, 11-12pm, 26 Bedford Way, G11 3. Seminar 3: Friday, 12-1pm, 26 Bedford Way, 316 If you've not been assigned a seminar yet: 1. Surname A-H: Seminar 1 2. Surname I-Q: Seminar 2 3. Surname R-Z: Seminar 3 16 / 47

19 The Potential Outcomes Framework (Intuition)

20 The Road Not Taken Two roads diverged in a yellow wood, And sorry I could not travel both And be one traveler, long I stood And looked down one as far as I could To where it bent in the undergrowth;... Two roads diverged in a wood, and I I took the one less traveled by, And that has made all the difference. Robert Frost ( ) 17 / 47

21 The Road Not Taken Frost's traveller highlights the central conundrum of causal analysis: An individual ("one traveler") cannot take an action and the opposite action at the same time ("I could not travel both") Once an action is taken, we therefore cannot tell what the world would have been like if an alternative action had been taken, i.e. we cannot tell what is at the end of the road not taken. Frost also does what most of us do: We cannot observe the counterfactual outcomes that would have resulted from taking another path But we can imagine ("And that has made all the difference") The Potential Outcomes Framework: A language for describing the types of comparisons that Frost is making. A potential outcome is what happens at the end of each of the roads that the traveller might take. 18 / 47

22 Potential outcomes What does it mean for a causal factor (e.g. time spent studying) to affect an outcome (e.g. final PUBL0050 grade) for an individual (e.g. you)? In the counterfactual approach to causation: the causal factor affects the outcome if the outcome for the individual would be different for different hypothetical values of the causal factor. These hypothetical outcomes (associated with hypothetical values of the causal factor) are known as potential outcomes. 19 / 47

23 Potential outcomes illustration Imagine we knew the grade a particular individual would receive for different amounts of study time: Each point on the line represents a potential outcome Each point tells us the hypothetical outcome associated with a hypothetical value of our causal factor [Figure: final grade plotted against hours of study] 20 / 47

24 Potential outcomes illustration Imagine we knew the grade a particular individual would receive for different amounts of study time: In this framework, causal effects are defined in terms of these potential outcomes If we could observe all of these outcomes for an individual, we could quantify the effect of spending 10 hours per week studying rather than 5, the average effect of spending an additional hour studying, and so on [Figure: final grade plotted against hours of study] 21 / 47

25 Fundamental Problem of Causal Inference For any given unit/individual we only observe one potential outcome: [Figure: final grade against hours of study, showing only the single observed potential outcome] To make causal inferences we have to impute (make educated guesses about) the other unrealised potential outcomes 22 / 47

26 Making comparisons across units Tempting: make comparisons of realised outcomes across units with different values of the causal factor This can lead to erroneous inferences (conclusions) [Figure: final grades of Person A and Person B at different hours of study] 23 / 47

27 What's the problem? 1. Heterogeneity: Different units have different potential outcomes 2. Confounding/endogeneity: when the values of the causal factor are systematically related to the potential outcomes (e.g. smarter students study more/less, students who benefit more from studying study more/less) These are the two central problems that the tools we study on the course aim to overcome. 24 / 47

28 Inference A central goal in statistics is inference: to draw conclusions about things we cannot see from things that we can see. Statistical inference The process of drawing conclusions about features/properties of the population on the basis of a sample. Causal inference The process of drawing conclusions about features/properties of the full set of potential outcomes on the basis of some observed outcomes. Generally we will be attempting to do both: using a sample to draw conclusions about the full set of potential outcomes in the population. 25 / 47

29 Break

30 The Potential Outcomes Framework (Detail)

31 Potential Outcomes History Counterfactual definitions of causality have a long intellectual history: If a person eats of a particular dish, and dies in consequence, that is, would not have died if he had not eaten of it, people would be apt to say that eating of that dish was the cause of his death John Stuart Mill, 1843 Neyman (1923)[1990] introduced the potential outcomes notation for experiments. Fisher (1925) proposed randomizing treatments to units. Rubin (1974) then extended the potential outcomes framework ( Rubin Causal Model, Holland, 1986) to observational studies 26 / 47

32 Potential outcomes notation (I) Warning: All textbooks/articles use somewhat different notation for potential outcomes. We will be using the notation from Angrist and Pischke. Definition (Treatment) D_i: Indicator of treatment intake for unit i: D_i = 1 if unit i received the treatment, and D_i = 0 otherwise. D_i denotes the treatment (causal variable) for unit i e.g. D_i = 1: aspirin; D_i = 0: no aspirin D_i = 1: encouragement to vote; D_i = 0: no encouragement to vote Defined for the binary case, but will generalise to continuous treatments 27 / 47

33 Potential outcomes notation (II) Definition (Outcome) Y_i: Observed outcome variable of interest for unit i Definition (Potential Outcome) Y_0i and Y_1i: Potential outcomes for unit i: Y_di = Y_1i, the potential outcome for unit i with treatment (d = 1), or Y_0i, the potential outcome for unit i without treatment (d = 0) If D_i = 1, Y_0i is what would have been the outcome if D_i had been 0 If D_i = 0, Y_1i is what would have been the outcome if D_i had been 1 Potential outcomes are fixed attributes for each i and represent the outcome that would be observed hypothetically if i were treated/untreated 28 / 47

34 Potential outcomes notation (III) Given Y_0i and Y_1i, it is straightforward to define a causal effect. Definition (Causal Effect) For each unit i, the causal effect of the treatment on the outcome is defined as the difference between its two potential outcomes: τ_i ≡ Y_1i − Y_0i τ_i is the difference between two hypothetical states of the world One where i receives the treatment One where i does not receive the treatment 29 / 47

35 Potential outcomes notation (IV) Assumption The outcome (Y_i) is connected to the potential outcomes (Y_0i, Y_1i) via: Y_i = D_i Y_1i + (1 − D_i) Y_0i, so Y_i = Y_1i if D_i = 1, and Y_i = Y_0i if D_i = 0 A priori each potential outcome could be observed After treatment, one outcome is observed, the other is counterfactual Definition (Fundamental Problem of Causal Inference) We cannot observe both potential outcomes (Y_1i, Y_0i) for the same unit i 30 / 47
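A minimal R sketch of this "switching equation". The potential-outcome values below are hypothetical, chosen purely for illustration and not taken from the slides:

```r
# Hypothetical potential outcomes for four units (illustrative values only)
y1 <- c(3, 4, 0, 2)   # Y_1i: outcome each unit would have if treated
y0 <- c(1, 2, 0, 1)   # Y_0i: outcome each unit would have if untreated
d  <- c(1, 1, 0, 0)   # D_i: treatment indicator

# Switching equation: Y_i = D_i * Y_1i + (1 - D_i) * Y_0i
y_obs <- d * y1 + (1 - d) * y0
y_obs   # 3 4 0 1 -- only one potential outcome is ever revealed per unit
```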

36 No interference Assumption Assumption Recall that observed outcomes are realized as Y i = D i Y 1i + (1 D i ) Y 0i The non-interference assumption, implies that potential outcomes for unit i are unaffected by treatment assignment for unit j Also known as the Stable Unit Treatment Value Assumption (SUTVA) Rules out interference among units Examples: Effect of GOTV on spouse s turnout Effect of medication when patients share drugs 31 / 47

37 Illustration: Individual treatment effects (τ_i) Imagine a population with 4 units, where we observe both potential outcomes for each unit (a table with columns i, Y_i, D_i, Y_1i, Y_0i, τ_i). If we know both potential outcomes for each unit, we can easily estimate the Average Treatment Effect: τ_ATE ≡ E[Y_1i − Y_0i] = (1/N) Σ_{i=1}^N (Y_1i − Y_0i) = E[Y_1i] − E[Y_0i] In this example, ATE = 1.25 32 / 47
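A short R sketch of this calculation. The individual potential outcomes below are hypothetical values chosen to be consistent with the quantities quoted on these slides (ATE = 1.25, ATT = 2, difference in means = 3); they are not necessarily the numbers in the original table:

```r
# Hypothetical full table of potential outcomes for four units
y1  <- c(3, 4, 0, 2)   # Y_1i
y0  <- c(1, 2, 0, 1)   # Y_0i
tau <- y1 - y0         # individual effects tau_i = 2 2 0 1

mean(tau)              # ATE = 1.25
mean(y1) - mean(y0)    # equivalently E[Y_1i] - E[Y_0i] = 1.25
```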

38 Illustration: Fundamental Problem of Causal Inference We never actually observe both potential outcomes. In the table, for the treated units (i = 1, 2, with D_i = 1) we observe Y_1i but Y_0i and τ_i are unknown; for the control units (i = 3 and 4, with D_i = 0 and observed Y_0i of 0 and 1) Y_1i and τ_i are unknown. ATE = (? + ? + ? + ?) / 4 = ? Estimating individual effects or average effects requires observing unobservable potential outcomes and therefore cannot be accomplished without additional assumptions. 33 / 47

39 Illustration: Selection bias One intuitive approach is to make comparisons across units using the observed outcomes Y_i, i.e. compare the average observed outcome under treatment to the average observed outcome under control. However, recalling that ATE = 1.25, in the toy example: Difference in means = 3.5 − 0.5 = 3 so, in this example at least, Difference in means ≠ ATE 34 / 47

40 Statistics concepts and notation To proceed to a more general statement of selection bias, we will need some key building blocks of statistics Population: all the units of interest Sample: a subset of the units in the population Variable (e.g. Y_1i, X_i): a numerical measure Random Variable: a variable whose outcome is the result of a random process, e.g. the number on a six-sided die, the treatment status of unit i in a randomized experiment, or Y_1i for a unit randomly selected from the population 35 / 47

41 Expectations and conditional expectations Expectations: The expectation of a random variable X is denoted E[X], where E[X] ≡ Σ_x x Pr[X = x] This is the average outcome of the random variable, e.g. for a six-sided die E[X] = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5 Conditional expectations: The conditional expectation of a variable refers to the expectation of that variable amongst some subgroup e.g. E[Y_1i | D_i = 1] gives the conditional expectation of the random variable Y_1i when D_i = 1 36 / 47
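A quick R illustration of both ideas, using the die example and the hypothetical toy data from above (the potential-outcome values are assumptions, not from the slides):

```r
# Expectation of a fair six-sided die: E[X] = sum over x of x * Pr[X = x]
x  <- 1:6
px <- rep(1/6, 6)
sum(x * px)            # 3.5

# Conditional expectation: average of Y_1i within the subgroup with D_i = 1
y1 <- c(3, 4, 0, 2)    # hypothetical potential outcomes
d  <- c(1, 1, 0, 0)
mean(y1[d == 1])       # E[Y_1i | D_i = 1] = 3.5 in the toy data
```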

42 Statistical terms Parameter/Estimand: a fixed feature of the population (e.g. E[Y_1i]) Sample statistic: a random variable whose value depends on the sample (e.g. the sample mean, Ȳ) Estimator: any function of sample data used to estimate a parameter (e.g. (1/n) Σ_{i=1}^n Y_i) Unbiased estimator: A statistic is an unbiased estimator of a parameter if the estimate it produces is on average (i.e. across an infinite number of samples) equal to the parameter, e.g. E[(1/n) Σ_{i=1}^n Y_i] = E[Y_i] 37 / 47
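Unbiasedness can be seen by simulation. A minimal sketch in R, where the population, its mean, and the sample size are all arbitrary choices of mine:

```r
# Unbiasedness of the sample mean, checked across many repeated samples
set.seed(50)
population <- rnorm(100000, mean = 2, sd = 3)   # hypothetical population, E[Y_i] = 2

sample_means <- replicate(5000, mean(sample(population, size = 50)))
mean(sample_means)   # close to 2: averaging over samples recovers the parameter
```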

43 Common parameters/estimands/quantities of interest Definition (Average treatment effect) τ_ATE = E[Y_1i − Y_0i] Definition (Average treatment effect on the treated) τ_ATT = E[Y_1i − Y_0i | D_i = 1] Definition (Average treatment effect on the controls) τ_ATC = E[Y_1i − Y_0i | D_i = 0] Definition (Average treatment effects for subgroups) τ_ATEX = E[Y_1i − Y_0i | X_i = x] 38 / 47

44 Average treatment effect (ATE) Earlier, we said that an intuitive quantity of interest (i.e. the parameter) is the Average Treatment Effect: τ_ATE ≡ E[Y_1i − Y_0i] = E[Y_1i] − E[Y_0i] i.e. how outcomes would change, on average, if every unit were to go from untreated to treated (defined by potential outcomes). For a given sample, one obvious estimator of the ATE is the difference in group means (DIGM). Label units such that D_i = 1 for i ∈ {1, 2, ..., m} and D_i = 0 for i ∈ {m + 1, m + 2, ..., n}: DIGM ≡ (1/m) Σ_{i=1}^m Y_i − (1/(n − m)) Σ_{i=m+1}^n Y_i i.e. the difference in observed outcomes between treatment and control Key question: Is the DIGM an unbiased estimator for the ATE? 39 / 47
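A sketch of the DIGM estimator in R, written as a small helper function (the function name and data values are my own, not from the course materials):

```r
# Difference in group means: mean observed outcome under treatment
# minus mean observed outcome under control
digm <- function(y, d) mean(y[d == 1]) - mean(y[d == 0])

# Applied to the hypothetical toy data used above
y_obs <- c(3, 4, 0, 1)   # observed outcomes
d     <- c(1, 1, 0, 0)   # treatment indicator
digm(y_obs, d)           # 3
```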

45 Is the DIGM an unbiased estimator for the ATE? Remember our toy example (the table with columns i, Y_i, D_i, Y_1i, Y_0i, τ_i): τ_ATE = 1.25 DIGM = 3 Is this true generally? 40 / 47

46 Is the DIGM an unbiased estimator for the ATE? Let's redefine the DIGM using τ_i ≡ Y_1i − Y_0i:
DIGM = (1/m) Σ_{i=1}^m Y_1i − (1/(n − m)) Σ_{i=m+1}^n Y_0i
     = (1/m) Σ_{i=1}^m (Y_0i + τ_i) − (1/(n − m)) Σ_{i=m+1}^n Y_0i
     = (1/m) Σ_{i=1}^m τ_i + [(1/m) Σ_{i=1}^m Y_0i − (1/(n − m)) Σ_{i=m+1}^n Y_0i]
Taking expectations:
E[DIGM] = E[τ_i | D_i = 1] + {E[Y_0i | D_i = 1] − E[Y_0i | D_i = 0]} = τ_ATT + selection bias
where τ_ATT is the average treatment effect for the treated group 41 / 47
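The decomposition can be checked numerically. A hedged R sketch using the same hypothetical toy values as above (chosen to match the quantities quoted on these slides, not the original table):

```r
# Checking the decomposition DIGM = ATT + selection bias on hypothetical data
y1 <- c(3, 4, 0, 2)
y0 <- c(1, 2, 0, 1)
d  <- c(1, 1, 0, 0)

att  <- mean(y1[d == 1] - y0[d == 1])         # 2
bias <- mean(y0[d == 1]) - mean(y0[d == 0])   # 1

y_obs <- d * y1 + (1 - d) * y0
digm  <- mean(y_obs[d == 1]) - mean(y_obs[d == 0])

c(att + bias, digm)   # both equal 3
```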

47 Is the DIGM an unbiased estimator for the ATE? Remember our toy example: τ_ATE = 1.25 τ_ATT = 2 Selection bias = 1.5 − 0.5 = 1 τ_ATT + Bias = 2 + 1 = 3 = DIGM Implication: DIGM is an unbiased estimator of τ_ATE if 1. τ_ATT = τ_ATE and 2. E[Y_0i | D_i = 1] = E[Y_0i | D_i = 0], i.e. there is no selection bias 42 / 47

48 Does E[Y_0i | D_i = 1] normally equal E[Y_0i | D_i = 0]? Normally? No. Does job training improve employment outcomes? Participants are self-selected from a population of individuals in difficult labor situations Post-training period earnings for participants would be higher than those for nonparticipants even in the absence of the program i.e. E[Y_0i | D_i = 1] − E[Y_0i | D_i = 0] > 0 Does aspirin relieve headache symptoms? Those who take aspirin self-select from the population of individuals who are suffering from headaches Drug-takers would have worse headaches than non-drug-takers in the absence of pain relief i.e. E[Y_0i | D_i = 1] − E[Y_0i | D_i = 0] < 0 Implications 1. Selection into treatment is often associated with potential outcomes. 2. Selection bias can be positive or negative! 43 / 47

49 Assignment Mechanism To work out the treatment effects we are interested in, we need to know something about the potential outcomes that we do not observe In order to infer these unobservable outcomes, we make assumptions about the assignment mechanism Definition (Assignment Mechanism) The assignment mechanism is the procedure that determines which units are selected for treatment. Examples include: Random assignment Selection on observables Selection on unobservables 44 / 47
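As a preview of next week, a small simulation sketch of why the first of these assignment mechanisms matters: when D_i is assigned at random, selection bias disappears and the DIGM is unbiased for the ATE. All quantities below are simulated and purely illustrative:

```r
# Under random assignment, the DIGM is unbiased for the ATE
set.seed(50)
n  <- 1000
y0 <- rnorm(n)      # potential outcomes without treatment
y1 <- y0 + 2        # constant treatment effect of 2, so ATE = 2

digm_once <- function() {
  d     <- sample(rep(c(0, 1), each = n / 2))   # random assignment
  y_obs <- d * y1 + (1 - d) * y0
  mean(y_obs[d == 1]) - mean(y_obs[d == 0])
}

mean(replicate(2000, digm_once()))   # approximately 2: no selection bias
```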

50 Design Trumps Analysis Randomized experiments and comparative observational studies do not form a dichotomy, but rather are on a continuum (Rubin, 2008). [Figure: continuum running from randomized experiments to observational studies] Selection on observables: Regression, Matching, Weighting Selection on unobservables: Difference-in-Differences and synthetic control, Instrumental Variables, Regression Discontinuity Design 45 / 47

51 The value of potential outcomes The strength of [the potential outcomes framework] is that it allows us to make these assumptions more explicit than they usually are. When they are explicitly stated, the analyst can then begin to look for ways to evaluate or partially test them. Holland, 1986 46 / 47

52 Summing up Potential outcomes: Causality is defined by potential outcomes, not by observed outcomes FPOCI: We only ever observe one potential outcome per unit Inference: Statistical inference is learning about the population from a sample; causal inference is learning about the population of potential outcomes from the observed outcomes Selection bias: The difference in means is only an unbiased estimator for the ATE when there is no selection bias 47 / 47

53 Next week: Randomized Experiments
