Lecture Notes on Discrete Choice Models
Copyright, April 11, 2001, Jonathan Nagler

1 Topics

1. Review the Latent Variable Setup for Binary Choice
   - Logit
   - Likelihood for Logit
   - Probabilities
2. RUM: The Random Utility Model formulation of choice.
   - Flexible
   - A strong (or weak) model of behavior.
3. Multinomial Logit
   - Likelihood
   - Probabilities
4. Conditional Logit
   - A more general systematic component of the model
   - Probabilities are calculated the same as in MNL
5. Measuring the Effects of Changes in X:
   - Analogous to OLS

   - See: King, Unifying Political Methodology
   - See: King/Tomz/Wittenberg (1998, APSA Meeting).
   - See: Alvarez and Nagler 1995 ("Perot - 1992"; Table 4), first differences
6. Measuring the Effects of Changes in the Choices
   - See: Alvarez and Nagler 1998 ("Collide"; Table 5 - moving the Labour Party).
7. Goodness of Fit
   - Percent correctly predicted in the 2-choice case.
   - Baseline prediction in the 2-choice case.
   - Percent correctly predicted in the J-choice case: baseline prediction, classification schemes
8. Independence of Irrelevant Alternatives (IIA)
   - What is IIA
   - IIA does not aggregate
   - McFadden test for IIA
9. Multinomial Probit (MNP)
   - Does not impose IIA.
   - Need to put restrictions on \Sigma (the error covariance matrix).
   - Estimation via: Gauss, Gaussx, Limdep
   - Beyond 5 choices?

10. Some Equivalencies of Logit Models
    - MNL can be used to recover reduced-form CL estimates.
    - MNL is equivalent to Binary Logit (under IIA).
11. Scobit
    - What if the basic assumption about the shape of the response curve is wrong?
12. Heteroscedastic Probit
13. Selection Bias
    - Heckman
    - Dubin and Rivers

2 Latent Variable Setup: Binary Probit/Logit

y^* = \beta' x + \epsilon   (1)
y = 1 if y^* > 0   (2)
y = 0 if y^* \le 0   (3)

Now assume F is the cumulative distribution for \epsilon.

\Pr(y = 1) = \Pr(y^* > 0)   (4)
           = \Pr(\beta' x + \epsilon > 0)
           = \Pr(\epsilon > -\beta' x)
           = 1 - \Pr(\epsilon < -\beta' x)
           = 1 - F(-\beta' x)   (5)

If F is symmetric about 0,

\Pr(y = 1) = 1 - F(-\beta' x) = F(\beta' x)   (6)

If F is the logistic distribution this gives us logit; if F is the cumulative normal distribution we have probit. In either case, logit or probit would recover consistent estimates of the parameter \beta. The logistic distribution looks like:

F(x) = \frac{1}{1 + e^{-x}}   (7)
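The step from (5) to (6) uses only the symmetry of F. A minimal numerical check of that identity for the logistic CDF (the values of \beta'x below are arbitrary, chosen just for illustration):

```python
import numpy as np

def logistic_cdf(x):
    """Logistic CDF: F(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

xb = np.array([-2.0, -0.5, 0.0, 1.3, 3.0])                    # arbitrary values of beta'x
print(np.allclose(1 - logistic_cdf(-xb), logistic_cdf(xb)))   # True: 1 - F(-beta'x) = F(beta'x)
```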

So, if F is logistic:

\Pr(y = 1) = F(\beta' x) = \frac{1}{1 + e^{-\beta' x}} = \frac{e^{\beta' x}}{1 + e^{\beta' x}}

So, if we could estimate \beta, then we could compute the quantity of interest (P). We use maximum likelihood to compute \beta: we want to find \beta to maximize:

\Pr(Y \mid \beta, X)

In the simple case, we have two possibilities: y_i = 1 or y_i = 0.

\Pr(y_1, y_2, \ldots, y_N \mid \beta) = \prod_{y_i = 1} \Pr(Y_i = 1 \mid \beta, X) \prod_{y_i = 0} \Pr(Y_i = 0 \mid \beta, X)

We work with something similar to the above: the likelihood function (L). The following assumes F is symmetric, and substitutes F for \Pr(Y_i = 1):

L = \prod_{y_i = 1} F(X_i \beta \mid \beta) \prod_{y_i = 0} (1 - F(X_i \beta \mid \beta)) = \prod F(X_i \beta \mid \beta)^{y_i} \prod (1 - F(X_i \beta \mid \beta))^{1 - y_i}

We always work with log(L), the log-likelihood function:

LL = \sum y_i \ln(F(X_i \beta)) + \sum (1 - y_i) \ln(1 - F(X_i \beta))

So, we take the first derivatives of the above expression with respect to \beta, set them equal to 0, and solve for \hat{\beta}.
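In practice the first-order conditions have no closed-form solution, so \hat{\beta} is found numerically. A minimal sketch of maximizing the binary logit log-likelihood with numpy and scipy, on simulated data (all data and parameter values here are illustrative, not from the notes):

```python
import numpy as np
from scipy.optimize import minimize

def logit_loglik(beta, X, y):
    """Binary logit log-likelihood: LL = sum y*ln F(X b) + (1 - y)*ln(1 - F(X b))."""
    xb = X @ beta
    # ln F(xb) = -log(1 + exp(-xb)); ln(1 - F(xb)) = -log(1 + exp(xb))
    return np.sum(y * -np.log1p(np.exp(-xb)) + (1 - y) * -np.log1p(np.exp(xb)))

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept plus one covariate
beta_true = np.array([-0.5, 1.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

# Maximize LL by minimizing its negative
res = minimize(lambda b: -logit_loglik(b, X, y), x0=np.zeros(2), method="BFGS")
print(res.x)   # beta-hat; should be close to beta_true
```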

3 Random Utility Models (RUM)

Assume the i-th individual's utility of the j-th choice is given as:

U_{ij} = V_{ij} + \epsilon_{ij}   (8)

where V_{ij} is a systematic component of utility and \epsilon_{ij} is a stochastic component of utility. Assume that the i-th individual chooses choice j iff:

U_{ij} > U_{ik} \quad \forall k \ne j   (9)

Notice that \epsilon_{ij} is subscripted by i and j. We have one disturbance per respondent per choice.

Simple setup of V_{ij}:

V_{ij} = \beta_j X_{ij}   (10)

This model is very flexible: it allows for more than 2 choices. The model is a weak or strong model of behavior.

4 RUM Example 1: Multinomial Logit

V_{ij} = \psi_j X_i   (12)
V_{ij} = \psi_{j0} + \psi_{j1} pid_i + \psi_{j2} educ_i + \psi_{j3} ideology_i   (13)

V_{i1} = \psi_{10} + \psi_{11} pid_i + \psi_{12} educ_i + \psi_{13} ideology_i
V_{i2} = \psi_{20} + \psi_{21} pid_i + \psi_{22} educ_i + \psi_{23} ideology_i
V_{i3} = \psi_{30} + \psi_{31} pid_i + \psi_{32} educ_i + \psi_{33} ideology_i

So:

U_{i1} = \psi_{10} + \psi_{11} pid_i + \psi_{12} educ_i + \psi_{13} ideology_i + \epsilon_{i1}
U_{i2} = \psi_{20} + \psi_{21} pid_i + \psi_{22} educ_i + \psi_{23} ideology_i + \epsilon_{i2}
U_{i3} = \psi_{30} + \psi_{31} pid_i + \psi_{32} educ_i + \psi_{33} ideology_i + \epsilon_{i3}

The \epsilon are iid, Type I Extreme Value.

P_{i1} = \Pr[(U_{i1} > U_{i2}) \& (U_{i1} > U_{i3})]
       = \Pr[(V_{i1} + \epsilon_{i1} > V_{i2} + \epsilon_{i2}) \& (V_{i1} + \epsilon_{i1} > V_{i3} + \epsilon_{i3})]
       = \Pr[(\epsilon_{i2} - \epsilon_{i1} < V_{i1} - V_{i2}) \& (\epsilon_{i3} - \epsilon_{i1} < V_{i1} - V_{i3})]

Notice: \psi is indexed by j.

Normalization of One Set of \psi's

U_{i1} = \psi_1 A_i + \epsilon_{i1}
U_{i2} = \psi_2 A_i + \epsilon_{i2}
U_{i3} = \psi_3 A_i + \epsilon_{i3}

P_{i1} = \Pr[(U_{i1} > U_{i2}) \& (U_{i1} > U_{i3})]
       = \Pr[(\psi_1 A_i + \epsilon_{i1} > \psi_2 A_i + \epsilon_{i2}) \& (\psi_1 A_i + \epsilon_{i1} > \psi_3 A_i + \epsilon_{i3})]
       = \Pr[(\epsilon_{i2} - \epsilon_{i1} < (\psi_1 - \psi_2) A_i) \& (\epsilon_{i3} - \epsilon_{i1} < (\psi_1 - \psi_3) A_i)]
P_{i2} = \Pr[(\epsilon_{i1} - \epsilon_{i2} < (\psi_2 - \psi_1) A_i) \& (\epsilon_{i3} - \epsilon_{i2} < (\psi_2 - \psi_3) A_i)]
P_{i3} = \Pr[(\epsilon_{i1} - \epsilon_{i3} < (\psi_3 - \psi_1) A_i) \& (\epsilon_{i2} - \epsilon_{i3} < (\psi_3 - \psi_2) A_i)]

3 quantities:

\Psi_1 - \Psi_2 = X \qquad \Psi_1 - \Psi_3 = Y \qquad \Psi_2 - \Psi_3 = Z

But: Z = Y - X.

Example:

\Psi_1 = 7 \qquad \Psi_2 = 4 \qquad \Psi_3 = 0

Yields:

\Psi_1 - \Psi_2 = 3 \qquad \Psi_1 - \Psi_3 = 7 \qquad \Psi_2 - \Psi_3 = 4

Same result if:

\Psi_1 = 24 \qquad \Psi_2 = 21 \qquad \Psi_3 = 17

Yields:

\Psi_1 - \Psi_2 = 3 \qquad \Psi_1 - \Psi_3 = 7 \qquad \Psi_2 - \Psi_3 = 4
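The same point can be checked numerically with the MNL probability formula given in the next subsection: adding a constant to every \Psi_j leaves the choice probabilities unchanged. A minimal sketch (the value of A_i is arbitrary):

```python
import numpy as np

def mnl_probs(psi, a):
    """MNL probabilities P_j = exp(psi_j * a) / sum_k exp(psi_k * a); see Section 4.1."""
    v = psi * a
    ev = np.exp(v - v.max())          # subtracting the max changes nothing (same cancellation)
    return ev / ev.sum()

a = 2.0                               # an arbitrary scalar respondent characteristic
print(mnl_probs(np.array([7.0, 4.0, 0.0]), a))
print(mnl_probs(np.array([24.0, 21.0, 17.0]), a))   # identical: only the differences matter
```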

4.1 Probabilities

We do not prove the following here, but it is true:

P_{ij} = \frac{e^{V_{ij}}}{\sum_{k=1}^{J} e^{V_{ik}}}   (14)

Simple 2-choice case (normalizing \beta_2 = 0):

\Pr(Y_i = 1) = \frac{e^{\beta_1' X_i}}{e^{\beta_1' X_i} + e^{\beta_2' X_i}} = \frac{e^{\beta_1' X_i}}{e^{\beta_1' X_i} + 1}   (15)

This looks like binary logit:

F(x) = \frac{1}{1 + e^{-x}} = \frac{1}{1 + \frac{1}{e^x}} = \frac{1}{\frac{e^x + 1}{e^x}} = \frac{e^x}{e^x + 1}   (16)
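A small simulation sketch of equation (14): draw iid Type I Extreme Value (Gumbel) disturbances, let the respondent pick the highest-utility choice, and compare the choice frequencies to the analytic MNL probabilities. The systematic utilities below are hypothetical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

V = np.array([1.0, 0.5, -0.2])                 # hypothetical V_i1, V_i2, V_i3

# Analytic MNL probabilities: P_ij = exp(V_ij) / sum_k exp(V_ik)   (eq. 14)
p_analytic = np.exp(V) / np.exp(V).sum()

# Simulation: U_ij = V_ij + eps_ij, eps iid Type I Extreme Value; choose the max
draws = 200_000
eps = rng.gumbel(size=(draws, V.size))
chosen = np.argmax(V + eps, axis=1)
p_simulated = np.bincount(chosen, minlength=V.size) / draws

print(np.round(p_analytic, 3))
print(np.round(p_simulated, 3))                # close to the analytic probabilities
```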

Table 2: Multinomial Logit and Binomial Logit Estimates, British Election 1987 (Alvarez and Nagler 1998)

                         Conservative/Alliance            Labour/Alliance
                         MNL            BL                MNL            BL
Intercept               -4.33* (.74)   -4.40* (.76)        4.55* (.81)    5.26* (.86)
Defense                   .14* (.03)     .17* (.03)        -.17* (.03)    -.19* (.03)
Phillips Curve            .08* (.02)     .10* (.03)        -.03  (.03)    -.05  (.03)
Taxation                  .13* (.03)     .14* (.03)        -.06** (.03)   -.08* (.04)
National.                 .16* (.03)     .16* (.03)        -.16* (.03)    -.20* (.03)
Redist.                   .07* (.02)     .06* (.02)        -.08* (.03)    -.09* (.03)
Crime                     .08* (.03)     .08* (.03)         .02  (.02)     .02  (.02)
Welfare                   .11* (.02)     .12* (.02)        -.11* (.03)    -.10* (.03)
South                    -.12  (.16)    -.06  (.17)        -.41* (.21)    -.45* (.22)
Midlands                 -.26  (.17)    -.26  (.17)        -.12  (.21)    -.15  (.21)
North                    -.03  (.17)     .03  (.18)         .66* (.19)     .61* (.20)
Wales                    -.40  (.35)    -.41  (.36)        1.41* (.31)    1.46* (.33)
Scot                     -.36  (.25)    -.42** (.26)        .68* (.25)     .61* (.26)
Union                    -.50  (.16)    -.49* (.16)         .37* (.16)     .35* (.17)
Public Employee           .04  (.15)     .03  (.15)        -.05  (.16)     .03  (.16)
Blue Collar               .09  (.15)     .14  (.16)         .70* (.17)     .80* (.17)
Gender                    .29* (.14)     .33* (.14)         .04  (.15)    -.03  (.16)
Age                       .03  (.05)     .03  (.05)        -.21* (.05)    -.24* (.05)
Homeowner                 .31** (.18)    .26  (.18)        -.55* (.17)    -.52* (.17)
Income                    .07* (.03)     .07* (.03)        -.05  (.03)    -.07* (.03)
Education                -.81* (.31)    -.92* (.31)        -.54  (.35)    -.65** (.36)
Inflation                 .28* (.10)     .31* (.11)        -.00  (.12)     .05  (.12)
Taxes                     .02  (.06)    -.04  (.07)        -.11  (.07)    -.15* (.07)
Unempl.                   .30* (.06)     .30  (.06)         .04  (.07)     .08  (.08)
Number of Observations   2131           1494               2131           1172
Log Likelihood          -1500.8        -734.71            -1500.8        -588.27

Standard errors in parentheses. * indicates significance at the 95% level; ** indicates significance at the 90% level.

Consider just a 2-choice comparison:

U_{i1} = \beta_1' X_i + \epsilon_{i1}   (17)
U_{i2} = \beta_2' X_i + \epsilon_{i2}   (18)

\Pr(U_{i2} > U_{i1}) = \Pr(\beta_1' X_i + \epsilon_{i1} < \beta_2' X_i + \epsilon_{i2}) = \Pr(\epsilon_{i1} - \epsilon_{i2} < (\beta_2' - \beta_1') X_i)   (19)

Back to the latent variable model:

\Pr(Y_i = 1) = \Pr(u_i < \beta' X_i)   (20)

So:

u_i \approx \epsilon_{i1} - \epsilon_{i2}, \qquad \beta \approx \beta_2 - \beta_1   (21)

If we assume the \epsilon_{ij} are independent, identically distributed with the Type I Extreme Value distribution, then \epsilon_{i1} - \epsilon_{i2} is logistically distributed.

F(\epsilon_{ij}) = \exp(-e^{-\epsilon_{ij}})   (22)

Assume \beta_2 = 0. This is just a normalization; we could assume \beta_2 = 17. The only thing that matters is U_{i1} - U_{i2}.
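A minimal simulation check of the claim that the difference of two iid Type I Extreme Value draws is logistically distributed, comparing the empirical CDF of the difference with scipy's standard logistic CDF:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two independent Type I Extreme Value (Gumbel) disturbances per observation
e1 = rng.gumbel(size=100_000)
e2 = rng.gumbel(size=100_000)
diff = e1 - e2

# Compare the empirical CDF of the difference with the standard logistic CDF
grid = np.linspace(-4, 4, 9)
ecdf = np.array([(diff <= g).mean() for g in grid])
print(np.round(ecdf, 3))
print(np.round(stats.logistic.cdf(grid), 3))   # should be very close
```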

5 Conditional Logit

U_{ij} = \psi_j' A_i + \beta' X_{ij} + \epsilon_{ij}   (23)

where:
- U_{ij} = utility of the i-th respondent for the j-th alternative.
- A_i = characteristics of the i-th respondent.
- X_{ij} = characteristics of the j-th alternative relative to the i-th respondent.
- \psi_j = a vector of parameters relating the characteristics of a respondent to the respondent's utility for the j-th alternative.
- \beta = a vector of parameters relating the relationship between the respondent and the alternative (X_{ij}) to the respondent's utility for the alternative.
- \epsilon_{ij} = random disturbance for the i-th respondent for the j-th alternative; iid, Type I Extreme Value.

Notice: \psi_j varies across choices. Both conditional logit and multinomial logit models assume that the disturbances \epsilon_{ij} are independent across alternatives.

6 RUM Example 2: Conditional Logit

V_{ij} = \beta_1 X_{ij} + \psi_j A_i
V_{ij} = \beta_1 issuedist_{ij} + \psi_{j0} + \psi_{j1} pid_i + \psi_{j2} educ_i

V_{i1} = \beta_1 issuedist_{i1} + \psi_{10} + \psi_{11} pid_i + \psi_{12} educ_i
V_{i2} = \beta_1 issuedist_{i2} + \psi_{20} + \psi_{21} pid_i + \psi_{22} educ_i
V_{i3} = \beta_1 issuedist_{i3} + \psi_{30} + \psi_{31} pid_i + \psi_{32} educ_i

So:

U_{i1} = \beta_1 issuedist_{i1} + \psi_{10} + \psi_{11} pid_i + \psi_{12} educ_i + \epsilon_{i1}
U_{i2} = \beta_1 issuedist_{i2} + \psi_{20} + \psi_{21} pid_i + \psi_{22} educ_i + \epsilon_{i2}
U_{i3} = \beta_1 issuedist_{i3} + \psi_{30} + \psi_{31} pid_i + \psi_{32} educ_i + \epsilon_{i3}

Table 4: Conditional Logit Estimates, British Election 1987

                         Conservative/Alliance    Labour/Alliance
Defense^a                 -.18* (.02)
Phillips Curve            -.11* (.02)
Taxation                  -.16* (.02)
National.                 -.18* (.02)
Redist.                   -.08* (.02)
Crime                     -.10* (.05)
Welfare                   -.14* (.02)
Intercept                  .82  (.69)              2.53* (.75)
South                     -.15  (.17)              -.44* (.21)
Midlands                  -.29** (.17)              .19  (.20)
North                     -.06  (.18)               .64* (.19)
Wales                      .48  (.36)              1.3*  (.31)
Scot                      -.41  (.25)               .69* (.25)
Union                     -.50* (.16)               .37* (.16)
Public Employee            .09  (.15)              -.02  (.16)
Blue Collar                .11  (.15)               .70* (.16)
Gender                     .28* (.14)               .00  (.15)
Age                        .02  (.05)              -.22* (.05)
Homeowner                  .37* (.18)              -.54* (.16)
Income                     .07* (.03)              -.06  (.03)
Education                 -.82* (.32)              -.61** (.35)
Inflation                  .28* (.10)              -.03  (.11)
Taxes                      .01  (.07)              -.10  (.07)
Unempl.                    .28* (.06)               .01  (.07)
N                         2131
Log Likelihood           -1477.6

^a The seven issue variables are the absolute value of the distance from the respondent to the mean party position.
Standard errors in parentheses. * indicates significance at the 95% level; ** indicates significance at the 90% level.

6.1 Probabilities

We do not prove the following here, but it is true:

P_{ij} = \frac{e^{\beta' X_{ij} + \psi_j' A_i}}{\sum_{k=1}^{J} e^{\beta' X_{ik} + \psi_k' A_i}} = \frac{e^{V_{ij}}}{\sum_{k=1}^{J} e^{V_{ik}}}

Same probabilities as MNL. Just another form of MNL.
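A minimal sketch of computing these probabilities for a single respondent. All parameter values and data below are hypothetical: one alternative-specific variable (issue distance) and A_i = (intercept, pid, educ), with \psi_1 normalized to zero as in the earlier discussion:

```python
import numpy as np

def cl_probs(beta, X, psi, a):
    """Conditional logit probabilities for one respondent:
    P_ij = exp(beta'X_ij + psi_j'A_i) / sum_k exp(beta'X_ik + psi_k'A_i)."""
    V = X @ beta + psi @ a          # systematic utility for each of the J alternatives
    ev = np.exp(V - V.max())        # subtract the max for numerical stability
    return ev / ev.sum()

beta = np.array([-0.15])                              # coefficient on issue distance
X = np.array([[1.2], [0.4], [2.0]])                   # issuedist_i1, issuedist_i2, issuedist_i3
psi = np.array([[0.0, 0.0, 0.0],                      # psi_1 normalized to zero
                [0.8, 0.5, -0.2],
                [-0.3, 0.1, 0.4]])
a = np.array([1.0, 2.0, 3.0])                         # (intercept, pid_i, educ_i)

print(cl_probs(beta, X, psi, a))                      # sums to 1 across the three alternatives
```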

7 Goodness of Fit, Predicted Values, Classification

There is no R^2. Pseudo-R^2: even worse than R^2.

Compute \hat{P}_{ij}. Then go from \hat{P}_{ij} to \hat{Y}_i.

Classification rule (binomial): \hat{Y}_i = 1 if \hat{P}_i > .5

Percent Correctly Predicted: a "correct prediction" is \hat{Y}_i = 1 and Y_i = 1, or \hat{Y}_i = 0 and Y_i = 0.

PCP = \frac{100 \times (\# \text{ of correct predictions})}{N}

PMC = Percent in Modal Category

7.1 PRE [Proportional Reduction in Error]

PRE = \frac{PCP - PMC}{1 - PMC}

Example 1: PCP = .85, PMC = .80

PRE = \frac{.85 - .80}{1 - .80} = \frac{.05}{.20} = .25

Example 2: PCP = .75, PMC = .50

PRE = \frac{.75 - .50}{1 - .50} = \frac{.25}{.50} = .50

Classification rule (multinomial): If you require \hat{P}_{ij} > .5 for \hat{y}_i = j, then you may not classify some observations. So: \hat{y}_i = j if \hat{P}_{ij} > \hat{P}_{ik} \; \forall k \ne j.
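A minimal sketch of PCP, PMC, and PRE for the binary case, using hypothetical predicted probabilities and outcomes:

```python
import numpy as np

def pcp_pmc_pre(p_hat, y):
    """Percent correctly predicted, percent in modal category, and PRE (binary case)."""
    y_hat = (p_hat > 0.5).astype(int)          # classification rule: predict 1 if P-hat > .5
    pcp = (y_hat == y).mean()                  # share of correct predictions
    pmc = max(y.mean(), 1 - y.mean())          # share in the modal (most common) category
    pre = (pcp - pmc) / (1 - pmc)              # proportional reduction in error
    return pcp, pmc, pre

# Hypothetical predicted probabilities and observed outcomes
p_hat = np.array([0.9, 0.8, 0.3, 0.6, 0.2, 0.7, 0.4, 0.1])
y     = np.array([1,   1,   0,   1,   0,   0,   0,   0])
print(pcp_pmc_pre(p_hat, y))
```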

7.2 Unevenly Distributed Data

If the data have a very skewed distribution (90% 0's, 10% 1's), you may never predict 1 as an outcome. This is common in multinomial cases where one category may be relatively rare.

A useful table (hypothetical numbers):

                        Observed
Predicted          Cons    Labour   Alliance   Total
Cons                300        25         10     335
Labour                5       250         65     320
Alliance             15        10        100     125
Total               320       285        175     780

This table has all the information (except uncertainty). We can see that we do not predict votes for Alliance very well.
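A minimal sketch of building such a predicted-vs-observed cross-tab from multinomial predicted probabilities, using the classify-by-highest-\hat{P} rule (all values hypothetical):

```python
import numpy as np

def classification_table(p_hat, y, J):
    """Cross-tab of predicted category (rows) vs observed category (columns).
    p_hat: N x J matrix of predicted probabilities; y: observed category in 0..J-1."""
    y_hat = np.argmax(p_hat, axis=1)           # multinomial rule: predict the highest P-hat
    table = np.zeros((J, J), dtype=int)
    for pred, obs in zip(y_hat, y):
        table[pred, obs] += 1
    return table

# Hypothetical predicted probabilities for 5 respondents over 3 choices
p_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.6, 0.3],
                  [0.3, 0.3, 0.4],
                  [0.5, 0.4, 0.1],
                  [0.2, 0.5, 0.3]])
y = np.array([0, 1, 2, 1, 1])
print(classification_table(p_hat, y, J=3))
```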

7.3 ePCP (Herron 1999)

Problem: We treat \hat{P}_i = .51 and Y_i = 1 the same as \hat{P}_i = .95 and Y_i = 1. But we should give more credit for the latter prediction: it is a `better' prediction.

Solution: "Expected PCP".

ePCP = \frac{1}{N} \left( \sum_{Y_i = 1} \hat{P}_i + \sum_{Y_i = 0} (1 - \hat{P}_i) \right)

Example:

Y_i   \hat{P}_i
 0      .6
 1      .6
 1      .8

ePCP = (1/3)(.4 + .6 + .8) = .6

7.4 ePCP - Multinomial

ePCP = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{J} \hat{P}_{ij} \, \mathbf{1}(y_i = j)

For J = 3 choices:

ePCP = \frac{1}{N} \left( \sum_{Y_i = 1} \hat{P}_{i1} + \sum_{Y_i = 2} \hat{P}_{i2} + \sum_{Y_i = 3} \hat{P}_{i3} \right)
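A minimal sketch of both versions of ePCP. The binary example reproduces the one in the notes; the multinomial data are hypothetical (categories coded 0..J-1 rather than 1..J):

```python
import numpy as np

def epcp_binary(p_hat, y):
    """Expected PCP (Herron 1999), binary case:
    average of P-hat for the 1's and (1 - P-hat) for the 0's."""
    return np.mean(np.where(y == 1, p_hat, 1 - p_hat))

def epcp_multinomial(p_hat, y):
    """Expected PCP, multinomial case: average predicted probability
    assigned to each respondent's observed category."""
    return np.mean(p_hat[np.arange(len(y)), y])

# The binary example from the notes: Y = (0, 1, 1), P-hat = (.6, .6, .8)
print(epcp_binary(np.array([0.6, 0.6, 0.8]), np.array([0, 1, 1])))   # 0.6

# A hypothetical multinomial example with 3 choices coded 0, 1, 2
p_hat = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.7, 0.2],
                  [0.2, 0.2, 0.6]])
y = np.array([0, 1, 2])
print(epcp_multinomial(p_hat, y))   # (0.5 + 0.7 + 0.6) / 3
```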

8 Computing Effects of Changes in Characteristics of a Respondent

P_{ij} = \frac{e^{\beta' X_{ij} + \psi_j' A_i}}{\sum_{k=1}^{J} e^{\beta' X_{ik} + \psi_k' A_i}}   (24)

Say we want to know what happens if a_i were to change to \tilde{a}_i; say a_i were to increase by z units. [a_i is a particular element of the vector A_i.]

1. Compute: \tilde{a}_i = a_i + z
2. Compute: \tilde{V}_{ij} = \beta' X_{ij} + \psi_j' \tilde{A}_i
3. Compute \tilde{P}_{ij}
4. Compute: \tilde{P}_{ij} - \hat{P}_{ij}

The last difference is the quantity of interest. We could also compute this for everyone in the sample, and then compute the mean of \tilde{P}_{ij} - \hat{P}_{ij}, and get the effect of all respondents changing their taste on characteristic a by z units.
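A minimal sketch of this first-difference calculation for one respondent, reusing the conditional logit probability function from above with hypothetical parameter values and data; here the changed element of A_i is the respondent's pid:

```python
import numpy as np

def cl_probs(beta, X, psi, A):
    """Conditional logit probabilities for one respondent (eq. 24)."""
    V = X @ beta + psi @ A
    ev = np.exp(V - V.max())
    return ev / ev.sum()

# Hypothetical estimates and data for one respondent, J = 3 choices
beta = np.array([-0.15])
X = np.array([[1.2], [0.4], [2.0]])            # issue distances to the three choices
psi = np.array([[0.0, 0.0, 0.0],
                [0.8, 0.5, -0.2],
                [-0.3, 0.1, 0.4]])
A = np.array([1.0, 2.0, 3.0])                  # (intercept, pid_i, educ_i)

# First difference: increase the respondent's pid by z = 1 unit and recompute
z = 1.0
A_tilde = A.copy()
A_tilde[1] += z                                # a_i -> a_i + z for the pid element

p_hat = cl_probs(beta, X, psi, A)
p_tilde = cl_probs(beta, X, psi, A_tilde)
print(p_tilde - p_hat)                         # the quantity of interest
```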

Table 4 (Alvarez and Nagler 1995, AJPS): Effects of Economics, Issues, and Anger in the 1992 Election

                         Probability of Voting For:
                         Bush     Clinton   Perot
Personal Finances
  Better                 0.42     0.31      0.27
  Worse                  0.35     0.35      0.29
  Difference             0.07    -0.05     -0.02
National Economy
  Better                 0.54     0.19      0.27
  Worse                  0.24     0.49      0.27
  Difference             0.29    -0.30      0.00
Voter Ideology^a
  Near                   0.46     0.39      0.31
  Far                    0.32     0.28      0.20
  Difference             0.14     0.11      0.12
Minorities
  Assist.                0.30     0.48      0.22
  No Assist.             0.46     0.20      0.33
  Difference            -0.17     0.27     -0.11
Abortion
  Pro-Life               0.62     0.22      0.16
  Pro-Choice             0.28     0.38      0.34
  Difference             0.34    -0.16     -0.18
Term Limits
  For                    0.39     0.33      0.28
  Against                0.38     0.32      0.30
  Difference             0.01     0.01     -0.02

Note: Table entries are the predicted probabilities of a hypothetical individual voting for Bush, Clinton, or Perot based on different values of the row variable.
^a Probabilities for each of the candidates in the voter-ideology row are based on the ideological distance between the voter and the particular candidate.

9 Computing Effects of Changes in Characteristics of an Alternative

\hat{P}_{ij} = \frac{e^{\hat{\beta}' X_{ij} + \hat{\psi}_j' A_i}}{\sum_{k=1}^{J} e^{\hat{\beta}' X_{ik} + \hat{\psi}_k' A_i}}   (25)

We want to alter X_{ij} and recompute \hat{P}_{ij}. Say:

X_{ij} = (resp_i - choice_j)   (26)

We want to know what happens if the j-th choice `moves' z units to the right.

1. Set: \widetilde{choice}_j = choice_j + z
2. Compute: \tilde{X}_{ij} = (resp_i - \widetilde{choice}_j)
3. Compute: \tilde{V}_{ij} = \hat{\beta}' \tilde{X}_{ij} + \hat{\psi}_j' A_i
4. Compute \tilde{P}_{ij}
5. Compute: \tilde{P}_{ij} - \hat{P}_{ij}

The last difference is the quantity of interest.
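A minimal sketch of the mechanics of moving one alternative, with purely hypothetical positions and parameters; X_{ij} is the signed difference of equation (26), and the respondent-characteristic terms are suppressed for brevity:

```python
import numpy as np

def cl_probs(beta, X, psi, A):
    """Conditional logit probabilities for one respondent (eq. 25)."""
    V = X @ beta + psi @ A
    ev = np.exp(V - V.max())
    return ev / ev.sum()

# Hypothetical values: respondent position and three choice positions on one issue
resp, choice = 0.5, np.array([-1.0, 0.0, 1.5])
beta = np.array([-0.15])                       # taste parameter on (resp - choice)
psi = np.zeros((3, 1)); A = np.array([1.0])    # respondent terms suppressed for brevity

X = (resp - choice).reshape(-1, 1)             # X_ij = (resp_i - choice_j), eq. 26
p_hat = cl_probs(beta, X, psi, A)

# Move choice j = 2 (index 1) z units to the right and recompute
z = 0.5
choice_tilde = choice.copy(); choice_tilde[1] += z
X_tilde = (resp - choice_tilde).reshape(-1, 1)
p_tilde = cl_probs(beta, X_tilde, psi, A)

print(p_tilde - p_hat)                         # the quantity of interest
```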

Table 5 (Alvarez and Nagler 1998, AJPS): Conditional Logit Estimates of the Effect of Movement by the Labour Party of +/- 1/2 Standard Deviation, British Election 1987

                        Conservatives   Labour   Alliance
Baseline                     45.2        29.5      25.3
Defense
  - 1/2 sd                   45.7        28.3      26.0
  + 1/2 sd                   44.7        30.6      24.8
  Difference                 -1.0         2.3      -1.3
Phillips
  - 1/2 sd                   45.2        29.7      25.2
  + 1/2 sd                   45.3        29.0      25.7
  Difference                  0.2        -0.7       0.6
Taxation
  - 1/2 sd                   45.6        28.6      25.8
  + 1/2 sd                   45.1        29.4      25.5
  Difference                 -0.5         0.8      -0.3
Nationalization
  - 1/2 sd                   45.9        27.7      26.4
  + 1/2 sd                   44.6        30.8      24.6
  Difference                 -1.3         3.1      -1.8
All Issues
  - 1/2 sd                   47.1        24.9      28.0
  + 1/2 sd                   43.5        31.8      24.7
  Difference                 -3.6         6.8      -3.2

Note: Estimated impact of the Labour Party moving from one half a standard deviation to the left of its mean perceived position to one half a standard deviation to the right of its mean perceived position on each of seven issues. Column entries are estimated aggregate vote shares.