Course Notes: Statistical and Econometric Methods


Professor Fred Mannering
Purdue University
Spring 2007

Table of Contents

Review of Statistical Methods (Estimators and their statistical properties)
    Model Estimation
    Properties of Estimators
        Bias
        Efficiency
        Consistency
        Other Asymptotic Properties
    Least Squares and Maximum Likelihood Estimation
        Properties of Least Squares Estimators
        Maximum Likelihood Estimation
    Specification Issues and Least Squares
        Specification Error
        Non-zero Disturbance Mean
        Errors in Variables
        Correlation Between Explanatory Variables and Disturbances
        Selectivity Bias
        Non-normality of Disturbances
        Heteroskedasticity
        Serial Correlation
        Multicollinearity
    Simultaneous Equation Models
        Reduced Form and the Identification Problem
        The Identification Problem
        Order Condition
        Simultaneous Equation Estimation
            Single equation methods
            System equation methods
        A note on generalized least squares estimation
    Hypothesis Testing and Diagnostics for Continuous Dependent Variable Models
        Assessment of Estimated Coefficients
        Overall Model Assessment
    Count Data Models
        Poisson Regression Model Goodness-of-Fit Measures
        Truncated Poisson Regression Model
        Negative Binomial Regression Model
        Zero-Inflated Poisson and Negative Binomial Regression Models
Discrete Outcome Models (Models of Discrete Data)
    Binary and Multinomial Probit Models
    Multinomial Logit Model
        Indirect Utility
        Properties and Estimation of Multinomial Logit Models
        Statistical Evaluation
        Interpretation of Findings
            Elasticity
            Cross-elasticity
            Marginal Rates of Substitution (MRS)
        Specification Errors
            Independence of Irrelevant Alternatives (IIA) property
            Other Specification Errors
        Endogeneity in Discrete Outcome Models
        Data Sampling
        Forecasting and Aggregation Bias
        Transferability
    The Nested Logit Model (Generalized Extreme Value Models)
    Special Properties of Logit Models
        Sub-sampling of alternate outcomes for model estimation
        Compensating Variation
    Models of Ordered Discrete Data
Discrete/Continuous Models
    The Discrete/Continuous Modeling Problem
    Econometric Corrections: Instrumental Variables and Expected Value Method
    Econometric Corrections: Selectivity-Bias Correction Term
    Discrete/Continuous Model Structures
        Reduced form approach
        Economic consistency approach
Duration Models
    Hazard-Based Duration Models
        Proportional hazards
        Accelerated lifetime
    Characteristics of Duration Data
        Tied Data
    Non-Parametric Models
    Semi-Parametric Models
    Fully-Parametric Models
        Exponential
        Weibull
        Log-logistic
    Comparisons of Non-Parametric, Semi-Parametric, and Fully-Parametric Models
    State Dependence
    Time-Varying Covariates
    Discrete-Time Hazard Models
Course Assignments

Review of Statistical Methods (Estimators and their statistical properties)

Model Estimation

Consider a model of household vehicle miles of travel (household miles driven over some time period):

    y_t = β_0 + β_1·x_t + ε_t

where y_t is the dependent variable, x_t is the independent variable, β_0 and β_1 are estimable parameters, and ε_t is the disturbance or error term. The estimation problem is one of finding values for β_0 and β_1.

Properties of Estimators

Classes of properties:
- Small sample: hold for any size sample.
- Asymptotic: hold only in the limit as n → ∞.

Bias

It is desirable for the estimator's distribution to have a mean value equal to the true parameter. Unbiasedness is defined as E(β̂) = β.
- Small-sample unbiasedness: E(β̂_n) = β for every n.
- Asymptotic unbiasedness: lim_{n→∞} E(β̂_n) = β.

In general, bias is defined as:

    Bias = E(β̂) − β

[Figure: illustration of biased estimators — one sampling distribution with E(β̂) = β and one with E(β̂) ≠ β.]

Efficiency

Efficiency is a small-sample property. One estimator is more efficient than another if it has the smaller variance (both estimators must be unbiased for the comparison to be meaningful): e.g., β̂_1 is more efficient than β̂_2 if VAR(β̂_1) < VAR(β̂_2). The best unbiased estimator is the most efficient among all unbiased estimators.

The most efficient estimator is defined as having a smaller variance than any other unbiased estimator.

[Figure: illustration of efficient estimators — the sample mean X̄ ~ N(μ_x, σ²/n) is concentrated around μ_x, while a single observation X_1 ~ N(μ_x, σ²) is more dispersed; E(X̄) = μ_x.]

Identification of the best unbiased (most efficient) estimator is achieved by the Cramér-Rao theorem. Under a number of assumptions, it can be shown that for all unbiased estimators:

    VAR(β̂) ≥ [ −E( ∂²LN(L)/∂β² ) ]^(−1)

An estimator is proven most efficient if VAR(β̂) equals this Cramér-Rao bound.

Consistency

Consistency is an asymptotic property. Definition: a consistent estimator has a distribution that collapses on the true parameter value as the sample size increases.

β̂ converges to β in the probability limit if, for any δ > 0, lim_{n→∞} Prob(|β̂ − β| > δ) = 0. Also, lim_{n→∞} VAR(β̂) = 0.

[Figure: probability density f(X*) of the estimator for sample sizes n_4 > n_3 > n_2 > n_1, collapsing on μ_x as the sample size increases.]

Note: consistent estimators can be biased and inefficient; therefore, consistency is not a strong property.

Other Asymptotic Properties

We would like to show that the estimator's distribution can be approximated better and better as the sample size increases.

1. Asymptotically normal: the estimator's distribution converges to a normal distribution.
2. Asymptotically efficient: β̂_n is asymptotically efficient if β̂_n is consistent and its asymptotic variance is smaller than the asymptotic variance of all other consistent estimators.
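The collapsing distributions in the consistency definition are easy to visualize by simulation. A minimal sketch (assuming only NumPy; the true mean, variance, sample sizes, and seed are illustrative choices, not from the notes) using the sample mean as the estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0  # true population parameters (illustrative values)

# For each sample size, draw many samples and record the spread of the
# sample-mean estimator; a consistent estimator's spread shrinks with n.
for n in [10, 100, 1000, 10000]:
    means = rng.normal(mu, sigma, size=(2000, n)).mean(axis=1)
    print(f"n={n:6d}  mean of estimates={means.mean():.4f}  "
          f"std of estimates={means.std():.4f}")
```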

Models with Continuous Dependent Variables

Least Squares and Maximum Likelihood Estimation

Least Squares Estimation

The object of least squares is to fit an equation that minimizes the squared differences between equation-predicted values and observed values (i.e., the data). The objective function is:

    Min Σ_i (Y_i − Ŷ_i)²

where Y_i are the actual observations and Ŷ_i are the fitted values. The term Y_i − Ŷ_i is referred to as the residual and is denoted ε_i.

For the case Y_i = β_0 + β_1·x_i, it can be shown that:

    β̂_1 = [ n·Σx_iY_i − Σx_i·ΣY_i ] / [ n·Σx_i² − (Σx_i)² ] = Σ(x_i − x̄)(Y_i − Ȳ) / Σ(x_i − x̄)²

and

    β̂_0 = Ȳ − β̂_1·x̄

For the case of many independent variables, least squares estimation is represented in matrix form as:

    β̂ = (X'X)^(−1) (X'Y)

where β̂ = [β̂_1, β̂_2, β̂_3, …, β̂_K]' is the vector of parameter estimates; K is the number of independent variables (x's); ' indicates a transposed matrix; −1 indicates matrix inversion; X is the N × K matrix of independent variables; and Y is the N × 1 vector of the dependent variable.
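As a concrete illustration of the matrix formula, a minimal sketch (assuming NumPy and simulated data; the true parameter values are illustrative) that computes β̂ = (X'X)⁻¹X'Y directly:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
x = rng.uniform(0, 10, N)
y = 2.0 + 0.5 * x + rng.normal(0, 1, N)  # true beta0 = 2.0, beta1 = 0.5

X = np.column_stack([np.ones(N), x])          # N x K design matrix (with intercept)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # solves (X'X) b = X'y, i.e. (X'X)^-1 X'y
print(beta_hat)                               # estimates of [beta0, beta1]
```

Using np.linalg.solve rather than an explicit matrix inverse is a standard numerical choice; it computes the same estimator.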

Properties of Least Squares Estimators

Under very general assumptions, the Gauss-Markov theorem demonstrates that the least squares estimators (OLS, Ordinary Least Squares) are BLUE: Best Linear Unbiased Estimator (implying unbiased and efficient).

Assumptions required to prove OLS is BLUE:

A1. Normality: the disturbance term ε_i is normally distributed.
A2. Zero mean: E(ε_i) = 0.
A3. Homoskedasticity: disturbance terms have the same variance, E(ε_i²) = σ².
A4. Serial independence: disturbance terms are not correlated, E(ε_i·ε_j) = 0 for all i ≠ j.
A5. Non-stochastic X: X is not random and has fixed values in repeated samples.

Maximum Likelihood Estimation

Principle: different statistical populations generate different samples; any one sample is more likely to have come from some populations than from others.

Example: if we have a sample of Y_1, Y_2, …, Y_n, we want to find the value of β̂ most likely to have generated this sample.

[Figure: a sample of observations Y_1, …, Y_6 and candidate population distributions that might have generated them.]

Consider the simple model Y_i = β_0 + β_1·x_i + ε_i. Assume (as in OLS) that Y_i is normally distributed with mean β_0 + β_1·x_i and variance σ². The probability distribution can then be written as:

    P(Y_i) = [1/(√(2π)·σ)] · EXP[ −(1/(2σ²))·(Y_i − β_0 − β_1·x_i)² ]

The likelihood function is:

    L(Y_1, Y_2, …, Y_N | β_0, β_1, σ²) = P(Y_1)·P(Y_2) ⋯ P(Y_N) = [1/(2πσ²)]^(N/2) · EXP[ −(1/(2σ²))·Σ_{i=1}^{N} (Y_i − β_0 − β_1·x_i)² ]

where the product Π runs over the N factors.

For simplicity, work is done with the logarithm of L rather than L itself. This is acceptable because L is always non-negative and the logarithmic function is monotonic (preserves ordering). Maximizing LN(L), denoted LL, with respect to β_0, β_1, and σ² gives:

    ∂(LL)/∂β_0 = (1/σ²)·Σ(Y_i − β_0 − β_1·x_i) = 0
    ∂(LL)/∂β_1 = (1/σ²)·Σ x_i·(Y_i − β_0 − β_1·x_i) = 0
    ∂(LL)/∂σ² = −N/(2σ²) + (1/(2σ⁴))·Σ(Y_i − β_0 − β_1·x_i)² = 0

Solving these equations gives:

    β̂_0 = Ȳ − β̂_1·x̄   and   β̂_1 = Σ(x_i − x̄)(Y_i − Ȳ) / Σ(x_i − x̄)²

which is equivalent to the OLS estimators. However, in general, MLEs are not necessarily BLUE.

Properties of MLEs (Maximum Likelihood Estimators):
1) They are consistent.
2) They are asymptotically normal.
3) They are asymptotically efficient (i.e., asymptotic variance equals the Cramér-Rao bound).

Note: maximum likelihood estimators are not generally unbiased or efficient in small samples.
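The equivalence with OLS can be checked numerically. A sketch (assuming SciPy, and the same simulated data as the least squares sketch above) that maximizes the normal log-likelihood directly:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 200
x = rng.uniform(0, 10, N)
y = 2.0 + 0.5 * x + rng.normal(0, 1, N)

def neg_ll(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)       # parameterize via log(sigma) to keep sigma > 0
    resid = y - b0 - b1 * x
    # Normal LL: -N/2 * ln(2*pi*sigma^2) - sum(resid^2) / (2*sigma^2)
    ll = -0.5 * N * np.log(2 * np.pi * sigma**2) - (resid @ resid) / (2 * sigma**2)
    return -ll                      # minimize the negative log-likelihood

res = minimize(neg_ll, x0=[0.0, 0.0, 0.0])
print(res.x[:2])                    # beta estimates match the OLS solution
```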

Specification Issues and Least Squares

A. Specification Error

Refers to errors resulting from a misspecified model (i.e., an incorrect functional form).

1) Omitted Variables

Suppose the true model is:

    y_i = β_1·x_1i + β_2·x_2i + ε_i

and we estimate:

    y_i = β_2*·x_2i + ε_i*

It can be shown by substitution that:

    β̂_2* = β_2 + β_1·[ COV(x_2, x_1) / VAR(x_2) ]

Because there is no guarantee that the second term equals zero, the estimate β̂_2* in the misspecified equation will be biased. Because this bias does not disappear as n → ∞, the parameter estimate will be inconsistent as well. However, if COV(x_2, x_1) = 0 (i.e., x_2 and x_1 are not correlated), then the estimators will be BLUE, except for the intercept.

2) Presence of Irrelevant Variables

Suppose the true model is:

    y_i = β_2·x_2i + ε_i

and we estimate:

    y_i = β_2*·x_2i + β_3*·x_3i + ε_i*

The irrelevant variable x_3 implies we are not accounting for the parameter restriction β_3* = 0. In general, not accounting for all available information leads to a loss of efficiency, but no loss of consistency and no bias.

So β̂_2* is unbiased and consistent, E(β̂_2*) = β_2, but it is not efficient, since VAR(β̂_2*) > VAR(β̂_2). The exception is when COV(x_2, x_3) = 0, in which case the estimators are again BLUE, except for the intercept.

3) Nonlinearities

Suppose the true model is:

    y_i = β_2·x_2i + β_3·x_2i² + β_4·x_2i³ + ε_i

and we estimate:

    y_i = β_2*·x_2i + ε_i*

This results in the same consequences as omitted variables (i.e., biased and inconsistent parameter estimates).

B. Non-zero Disturbance Mean (violation of assumption A2)

That is, E(ε_i) ≠ 0. Cause: can result from consistently positive or negative errors of measurement in Y. If an intercept (β_0) is excluded, the parameter estimates will be biased and inconsistent. If an intercept is included, it will be a biased estimate of the true intercept, but all other parameters will be BLUE.

C. Errors in Variables (violation of assumption A5)

If we have y_i = β·x_i + ε_i and:

1) y_i is measured with error, i.e., we use y_i* = y_i + μ_i (μ_i is the error):
   - If COV(μ_i, x_i) = 0, then β̂ is unbiased and consistent.
   - If COV(μ_i, x_i) ≠ 0, then β̂ is biased and inconsistent.

2) x_i is measured with error: the parameter estimates will be biased and inconsistent.

3) Both y_i and x_i are measured with error: the parameter estimates will be biased and inconsistent.

D. Correlation Between Explanatory Variables and Disturbances (violation of assumption A5)

Implies x does not have fixed values in repeated samples (A5). If x and ε_i are correlated, β̂ will be a biased and inconsistent estimator of β. This correlation problem is the same problem that results from endogenous variables, and it leads to simultaneous equation estimation techniques.

E. Selectivity Bias

Arises when the available data sample is not representative of the entire population because of some selection process. For example, estimating a vehicle-miles-traveled (VMT) equation using only households with new cars will be biased, since households buy new cars in part because they drive more (i.e., we do not know how much people owning used cars would drive if they had new cars).

[Figure: scatter plot of observations split by a selection rule, with the line fitted to the selected sample (line 1) diverging from the true population line (line 2).]

Selectivity results in biased parameter estimates.

F. Non-normality of Disturbances (violation of assumption A1)

Causes:
1. Measurement errors
2. Unobserved parameter variations

Results in hypothesis-testing problems (hypothesis testing depends crucially on the normality assumption). With failure of normality, OLS is inefficient but still consistent.

Diagnostics:
1. Specification tests
2. Plot residuals and see if they are normal

G. Heteroskedasticity (violation of assumption A3)

Results when the disturbance terms do not have equal variances:

    E(ε_1²) ≠ E(ε_2²) ≠ … ≠ E(ε_n²)

Causes:
1. Unequally sized observation units
2. Aggregation

Heteroskedasticity results in OLS estimates that are unbiased and consistent but not efficient.

Diagnostics:
1. Plot of squared residuals versus the independent variable
2. Split-sample regressions

H. Serial Correlation (violation of assumption A4)

Results when E(ε_i·ε_j) ≠ 0 for some i ≠ j.

Causes:
1. Persistent disturbances
2. Omitted smoothly changing variables
3. Time-averaged data

Serial correlation results in OLS estimators that are generally unbiased and consistent but not efficient. If lagged dependent variables are in a model that has serial correlation, the problems are much more severe.

Diagnostic:
1. Durbin-Watson statistic

I. Multicollinearity

Results when independent variables are highly correlated.

Cause:
1. Lack of variation among the data

OLS estimators in the presence of multicollinearity remain BLUE; however, the standard errors of the estimated coefficients can be quite large.

Diagnostic:
1. Condition number of X'X

Simultaneous Equation Models

Interrelated equations with continuous dependent variables, for example:
- Utilization of individual vehicles (measured in kilometers driven) in multivehicle households
- Interrelation between travel time from home to an activity and the duration of the activity
- Interrelation of average vehicle speeds by lane with the vehicle speeds in adjacent lanes

Problem: estimation of equation systems by ordinary least squares (OLS) violates a key OLS assumption, because a correlation between regressors and disturbances will be present; not all independent variables are fixed in repeated samples (violation of A5).

Overview of the simultaneous equations problem

Consider annual vehicle utilization equations (one for each vehicle) in two-vehicle households of the following linear form:

    u_1 = β_1·Z_1 + α_1·X + λ_1·u_2 + ε_1
    u_2 = β_2·Z_2 + α_2·X + λ_2·u_1 + ε_2

where u_1 is the kilometers per year that vehicle 1 is driven, u_2 is the kilometers per year that vehicle 2 is driven, Z_1 and Z_2 are vectors of vehicle attributes (for vehicles 1 and 2 respectively), X is a vector of household characteristics, the β's and α's are vectors of estimable parameters, the λ's are estimable scalars, and the ε's are disturbance terms.

To satisfy regression assumption A5, the value of the dependent variable (left-hand-side variable) must not influence the value of an independent variable (right-hand side). This is not the case in these equations: in the first equation the independent variable u_2 varies as the dependent variable u_1 varies, and in the second equation the independent variable u_1 varies as the dependent variable u_2 varies. Thus u_2 and u_1 are said to be endogenous variables in the first and second equations, respectively.

Reduced Form and the Identification Problem

Reduced form solution: solving two equations in two unknowns to arrive at the reduced forms. Substituting the second equation into the first:

    u_1 = β_1·Z_1 + α_1·X + λ_1·[ β_2·Z_2 + α_2·X + λ_2·u_1 + ε_2 ] + ε_1

and rearranging,

    u_1 = [β_1/(1 − λ_1λ_2)]·Z_1 + [(α_1 + λ_1α_2)/(1 − λ_1λ_2)]·X + [λ_1β_2/(1 − λ_1λ_2)]·Z_2 + (ε_1 + λ_1ε_2)/(1 − λ_1λ_2)

Similarly, substituting the first equation for u_1 in the second equation gives:

    u_2 = [β_2/(1 − λ_2λ_1)]·Z_2 + [(α_2 + λ_2α_1)/(1 − λ_2λ_1)]·X + [λ_2β_1/(1 − λ_2λ_1)]·Z_1 + (ε_2 + λ_2ε_1)/(1 − λ_2λ_1)

Because the endogenous variables u_1 and u_2 are replaced by their exogenous determinants, the equations can be estimated using ordinary least squares (OLS) as:

    u_1 = a_1·Z_1 + b_1·X + c_1·Z_2 + ξ_1
    u_2 = a_2·Z_2 + b_2·X + c_2·Z_1 + ξ_2

where:

    a_1 = β_1/(1 − λ_1λ_2);  b_1 = (α_1 + λ_1α_2)/(1 − λ_1λ_2);  c_1 = λ_1β_2/(1 − λ_1λ_2);  ξ_1 = (ε_1 + λ_1ε_2)/(1 − λ_1λ_2)
    a_2 = β_2/(1 − λ_2λ_1);  b_2 = (α_2 + λ_2α_1)/(1 − λ_2λ_1);  c_2 = λ_2β_1/(1 − λ_2λ_1);  ξ_2 = (ε_2 + λ_2ε_1)/(1 − λ_2λ_1)

OLS estimation of these reduced form models is called indirect least squares (ILS).

Problem: while estimated reduced form models are readily used for forecasting purposes, if inferences are to be drawn from the model system, the underlying parameters need to be determined. Unfortunately, uncovering the underlying parameters (the β's, α's, and λ's) from reduced form models is problematic because either too little or too much information is often available. For example, note that the equations above provide two possible solutions for β_1:

    β_1 = a_1·(1 − λ_1λ_2)   and   β_1 = c_2·(1 − λ_2λ_1)/λ_2

The Identification Problem

In some instances it may be impossible to determine the underlying parameters; in these cases the model system is said to be unidentified. In cases where exactly one equation solves for the underlying parameters, the model system is said to be exactly identified. When more than one equation solves for the underlying parameters (as with the two solutions for β_1 above), the model system is said to be overidentified.

Order Condition

Determines an equation to be identified if the number of all variables excluded from the equation in an equation system is greater than or equal to the number of endogenous variables in the equation system minus one. For example, in the first equation of the original equation system above, the number of elements in the vector Z_2 (an exogenous vector excluded from the equation) must be greater than or equal to one, because there are two endogenous variables in the equation system (u_1 and u_2).

Simultaneous Equation Estimation

1) There are two modeling alternatives: single-equation estimation methods and systems estimation methods.
2) The distinction between the two is that systems methods consider all of the parameter restrictions (caused by overidentification) in the entire equation system and account for possible contemporaneous (cross-equation) correlation of disturbance terms.

3) Because system estimation approaches are able to utilize more information (parameter restrictions and contemporaneous correlation), they produce variance-covariance matrices that are at worst equal to, and in most cases smaller than, those produced by single-equation methods (resulting in lower standard errors and higher t-statistics for estimated model parameters).

Single equation methods

1) Indirect least squares (ILS)
   - Applies ordinary least squares to the reduced form models.
   - Consistent but not unbiased.

2) Instrumental variables (IV)
   - Uses an instrument (a variable that is highly correlated with the endogenous variable it replaces, but is not correlated with the disturbance term) to estimate individual equations.
   - Consistent but not unbiased.

3) Two-stage least squares (2SLS)
   - Finds the best instrument for endogenous variables. Stage 1 regresses each endogenous variable on all exogenous variables. Stage 2 uses the regression-estimated values from stage 1 as instruments and estimates the equations with ordinary least squares.
   - Consistent but not unbiased. Generally better small-sample properties than ILS or IV.

4) Limited information maximum likelihood (LIML)
   - Uses maximum likelihood to estimate the reduced form models. Can incorporate parameter restrictions in overidentified equations.
   - Consistent but not unbiased. Has the same asymptotic variance-covariance matrix as 2SLS.
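A minimal sketch of the two stages of 2SLS (assuming NumPy; the simulated system, with one endogenous regressor and one excluded exogenous variable acting as the instrument, is illustrative rather than the vehicle-utilization system above):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
z1 = rng.normal(size=N)            # exogenous variable appearing in the equation
z2 = rng.normal(size=N)            # excluded exogenous variable (the instrument)
e = rng.normal(size=N)
u2 = 1.0 + 0.8 * z2 + e            # endogenous regressor (shares the disturbance e)
u1 = 0.5 + 1.5 * z1 + 0.7 * u2 + e # structural equation of interest

# Stage 1: regress the endogenous variable on all exogenous variables.
Z = np.column_stack([np.ones(N), z1, z2])
u2_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u2)

# Stage 2: replace u2 with its fitted value and apply OLS.
X2 = np.column_stack([np.ones(N), z1, u2_hat])
beta_2sls = np.linalg.solve(X2.T @ X2, X2.T @ u1)
print(beta_2sls)  # consistent estimates of roughly [0.5, 1.5, 0.7]
```

Plain OLS of u1 on u2 would be biased here because u2 is correlated with the disturbance; the fitted u2_hat purges that correlation.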

System equation methods

1) Three-stage least squares (3SLS)
   - Stage 1 obtains 2SLS estimates of the model system. Stage 2 uses the 2SLS estimates to compute residuals, from which cross-equation correlations are determined. Stage 3 uses generalized least squares (GLS) to estimate the model parameters.
   - Consistent and more efficient than single-equation estimation methods.

2) Full information maximum likelihood (FIML)
   - Similar to LIML but accounts for contemporaneous correlation of disturbances in the likelihood function.
   - Consistent and more efficient than single-equation estimation methods. Has the same asymptotic variance-covariance matrix as 3SLS.

A note on generalized least squares estimation

Ordinary least squares (OLS) assumes that disturbance terms have equal variances and are not correlated. Generalized least squares (GLS) is used to relax these OLS assumptions. Under OLS assumptions, in matrix notation:

    E(εε') = σ²·I

where E(·) denotes expected value; ε is an n × 1 column vector of equation disturbance terms (n being the total number of observations in the data); ε' is the 1 × n transpose of ε; σ² is the disturbance-term variance; and I is the n × n identity matrix:

    I = [ 1  0  …  0
          0  1  …  0
          …
          0  0  …  1 ]

When heteroskedasticity is present, E(εε') = Ω, where Ω is the n × n matrix:

    Ω = [ σ_1²  0    …  0
          0    σ_2²  …  0
          …
          0    0    …  σ_n² ]

For disturbance-term correlation, E(εε') = σ²·Ω, where:

    Ω = [ 1        ρ        …  ρ^(N−1)
          ρ        1        …  ρ^(N−2)
          …
          ρ^(N−1)  ρ^(N−2)  …  1 ]

Recall that in ordinary least squares, parameters are estimated from:

    β̂ = (X'X)^(−1)·X'Y

where β̂ is a p × 1 column vector (p is the number of parameters); X is an n × p matrix of data; X' is the transpose of X; and Y is an n × 1 column vector. Using Ω, this is rewritten as the GLS estimator:

    β̂ = (X'Ω^(−1)X)^(−1)·X'Ω^(−1)Y
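A sketch of the GLS formula for the heteroskedastic case (assuming NumPy, and assuming Ω is known for illustration; in practice it must be estimated, as noted next):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 300
x = rng.uniform(1, 10, N)
sigma2 = 0.5 * x**2                  # disturbance variance grows with x
y = 1.0 + 2.0 * x + rng.normal(0, np.sqrt(sigma2))

X = np.column_stack([np.ones(N), x])
Omega_inv = np.diag(1.0 / sigma2)    # inverse of the diagonal Omega matrix

# GLS: beta = (X' Omega^-1 X)^-1 X' Omega^-1 y
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(beta_gls)
```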

The most difficult aspect of GLS estimation is obtaining an estimate of the Ω matrix. In 3SLS, it is estimated using the initial 2SLS parameter estimates.

Hypothesis Testing and Diagnostics for Continuous Dependent Variable Models

The objective of hypothesis testing and diagnostics is to determine the "best" model fit to a specified data set.

A. Assessment of Estimated Coefficients

The statistic most commonly used to evaluate coefficients is the t-statistic, defined as:

    t_DF = (β̂ − β) / S_β̂

where t_DF is the t-statistic with DF degrees of freedom, DF = N − K (N minus the number of coefficients in the model); β̂ is the estimated parameter; β is the value of the parameter being tested against (usually zero); and S_β̂ is the standard error of β̂ (i.e., the square root of VAR(β̂)).

Example: suppose we estimate the model Y = A + B·x with 30 observations and find (t-statistics calculated with β = 0):

    Coeff.   Value   Standard Error   t-stat
    A         2.47             1.92     1.29
    B        -3.13             1.33    -2.35

We wish to test whether A and B are significantly different from zero. For both, DF = N − 2 = 28. To test whether A > 0 and B < 0, use a one-tailed t-test. From tables, the critical values are:

    t(0.90, 28) = 1.313   (90% confidence level)
    t(0.99, 28) = 2.467   (99% confidence level)

The hypotheses are:

    H_0: A = 0, B = 0
    H_A: A > 0, B < 0

For A, 1.29 < 1.313, so we can only be about 89% confident that A > 0.
For B, |−2.35| > 1.313 but |−2.35| < 2.467, so we can be more than 90% confident (roughly 98-99%) that B < 0, but not 99% confident.

If we want to test A ≠ 0 and B ≠ 0, we use a two-tailed test. From tables, the critical values are:

    t(0.90, 28) = 1.701   (90% confidence level)
    t(0.99, 28) = 2.763   (99% confidence level)

The hypotheses are:

    H_0: A = 0, B = 0
    H_A: A ≠ 0, B ≠ 0

We will be less confident in rejecting H_0, since the critical t-values are larger for the two-tailed test.

B. Overall Model Assessment

1) R-squared

The most commonly used statistic is the R-squared, the ratio of the data variance explained by the model to the total data variance:

    R² = Σ(Ŷ_i − Ȳ)² / Σ(Y_i − Ȳ)²   (explained variation / total variation in Y)

or equivalently:

    R² = 1 − Σê_i² / Σ(Y_i − Ȳ)² = 1 − SSR / total variation in Y   (SSR is the residual variation)

Generally, the higher the R-squared value, the better. However, it is important to consider:

a) The amount of variance in the data. Data with little variance may produce high R²'s even though the model is not explaining much. Conversely, data with much variance may produce low R²'s while still explaining much of the underlying process. As a rule: it may be better to explain a little of a lot of variance than a lot of a little variance.

b) The number of independent variables in the model. The R² statistic will always increase as more variables are added. To resolve this problem, the corrected R-squared statistic is used:

    R̄² = 1 − (1 − R²)·(N − 1)/(N − K)

where N is the number of observations and K is the number of parameters in the model. The corrected R̄² accounts for the number of variables in the model and therefore can decline when additional variables are added.
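A short sketch computing R² and the corrected R̄² from fitted values (assuming NumPy; y_hat would come from any of the estimators above):

```python
import numpy as np

def r_squared(y, y_hat, K):
    """Return (R2, corrected R2) for a model with K estimated parameters."""
    N = len(y)
    ssr = np.sum((y - y_hat) ** 2)        # residual variation
    sst = np.sum((y - y.mean()) ** 2)     # total variation in Y
    r2 = 1.0 - ssr / sst
    r2_bar = 1.0 - (1.0 - r2) * (N - 1) / (N - K)
    return r2, r2_bar
```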

2) F-statistic

The F-statistic is used to test whether the model is significantly different from zero (i.e., whether a relation exists at all). It tests the joint hypothesis that all parameters are equal to zero. For finding critical values of F from tables, the degrees of freedom are K − 1 and N − K, where N is the number of observations and K is the number of parameters in the model. Generally, if the t-statistics and R² are good, the F-statistic will be acceptable.

3) Durbin-Watson statistic

This statistic is used to test for the presence of serial correlation (autocorrelation) of disturbances. The further the statistic is from 2.0, the less confident we can be about the absence of serial correlation.

4) Condition number

Used to determine the extent of multicollinearity; it is derived from the characteristic roots of the X'X matrix:

    Condition number = largest characteristic root / smallest characteristic root

    CN < 10          no multicollinearity
    10 < CN < 100    some problems
    CN > 100         serious multicollinearity

Count Data Models

Count data consist of non-negative integer values. Examples: the number of driver route changes per week, the number of trip departure changes per week, drivers' frequency of use of ITS technologies over some time period, and the number of accidents observed on road segments per year.

Count data can be properly modeled using a number of methods, the most popular of which are Poisson and negative binomial regression models.

Poisson Regression Model

Consider the number of accidents occurring per year at various intersections in a city. In a Poisson regression model, the probability of intersection i having y_i accidents per year (where y_i is a non-negative integer) is given by:

    P(y_i) = EXP(−λ_i)·λ_i^(y_i) / y_i!

where P(y_i) is the probability of intersection i having y_i accidents per year, and λ_i is the Poisson parameter for intersection i, which equals intersection i's expected number of accidents per year, E[y_i].

Poisson regression models are estimated by specifying the Poisson parameter λ_i (the expected number of events per period) as a function of explanatory variables. The most common relationship between the explanatory variables and the Poisson parameter is the log-linear model:

    λ_i = EXP(β·X_i)   or, equivalently,   LN(λ_i) = β·X_i

where X_i is a vector of explanatory variables and β is a vector of estimable coefficients. In this formulation, the expected number of events per period is:

    E[y_i] = λ_i = EXP(β·X_i)

For model estimation, note that the likelihood function is:

    L(β) = Π_i P(y_i)

So, with the Poisson equation:

    L(β) = Π_i EXP(−λ_i)·λ_i^(y_i) / y_i!

Since λ_i = EXP(β·X_i):

    L(β) = Π_i EXP(−EXP(β·X_i))·[EXP(β·X_i)]^(y_i) / y_i!

which gives the log-likelihood:

    LL(β) = Σ_{i=1}^{n} [ −EXP(β·X_i) + y_i·β·X_i − LN(y_i!) ]

Poisson Regression Model Goodness-of-Fit Measures

The likelihood ratio test is a common test used to assess two competing models; it provides evidence in support of one model over the other. The likelihood ratio test statistic is:

    −2·[ LL(β_R) − LL(β_U) ]

where LL(β_R) is the log-likelihood at convergence of the "restricted" model (sometimes taken to have all coefficients in β equal to 0, or to include only the constant term, to test the overall fit of the model) and LL(β_U) is the log-likelihood at convergence of the unrestricted model. This statistic is χ²-distributed with degrees of freedom equal to the difference in the numbers of coefficients in the restricted and unrestricted models (the difference in the lengths of the β_R and β_U coefficient vectors).

Another measure of overall model fit is the ρ² statistic:

    ρ² = 1 − LL(β)/LL(0)

where LL(β) is the log-likelihood at convergence with coefficient vector β and LL(0) is the initial log-likelihood (with all coefficients set to zero).

The perfect model would have a likelihood function equal to one (all selected alternative outcomes would be predicted by the model with probability one, and the product of these across the observations would also be one), and the log-likelihood would be zero, giving a ρ² of one. The ρ² statistic lies between zero and one, and the closer it is to one, the more variance the estimated model is explaining.

Truncated Poisson Regression Model

Truncation of data can occur in the routine collection of transportation data. For example, if we record the number of times per week an in-vehicle navigation system is used on the morning commute to work during weekdays, the data are right-truncated at 5, the maximum possible number of uses in any given week. Estimating a Poisson regression model without accounting for this truncation will result in biased estimates of the parameter vector β, and erroneous inferences will be drawn. Fortunately, the Poisson model is adapted easily to account for such truncation. The right-truncated Poisson model is written as:

    P(y_i) = [ λ_i^(y_i) / y_i! ] / [ Σ_{m_i=0}^{r} λ_i^(m_i) / m_i! ]

where P(y_i) is the probability of commuter i using the system y_i times per week; λ_i is the Poisson parameter for commuter i; m_i is the number of uses per week; and r is the right-truncation value (in this case, 5 times per week).
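As an illustration of the estimation machinery for count models, a minimal sketch of Poisson regression by maximum likelihood (assuming SciPy; the data are simulated and the single covariate is illustrative), implementing the log-likelihood given earlier:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))                 # simulated counts

def neg_ll(beta):
    lam = np.exp(X @ beta)   # lambda_i = EXP(beta X_i)
    # LL = sum over i of [-lambda_i + y_i * beta X_i - ln(y_i!)]
    return -np.sum(-lam + y * (X @ beta) - gammaln(y + 1))

res = minimize(neg_ll, x0=np.zeros(2))
print(res.x)  # approximately beta_true
```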

Negative Binomial Regression Model

The Poisson distribution restricts the mean and variance to be equal: E[y_i] = VAR[y_i]. If this equality does not hold, the data are said to be underdispersed (E[y_i] > VAR[y_i]) or overdispersed (E[y_i] < VAR[y_i]), and the coefficient vector will be biased if corrective measures are not taken. To account for cases where E[y_i] ≠ VAR[y_i], a negative binomial model is used. The negative binomial model is derived by rewriting the λ_i equation such that:

    λ_i = EXP(β·X_i + ε_i)

where EXP(ε_i) is a gamma-distributed error term with mean 1 and variance α². The addition of this term allows the variance to differ from the mean:

    VAR[y_i] = E[y_i]·[1 + α·E[y_i]] = E[y_i] + α·E[y_i]²

The Poisson regression model is a limiting model of the negative binomial regression model as α approaches zero, so the selection between these two models depends on the value of α. The parameter α is referred to as the overdispersion parameter. The negative binomial distribution has the form:

    P(y_i) = [ Γ((1/α) + y_i) / (Γ(1/α)·y_i!) ] · [ (1/α) / ((1/α) + λ_i) ]^(1/α) · [ λ_i / ((1/α) + λ_i) ]^(y_i)

where Γ(·) is the gamma function. This results in the likelihood function:

    L(λ_i) = Π_i [ Γ((1/α) + y_i) / (Γ(1/α)·y_i!) ] · [ (1/α) / ((1/α) + λ_i) ]^(1/α) · [ λ_i / ((1/α) + λ_i) ]^(y_i)

Zero-Inflated Poisson and Negative Binomial Regression Models

Zero events can arise from two qualitatively different conditions:
1. One condition may result from simply failing to observe an event during the observation period.
2. Another, qualitatively different condition may result from an inability to ever experience an event.

Two states can thus be present, one being a normal count-process state and the other a zero-count state. A zero-count state refers to situations where the likelihood of an event occurring is extremely rare, in comparison to the normal-count state where event occurrence is inevitable and follows some known count process.

Two aspects of this distinction of the zero state are noteworthy:
1. There is a preponderance of zeros in the data, more than would be expected under a Poisson process.
2. A sampling unit is not required to remain in the zero or near-zero state in perpetuity; it can move from the zero or near-zero state to the normal-count state with positive probability.

Data obtained from two-state regimes (normal-count and zero-count states) often suffer from overdispersion if treated as coming from a single, normal-count state, because the number of zeros is inflated by the zero-count state.

The zero-inflated Poisson (ZIP) model assumes that the events Y = (y_1, y_2, …, y_n) are independent and that:

    y_i = 0   with probability   p_i + (1 − p_i)·EXP(−λ_i)
    y_i = y   with probability   (1 − p_i)·EXP(−λ_i)·λ_i^y / y!,   y = 1, 2, 3, …

where y is the number of events per period.

The zero-inflated negative binomial (ZINB) regression model follows a similar formulation, with the events Y = (y_1, y_2, …, y_n) independent and:

    y_i = 0   with probability   p_i + (1 − p_i)·u_i^(1/α)
    y_i = y   with probability   (1 − p_i)·[ Γ((1/α) + y)·u_i^(1/α)·(1 − u_i)^y ] / [ Γ(1/α)·y! ],   y = 1, 2, 3, …

where u_i = (1/α) / ((1/α) + λ_i).

Zero-inflated models imply that the underlying data-generating process has a splitting regime that provides for two types of zeros. The splitting process can be assumed to follow a logit (logistic) or probit (normal) probability process, or some other probability process.

A point to remember is that there must be an underlying justification for believing that a splitting process exists (resulting in two distinct states) prior to fitting this type of statistical model; there should be a basis for believing that part of the process is in a zero-count state.

To test the appropriateness of using a zero-inflated model rather than a traditional model, Vuong (1989) proposed a test statistic for non-nested models that is well suited for situations where the distributions (Poisson or negative binomial) are specified. The statistic is calculated as, for each observation i:

    m_i = LN[ f_1(y_i | X_i) / f_2(y_i | X_i) ]

where f_1(y_i | X_i) is the probability density function of model 1 and f_2(y_i | X_i) is the probability density function of model 2. Using this, Vuong's statistic for testing the non-nested hypothesis of model 1 versus model 2 is (Greene, 2000; Shankar et al., 1997):

    V = √n · m̄ / S_m

where m̄ is the mean (1/n)·Σ_{i=1}^{n} m_i and S_m is the standard deviation of the m_i. Vuong's value is asymptotically standard normal distributed (to be compared with z-values). If |V| is less than V_critical (1.96 for a 95% confidence level), the test does not support the selection of one model over the other. Large positive values of V greater than V_critical favor model 1 over model 2, whereas large negative values favor model 2.
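Given each model's predicted probability of the observed outcome for every observation, the Vuong statistic is straightforward to compute. A sketch (assuming NumPy; f1 and f2 are arrays the analyst supplies from the two fitted models):

```python
import numpy as np

def vuong_statistic(f1, f2):
    """Vuong test statistic for non-nested model 1 vs. model 2.

    f1, f2: arrays of P(y_i | X_i) under each model, one entry per observation.
    """
    m = np.log(f1 / f2)   # m_i = LN[f1(.)/f2(.)]
    n = len(m)
    return np.sqrt(n) * m.mean() / m.std(ddof=1)

# Interpretation: |V| < 1.96 means neither model is favored at the 95% level;
# V > 1.96 favors model 1; V < -1.96 favors model 2.
```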

Because overdispersed data will almost always include excess zeros, it is not always easy to determine whether excess zeros arise from true overdispersion or from an underlying splitting regime. This could lead one to erroneously choose a negative binomial model when the correct model may be a zero-inflated Poisson. The use of a zero-inflated model may also simply be capturing model misspecification that results from factors such as unobserved effects (heterogeneity) in the data.

Discrete Outcome Models

Examples of discrete data (unordered):
- Mode of travel (automobile, bus, rail transit)
- Type or class of vehicle owned
- Type of a vehicular accident (run-off-road, rear-end, head-on, etc.)

Examples of discrete data (ordered):
- Telecommuting-frequency data with outcomes of never, sometimes, and frequently

In contrast to data that are not ordered, ordinal discrete data possess additional information on the ordering of responses that can be used to improve the efficiency of the model's parameter estimates.

Models of Discrete Data

For unordered discrete outcomes, start with a linear function of covariates that influences specific discrete outcomes. For example, in the event of a vehicular accident, possible discrete crash outcomes are rear-end, sideswipe, run-off-road, head-on, turning, and other.

Let T_in be a linear function that determines discrete outcome i for observation n, such that:

    T_in = β_i·X_in

where β_i is a vector of estimable parameters for discrete outcome i and X_in is a vector of the observable characteristics (covariates) that determine discrete outcomes for observation n. To arrive at a statistically estimable probabilistic model, a disturbance term ε_in is added, giving:

    T_in = β_i·X_in + ε_in

Reasons for adding a disturbance term:
1. Variables may have been omitted from the function (some important data may not be available).
2. The functional form may be incorrectly specified (it may not be linear).
3. Proxy variables may be used (variables that approximate missing variables in the database).
4. Variations in β_i may not be accounted for (β_i may vary across observations).

To derive an estimable model of discrete outcomes, with I denoting all possible outcomes for observation n and P_n(i) being the probability of observation n having discrete outcome i (i ∈ I):

    P_n(i) = P(T_in ≥ T_In)   ∀ I ≠ i

Substituting for T_in:

    P_n(i) = P(β_i·X_in + ε_in ≥ β_I·X_In + ε_In)   ∀ I ≠ i

or:

    P_n(i) = P(β_i·X_in − β_I·X_In ≥ ε_In − ε_in)   ∀ I ≠ i

Estimable models are developed by assuming a distribution for the random disturbance terms, the ε's.

Binary and Multinomial Probit Models

Probit models arise when the disturbance terms ε_In are assumed to be normally distributed. In the binary case (two outcomes, denoted 1 and 2):

    P_n(1) = P(β_1·X_1n − β_2·X_2n ≥ ε_2n − ε_1n)

This equation gives the probability of outcome 1 occurring for observation n, where ε_1n and ε_2n are normally distributed with mean zero, variances σ_1² and σ_2² respectively, and covariance σ_12. An attractive feature of normally distributed variates is that the sum or difference of two normal variates is also normally distributed. In this case ε_2n − ε_1n is normally distributed with mean zero and variance σ_1² + σ_2² − 2σ_12. The resulting cumulative normal function is:

    P_n(1) = (1/√(2π)) · ∫_{−∞}^{(β_1·X_1n − β_2·X_2n)/σ} EXP(−w²/2) dw

If Φ(·) is the standardized cumulative normal distribution, then:

    P_n(1) = Φ[ (β_1·X_1n − β_2·X_2n)/σ ]

where σ = (σ_1² + σ_2² − 2σ_12)^0.5.

The term 1/σ is a scaling of the function determining the discrete outcome and can be set to any positive value, although σ = 1 is typically used.

[Figure: general shape of probit outcome probabilities — P_n(1) as an S-shaped function of β_1·X_1n − β_2·X_2n, rising from 0 through 0.5 to 1.0.]

The parameter vector β is readily estimated using standard maximum likelihood methods. If δ_in is defined to equal 1 if the observed discrete outcome for observation n is i and zero otherwise, the likelihood function is:

    L = Π_{n=1}^{N} Π_{i=1}^{I} [ P_n(i) ]^(δ_in)

where N is the total number of observations. In the binary case with i = 1 or 2, the log-likelihood is:

    LL = Σ_{n=1}^{N} { δ_1n·LN Φ[ (β_1·X_1n − β_2·X_2n)/σ ] + (1 − δ_1n)·LN( 1 − Φ[ (β_1·X_1n − β_2·X_2n)/σ ] ) }

The problem with the multinomial probit is that the outcome probabilities are not closed-form, and estimation of the likelihood function requires numerical integration. The difficulties of extending the probit formulation to more than two discrete outcomes have led researchers to consider other disturbance-term distributions.

Multinomial Logit Model

From a model-estimation perspective, a desirable property for an assumed distribution of the disturbances (ε's) is that the maximums of randomly drawn values from the distribution have the same distribution as the values from which they were drawn. The normal distribution does not possess this property (the maximums of randomly drawn values from the normal distribution are not normally distributed). A disturbance-term distribution with such a property greatly simplifies model estimation, because it can be applied to the multinomial case by replacing β_2·X_2n with the highest value (maximum) of all the other β_I·X_In.

Distributions of the maximums of randomly drawn values from some underlying distribution are referred to as extreme value distributions (Gumbel, 1958). Extreme value distributions are categorized into three families: Type 1, Type 2, and Type 3 (see Johnson and Kotz, 1970). The most common extreme value distribution is the Type 1 distribution (sometimes referred to as the Gumbel distribution). It has the desirable property that maximums of randomly drawn values from the extreme value Type 1 distribution are also extreme value Type 1 distributed. The probability density function of the extreme value Type 1 distribution is:

    f(ε) = η·EXP[ −η·(ε − ω) ]·EXP( −EXP[ −η·(ε − ω) ] )

with corresponding distribution function:

    F(ε) = EXP( −EXP[ −η·(ε − ω) ] )

where η is a positive scale parameter, ω is a location parameter (the mode), and the mean is ω + 0.5772/η.

To derive an estimable model based on the extreme value Type 1 distribution, a revised version of the probability equation is used:

    P_n(i) = P[ β_i·X_in + ε_in ≥ max_{I≠i}( β_I·X_In + ε_In ) ]

For the extreme value Type 1 distribution, if all ε_In are independently and identically (equal-variance) distributed random variates with modes ω_In and a common scale parameter η (which implies equal variances), then the maximum of the β_I·X_In + ε_In is extreme value Type 1 distributed with mode:

    (1/η)·LN[ Σ_{I≠i} EXP(η·β_I·X_In) ]

and scale parameter η (see Gumbel, 1958).

[Figure: illustration of extreme value Type 1 densities f(x) with ω = 0 and scale parameters η = 0.5, 1, and 2.]

If ε_n' is a disturbance term associated with the maximum of all possible discrete outcomes I ≠ i, with mode equal to zero and scale parameter η, and β'X_n' is the parameter-covariate product associated with that maximum, then it can be shown that:

    β'X_n' = (1/η)·LN[ Σ_{I≠i} EXP(η·β_I·X_In) ]

This result arises because, for extreme value Type 1 distributed variates ε, the addition of a positive scalar constant, say a, changes the mode from ω to ω + a without affecting the scale parameter η. So, if ε_n' has mode equal to zero and scale parameter η, adding the scalar (1/η)·LN[ Σ_{I≠i} EXP(η·β_I·X_In) ] gives an extreme value distributed variate with mode β'X_n' = (1/η)·LN[ Σ_{I≠i} EXP(η·β_I·X_In) ] and scale parameter η.

Using these results, the probability equation is written as:

    P_n(i) = P( β_i·X_in + ε_in ≥ β'X_n' + ε_n' )

or:

    P_n(i) = P( β_i·X_in − β'X_n' ≥ ε_n' − ε_in )

And, because the difference between two independently distributed extreme value Type 1 variates with common scale parameter η is logistic distributed:

    P_n(i) = 1 / ( 1 + EXP[ η·(β'X_n' − β_i·X_in) ] )

Rearranging terms:

    P_n(i) = EXP(η·β_i·X_in) / [ EXP(η·β_i·X_in) + EXP(η·β'X_n') ]

Substituting β'X_n' = (1/η)·LN[ Σ_{I≠i} EXP(η·β_I·X_In) ] and setting η = 1 (there is no loss of generality), the equation becomes:

    P_n(i) = EXP(β_i·X_in) / ( EXP(β_i·X_in) + EXP( LN[ Σ_{I≠i} EXP(β_I·X_In) ] ) )

or:

    P_n(i) = EXP(β_i·X_in) / Σ_{I} EXP(β_I·X_In)

which is the standard multinomial logit formulation. For estimation of the parameter vectors (β's) by maximum likelihood, the log-likelihood function is:

    LL = Σ_{n=1}^{N} Σ_{i=1}^{I} δ_in·( β_i·X_in − LN[ Σ_I EXP(β_I·X_In) ] )

where I is the total number of outcomes, δ_in is as previously defined, and all other variables are as defined previously. When applying the multinomial logit model, it is important to realize that the choice of the extreme value Type 1 distribution is made on the basis of computational convenience, although this distribution is similar to the normal distribution.
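A sketch of the multinomial logit probabilities and log-likelihood (assuming NumPy; the utility matrix V holds β_i·X_in for each observation and alternative, and subtracting the row maximum is a standard numerical safeguard not discussed in the notes):

```python
import numpy as np

def mnl_probabilities(V):
    """V: (N, I) array of utilities beta_i X_in; returns (N, I) probabilities."""
    expV = np.exp(V - V.max(axis=1, keepdims=True))  # guard against overflow
    return expV / expV.sum(axis=1, keepdims=True)

def mnl_log_likelihood(V, choice):
    """choice: length-N array of chosen alternative indices (where delta_in = 1)."""
    P = mnl_probabilities(V)
    return np.sum(np.log(P[np.arange(len(choice)), choice]))
```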

[Figure: comparison of binary logit and probit outcome probabilities — two S-shaped curves of P(i) against β_1·X_1n − β_2·X_2n, with the logit curve having slightly heavier tails.]

Discrete Data and Utility Theory

From economics, utility (satisfaction) is maximized subject to the prices of the alternatives and an income constraint. Because utility theory has decision-makers selecting a utility-maximizing alternative based on the prices of alternatives and an income constraint, any purchase affects the remaining income, and thus all purchases are interrelated. Problem: one theoretically cannot isolate specific choice situations; restrictions must be placed on utility functions. To illustrate these, a utility function is defined that is determined by the consumption of m goods (y_1, y_2, …, y_m), such that:

    u = f(y_1, y_2, …, y_m)

As an extremely restrictive case, it is assumed that the consumption of one good is independent of the consumption of all other goods. The utility function is then written as:

    u = f_1(y_1) + f_2(y_2) + … + f_m(y_m)

This is referred to as an additive utility function and, in nearly all applications, it is unrealistically restrictive. Example: this assumption implies that acquisitions of two types of breakfast cereal are independent, although it is clear that the purchase of one will affect the purchase of the other.

A more realistic restriction on the utility function is to separate decisions into groups and to assume that consumption of goods within a group is independent of the goods in other groups. This is referred to as separability and is an important construct in applied economic theory. It is this property that permits the focus on specific choice groups, such as the choice of travel mode to work.

Indirect Utility

Normal or direct utility is maximized subject to an income constraint, and this maximization produces a demand for goods y_1, y_2, …, y_m. When applying discrete outcome models, the utility function is typically written with prices and income as arguments. When the utility function is written in this way, it is indirect, and it can be shown that the relationship between this indirect utility and the resulting demand equation for some good m is given by Roy's identity:

    y_m⁰ = − (∂V/∂p_m) / (∂V/∂Inc)

where V is the indirect utility, p_m is the price of good m, Inc is the decision-maker's income, and y_m⁰ is the utility-maximizing demand for good m.

Applying the utility framework within discrete outcome models is straightforward. Using the notation above, T becomes the utility determining the choice (as opposed to a function determining the outcome). But the derivations of discrete outcome models imply that the model is compensatory: changes in the factors that determine the function T_in for each discrete outcome do not matter as long as the total value of the function remains the same. This is potentially problematic in some utility-maximizing choice situations.

Properties and Estimation of Multinomial Logit Models

Consider a commuter's choice of route from home to work, where the choices are to take an arterial, a two-lane road, or a freeway:

    P(a) = e^(V_a) / ( e^(V_a) + e^(V_t) + e^(V_f) )
    P(t) = e^(V_t) / ( e^(V_a) + e^(V_t) + e^(V_f) )
    P(f) = e^(V_f) / ( e^(V_a) + e^(V_t) + e^(V_f) )

where P(a), P(t), and P(f) are the probabilities that commuter n selects the arterial, two-lane road, and freeway respectively, and V_a, V_t, and V_f are the corresponding indirect utility functions. Variables defining these functions are classified into two groups:

1. Those that vary across outcome alternatives (in route choice: distance and number of traffic signals).
2. Those that do not vary across outcome alternatives (commuter income and other commuter-specific characteristics such as number of children, number of vehicles, and age of the commuting vehicle).

The distinction between these two sets of variables is important, because the MNL model is derived using the difference in utilities. Because of this differencing, estimable parameters relating to variables that do not vary across outcome alternatives can, at most, be estimated in I − 1 of the functions determining the discrete outcome (I being the total number of discrete outcomes). The parameter for at least one of the discrete outcomes must be normalized to zero to make parameter estimation possible (this is illustrated in a forthcoming example).

Given these two variable types, the utility functions for the route-choice probabilities above are defined as:

    V_a = β_1a + β_2a·X_a + β_3a·Z
    V_t = β_1t + β_2t·X_t + β_3t·Z
    V_f = β_1f + β_2f·X_f + β_3f·Z

where X_a, X_t, and X_f are vectors of variables that vary across the arterial, two-lane, and freeway choice outcomes respectively, as experienced by commuter n; Z is a vector of characteristics specific to commuter n; the β_1's are constant terms; the β_2's are vectors of estimable parameters corresponding to the outcome-specific variables in the X vectors; and the β_3's are vectors corresponding to the variables that do not vary across outcome alternatives. Note that the constant terms are effectively the same as variables that do not vary across alternate outcomes, and at most are estimated for I − 1 of the outcomes.

Statistical Evaluation

To determine whether an estimated parameter is significantly different from zero, the t-statistic is:

    t = (β − 0) / S.E.(β)

where S.E.(β) is the standard error of the parameter. Note that because the MNL is derived from an extreme value distribution and not a normal distribution, the use of t-statistics is not strictly correct, although in practice it is a reliable approximation of the true significance. A more general and appropriate test is the likelihood ratio test.

The likelihood ratio test statistic is:

    −2·[ LL(β_R) − LL(β_U) ]

where LL(β_R) is the log-likelihood at convergence of the "restricted" model and LL(β_U) is the log-likelihood at convergence of the "unrestricted" model. This statistic is χ²-distributed with degrees of freedom equal to the difference in the numbers of parameters between the restricted and unrestricted models (the difference in the lengths of the β_R and β_U parameter vectors).

Overall model fit is measured by the ρ² statistic (similar in purpose to R² in regression models):

    ρ² = 1 − LL(β)/LL(0)

where LL(β) is the log-likelihood at convergence with parameter vector β and LL(0) is the initial log-likelihood (with all parameters set to zero). As is the case with R² in regression analysis, the disadvantage of the ρ² statistic is that it will always improve as additional parameters are estimated, even if the additional parameters are statistically insignificant. To account for the estimation of potentially insignificant parameters, a corrected ρ² is estimated as:

    ρ²_corrected = 1 − [ LL(β) − K ] / LL(0)

where K is the number of parameters estimated in the model.
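These test statistics are simple to compute from reported log-likelihood values. A sketch (assuming SciPy for the χ² tail probability; the function arguments are whatever the estimation software reports):

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_unrestricted, df):
    """-2[LL(beta_R) - LL(beta_U)], chi-squared distributed with df equal to
    the difference in the number of estimated parameters."""
    stat = -2.0 * (ll_restricted - ll_unrestricted)
    return stat, chi2.sf(stat, df)   # (test statistic, p-value)

def rho_squared(ll_beta, ll_zero, K=0):
    """rho^2 = 1 - LL(beta)/LL(0); pass K > 0 for the corrected version."""
    return 1.0 - (ll_beta - K) / ll_zero
```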