Nonlinear relationships Richard Williams, University of Notre Dame, https://www3.nd.edu/~rwilliam/ Last revised February 20, 2015

Sources: Berry & Feldman's Multiple Regression in Practice, 1985; Pindyck and Rubinfeld's Econometric Models and Economic Forecasts; McClendon's Multiple Regression and Causal Analysis, 1994; SPSS's Curvefit documentation. Also see Hamilton's Statistics with Stata, Updated for Version 9, for more on how Stata can handle nonlinear relationships.

Linearity versus additivity. Remember again that the general linear model is

Yj = α + β1X1j + β2X2j + ... + βkXkj + εj = α + Σ(i=1 to k) βiXij + εj = E(Yj|X) + εj

The assumptions of linearity and additivity are both implicit in this specification.

Additivity = assumption that for each IV X, the amount of change in E(Y) associated with a unit increase in X (holding all other variables constant) is the same regardless of the values of the other IVs in the model. That is, the effect of X1 does not depend on X2; increasing X1 by one unit will have the same effect regardless of the value of X2. With non-additivity, the effect of X1 on Y depends on the value of a third variable, e.g. gender. As we've just discussed, we use models with multiplicative interaction effects when relationships are non-additive.

Linearity = assumption that for each IV, the amount of change in the mean value of Y associated with a unit increase in the IV, holding all other variables constant, is the same regardless of the level of X; e.g. a one-unit increase in X produces the same increase in E(Y) whether X starts low or high. Put another way, the effect of a unit increase in X does not depend on the value of X. With nonlinearity, the effect of X on Y depends on the value of X; in effect, X somehow interacts with itself. This is sometimes referred to as a self interaction. The interaction may be multiplicative but it can take on other forms as well, e.g. you may need to take logs of variables. Examples:

[Figure: plots showing a linear and an exponential relationship between X and Y]

Dealing with nonlinearity in variables. We will see that many nonlinear specifications can be converted to linear form by performing transformations on the variables in the model. For example, if Y is related to X by the equation

E(Y) = α + βX^2

and the relationship between the variables is therefore nonlinear, we can define a new variable Z = X^2. The new variable Z is then linearly related to Y, and OLS regression can be used to estimate the coefficients of the model. There are numerous other cases where, given appropriate transformations of the variables, nonlinear relationships can be converted into models for which coefficients can be estimated using OLS. We'll cover a few of the most important and common ones here, but there are many others.

Detecting nonlinearity and nonadditivity. The key question is whether the slope of the relationship between an IV and a DV can be expected to vary depending on the context. The first step in detecting nonlinearity or nonadditivity is theoretical rather than technical. Once the nature of the expected relationship is understood well enough to make a rough graph of it, the technical work should begin. Hence, ask such questions as: can the slope of the relationship between Xi and E(Y) be expected to have the same sign for all values of Xi? Should we expect the magnitude of the slope to increase as Xi increases, or should we expect it to decrease as Xi increases?

You can do scatterplots of the IV against the DV; sometimes, nonlinearity will be obvious. You can often do incremental F tests or Wald tests like we have used in other situations. Stata's estat ovtest command can also be used in some cases; see below.

Types of nonlinearity. 1. Polynomial models. Some variables have a curvilinear relationship with each other. Increases in X initially produce increases in Y, but after a while subsequent increases in X produce declines in Y, e.g.

[Figure: inverted U-shaped relationship between X and Y]

Polynomial models can estimate such relationships. A polynomial model can be appropriate if it is thought that the slope of the effect of Xi on E(Y) changes sign as Xi increases. For many such models, the relationship between Xi and E(Y) can be accurately reflected with a specification in which Y is viewed as a function of Xi and one or more powers of Xi, as in

Y = α + β1X + β2X^2 + β3X^3 + ... + βM X^M + ε

The graph of the relationship between X and E(Y) consists of a curve with one or more bends, points at which the slope of the curve changes sign. The number of bends nearly always equals M - 1. For M = 2, the curve bends (the slope changes sign) when X = -β1/(2β2). If this value appears within the meaningful range of X, the relationship is nonmonotonic. If this value falls outside the meaningful range of X, the relationship appears monotonic (i.e. Y always decreases or increases as X increases). This model is easily estimated: simply compute X2 = X^2, X3 = X^3, etc., and regress Y on these terms. (In practice, we usually stop at M = 2 or M = 3.) Or, in Stata 11 or higher, use factor variables, e.g. c.x#c.x (equivalent to X^2), c.x#c.x#c.x (equivalent to X^3).

Example: In psychology, the Yerkes-Dodson law predicts that the relationship between physiological arousal and performance will follow an inverted U-shaped function, i.e. higher levels of arousal initially increase performance, but after a certain level of arousal is achieved additional arousal decreases performance.

INTERPRETATION. When M = 2, the β1 coefficient indicates the overall linear trend (positive or negative) in the relationship between X and Y across the observed data. The β2 coefficient indicates the direction of curvature: if the relationship is concave upward, β2 is positive; if concave downward, β2 is negative. For example, a positive coefficient for X and a negative coefficient for X^2 cause the curve to rise initially and then fall.

More generally, a polynomial of order k will have a maximum of k - 1 bends (k - 1 points at which the slope of the curve changes direction); for example, a cubic equation (which includes X, X^2, and X^3) can have 2 bends. Note that the bends do not necessarily have to occur within the observed values of the Xs.
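The transformation-and-turning-point logic just described can be sketched numerically. The following is an illustrative Python/numpy sketch (simulated data, not the handout's): it fits the quadratic by OLS after computing the power term, then locates the bend at X = -b1/(2*b2), where the slope b1 + 2*b2*X changes sign.

```python
import numpy as np

# Illustrative quadratic: rises, then falls (coefficients chosen for the example).
x = np.linspace(0.0, 10.0, 101)
y = 1.0 + 4.0 * x - 1.0 * x**2

# OLS after computing the power term X^2 (the "compute X2 = X^2" step).
X = np.column_stack([np.ones_like(x), x, x**2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Bend point: slope b1 + 2*b2*X = 0  =>  X = -b1/(2*b2).
turning_point = -b1 / (2 * b2)
print(round(turning_point, 6))  # bend at X = 2 for these coefficients
```

Since the bend (X = 2) falls inside the observed range of X (0 to 10), this simulated relationship is nonmonotonic in the sense described above.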

SOME POLYNOMIAL MODELS, WITH QUADRATIC TERMS: [Note: These are often referred to as quadratic models.]

[Figure: six panels illustrating quadratic curves]
b1 positive, b2 positive: Y = X + X^2
b1 positive, b2 negative: Y = X - X^2
b1 negative, b2 positive: Y = -X + X^2
b1 negative, b2 negative: Y = -X - X^2
b1 zero, b2 positive: Y = X^2
b1 zero, b2 negative: Y = -X^2

SOME POLYNOMIAL MODELS, WITH CUBIC TERMS: [NOTE: These are often referred to as cubic models.]

[Figure: four panels illustrating cubic curves]
b1 positive, b2 positive, b3 negative: Y = X + X^2 - X^3
b1 negative, b2 positive, b3 positive: Y = -X + X^2 + X^3
b1 zero, b2 zero, b3 positive: Y = X^3
b1 zero, b2 zero, b3 negative: Y = -X^3

Testing whether polynomial terms are needed. As usual, you can use incremental F tests or Wald tests to test whether polynomial terms belong in a model. In the following example, x2 is x^2, x3 is x^3, and x4 is x^4.

. use https://www.nd.edu/~rwilliam/statafiles/nonlin.dta, clear
. nestreg, quietly: reg y x (x2 x3 x4)
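Under the hood, nestreg's block test is an incremental F test comparing the restricted model (x only) to the full model (x plus its powers). A numpy-only sketch of that computation, on simulated data rather than the handout's:

```python
import numpy as np

# Incremental F test: F = [(RSS_r - RSS_f)/q] / [RSS_f/(n - k_full)],
# where q is the number of added terms.
def rss(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return float(r @ r)

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 1 + x + 0.5 * x**2 + rng.normal(size=200)   # true model has a squared term

Xr = np.column_stack([np.ones_like(x), x])       # block 1: x only
Xf = np.column_stack([Xr, x**2, x**3, x**4])     # block 2 adds x2, x3, x4
q, n, k = 3, len(y), Xf.shape[1]
F = (rss(Xr, y) - rss(Xf, y)) / q / (rss(Xf, y) / (n - k))
print(F > 10)  # the polynomial block is clearly needed for these data
```

The same statistic is what the Wald test of the polynomial coefficients reports after fitting the full model.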

Block 1: x
Block 2: x2 x3 x4

[nestreg output omitted: the block adding x2, x3, and x4 yields a large, highly significant incremental F and a substantial increase in R-squared]

This shows us that at least one polynomial term should be in the model. Or, using a Wald test,

. quietly reg y x x2 x3 x4
. test x2 x3 x4

( 1) x2 = 0
( 2) x3 = 0
( 3) x4 = 0

[The joint F test is highly significant]

Using factor variable notation,

. reg y x c.x#c.x c.x#c.x#c.x c.x#c.x#c.x#c.x

[Regression output omitted]

. test c.x#c.x c.x#c.x#c.x c.x#c.x#c.x#c.x

( 1) c.x#c.x = 0
( 2) c.x#c.x#c.x = 0
( 3) c.x#c.x#c.x#c.x = 0

[Again, the joint F test is highly significant]

Stata also provides the estat ovtest command (ov = omitted variables; you can just use ovtest for short). In its default form, ovtest regresses y on yhat^2, yhat^3, and yhat^4. A significant test statistic indicates that polynomial terms should be added. In this particular

example, ovtest gives the same results as above, but that wouldn't necessarily be true in a more complicated model.

. quietly reg y x
. ovtest

Ramsey RESET test using powers of the fitted values of y
Ho: model has no omitted variables
[The F statistic is highly significant]

As Appendix A explains in more detail, there are various ways to plot the relationship between y and x. Here I will use the user-written routine curvefit, which produces curve estimation regression statistics and related plots between two variables for many different curve estimation regression models. Look at the help file to get the codes for the functions you want. In this case I am telling curvefit to fit and display a linear model and a 4th order polynomial model (function h).

. curvefit y x, f( h)

[curvefit output omitted; the polynomial coefficient estimates match those obtained above]

Here is the graph produced by curvefit:

[Figure: curve fit for y vs. x, showing the linear fit and the 4th order polynomial fit]

That is a little bizarre looking (but then again these are fake data!) In the probably more common case where you just had a squared term, it would look something like this (linear and quadratic fits):

. curvefit y x, f( )

[curvefit output omitted]

[Figure: curve fit for y vs. x, showing the linear and quadratic fits]

2A. Exponential models: growth models. We often think that variables will increase exponentially rather than arithmetically. For example, each year of education may be worth an additional 5% in income, rather than, say, a fixed dollar amount. Hence, for somebody who would otherwise make $20,000 a year, an additional year of education would raise their income by $1,000. For those who would otherwise be expected to make $40,000, an additional year could be worth $2,000. Note that the actual dollar amount of the increase is different, but the percentage increase is the same. Such relationships can often be modeled as

Y = e^(α + βX + ε)

When β is positive, the curve has a positive slope throughout, but the slope gradually increases in magnitude as X increases. When β is negative, the curve has a negative slope throughout and the slope gradually decreases in magnitude as X increases, with the curve approaching the X axis as X gets infinitely large. (NOTE: This is often called a growth model.) When β is positive and small in magnitude, β * 100 is approximately equal to the percentage increase in E(Y) associated with a unit increase in X; e.g. if β = .10, then a unit increase in X will produce about a 10% increase in E(Y). Here is a graph of such a relationship. The curved line is a plot of X versus Y, where there is an exponential relationship between the two.

[Figure: exponential relationship between X and Y]

Following is an example of an exponential growth model. It shows the problems that occur if you instead use a linear model of constant growth.

Exponential (growth) model: income starts at a base amount and grows by a constant percentage each year, compounded annually.

[Table omitted: for each of 30 years, the table lists income, the dollar increase, the linear regression prediction, ln(income), the increase in ln(income), and the log regression prediction. The dollar increase grows every year, while the increase in ln(income) is constant.]

[Figures: actual and predicted income when income is regressed on year (left) and when log income is regressed on year (right)]

Note that income increases by the same percentage each year. In absolute terms, the growth is small at first and then gets bigger and bigger. The linear regression model (left-hand side) predicts the same dollar amount of growth every year; hence, it overestimates growth in the early years and underestimates it later. OLS works much better with the exponential growth model (right-hand side), where the dependent variable is the log of income. Exponentiating the slope from the log regression, e^β, gives one plus the annual growth rate.
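The compounding arithmetic behind the table can be sketched briefly. The base amount and growth rate below are illustrative, not the handout's figures:

```python
import math

# With annual growth rate g, income_t = base * (1 + g)**t. The dollar step
# grows each year, but the step in ln(income) is the constant ln(1 + g).
base, g = 20_000.0, 0.10
income = [base * (1 + g) ** t for t in range(31)]
dollar_steps = [income[t + 1] - income[t] for t in range(30)]
log_steps = [math.log(income[t + 1]) - math.log(income[t]) for t in range(30)]

print(dollar_steps[0] < dollar_steps[-1])             # dollar growth accelerates
print(max(log_steps) - min(log_steps) < 1e-9)         # log growth is constant
print(abs(math.exp(log_steps[0]) - (1 + g)) < 1e-12)  # exp(slope) = 1 + g
```

This is why a linear fit to income must err in a systematic way, while a linear fit to ln(income) is exact for such data.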

Estimation. To estimate the exponential model using OLS: The traditional (but often inferior) approach has been to take the log of both sides of the equation, yielding

ln Y = α + βX + ε

We therefore merely compute a new variable which equals ln Y and regress the Xs on it. In Stata we could do something like

. use "https://www.nd.edu/~rwilliam/statafiles/nonlinln.dta", clear
. gen lninc = ln(inc)
. reg lninc year

[Regression output omitted; the coefficient on year is the estimated growth rate in logs]

The potential problem with this approach is that the log of 0 is undefined; ergo, any cases with 0 (or for that matter negative) values will get dropped from the analysis. Further, most of us don't think in terms of logs of variables; we would rather see how X is related to the unlogged Y. It is therefore often better to estimate this model:

E(Y) = e^(α + βX)

When you do this, Y itself can equal 0; all that is required is that its expected value be greater than zero. In Stata, we can estimate this as a generalized linear model with link log. The commands are

. glm inc year, link(log)

[glm output omitted; the coefficients are similar to those from the logged regression]
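The log-transform approach can be sketched in a few lines of Python/numpy (simulated, noise-free data, not the handout's): if Y = exp(a + b*X), regressing ln(Y) on X recovers a and b. A case with Y = 0 would have to be dropped first, since ln(0) is undefined, which is the drawback that motivates the glm/link(log) alternative.

```python
import numpy as np

# Simulate an exact exponential relationship, then fit ln(y) on x by OLS.
x = np.linspace(0.0, 10.0, 25)
y = np.exp(0.5 + 0.2 * x)   # true alpha = 0.5, beta = 0.2 (illustrative)

A = np.column_stack([np.ones_like(x), x])
a_hat, b_hat = np.linalg.lstsq(A, np.log(y), rcond=None)[0]
print(round(a_hat, 6), round(b_hat, 6))  # recovers 0.5 and 0.2
```

With real data containing zeros, the np.log call would fail (or produce -inf), forcing you to drop those cases; the GLM approach models E(Y) directly and avoids that.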

In this case, it didn't make a whole lot of difference in the estimates, but it might matter more if there were some 0 values for income. We can also plot the results. The original values of income, rather than the logged values, are used in the graph. As you can see, the distance between each point keeps getting bigger and bigger (i.e. the growth fit curve keeps getting steeper and steeper), which is what you expect with exponential growth. With curvefit we use the growth model function.

. curvefit inc year, f( )

[curvefit output omitted]

[Figure: curve fit for inc vs. year, showing the growth fit]

2B. Exponential models: power models. Another common exponential function, especially popular in economics, is

Y = αX^β * ε

which, when you log each side, becomes

ln Y = ln α + β(ln X) + ln ε

Ergo, to estimate this model, you can compute new variables that equal ln Y and ln X (or just compute ln X and then estimate a glm with link log). This model says that every 1% increase in X is associated with a β percent change in E(Y); e.g. if β = 2, a 1% increase in X will produce about a 2% increase in Y. Economists generally refer to the percentage change in E(Y) associated with a 1% increase in X as the elasticity of E(Y) with respect to X. [NOTE: curvefit calls this particular model a power model.]

3. Piecewise regression/switching regression models. Suppose we think that a variable has one linear effect within a certain range of its values, but a different linear effect at a different range. For example, we might think that each additional year of elementary school education is worth $5,000, and each year of college education is worth $8,000; i.e. all years of education are not equally valuable. Piecewise regression models and the more general switching regression models provide a means for dealing with this.

A piecewise regression model allows for changes in slope, with the restriction that the line being estimated be continuous; that is, it consists of two or more straight line segments. The true model is continuous, with a structural break. At the point of the structural break, the slope becomes steeper, but the line remains continuous. The data might follow a pattern such as the following:

[Figure: piecewise regression, a continuous line whose slope changes at the break]

A switching regression model is similar, except that both the intercept and slope can change at the time of the structural break; the regression line need not be continuous. For example,

[Figure: switching regression, a jump in the intercept and a change in slope at the break]

Here, both the slope and the intercept change at the time of the structural break, and the line is no longer continuous. In the case of education, this might occur because of some sort of certification effect; e.g. you get a bonus just for having some college. With both piecewise and switching regressions, the key is to figure out where the meaningful split points are. You also don't want to do this indiscriminately: with a large sample, it can be fairly easy to come up with statistically significant but substantively trivial deviations from linearity. When the breakpoints are not known, more advanced techniques can be used to estimate them and the parameters of the model. Pindyck and Rubinfeld discuss these models further.
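The continuity restriction that distinguishes a piecewise model can be shown with a tiny numeric sketch (hypothetical coefficients, not estimates from the handout's data): below the knot the slope is b1, above it the slope is b1 + b2, and the fitted line never jumps.

```python
# Piecewise mean function: E(Y) = a + b1*x + b2*max(0, x - knot).
# The max() term is zero below the knot, so the line is continuous there.
def piecewise_mean(x, a=1.0, b1=2.0, b2=3.0, knot=12.0):
    return a + b1 * x + b2 * max(0.0, x - knot)

print(piecewise_mean(11) - piecewise_mean(10))  # slope below the knot: 2.0
print(piecewise_mean(14) - piecewise_mean(13))  # slope above the knot: 5.0
```

A switching regression would add a separate jump term, c*(x > knot), letting the intercept shift at the break as well.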

Stata Example. The mkspline command makes it easy to estimate piecewise regression models.

. use https://www.nd.edu/~rwilliam/statafiles/blwh.dta, clear
. mkspline educ1 12 educ2 = educ, marginal
. reg income educ1 educ2

[Regression output omitted; both education terms are highly significant]

In the above, we are allowing for education to have one effect for grades 1-12 (reflected by educ1), and a different effect at higher grades (educ2). The marginal option specifies that the new variables are to be constructed so that, when used in estimation, the coefficients represent the change in the slope from the preceding interval. A key advantage of this is that it makes it possible to test whether the change in slope is significant: if the effect of educ2 is not significant, then the effect of education does not change after the break point.

The above tells us that each of the first 12 years of education produces a certain additional amount of average income, and that for years 13 and up, the effect of each additional year is substantially greater. The t value for educ2 tells us the difference in effects is statistically significant, i.e. college years produce greater increases in income than do earlier years of schooling.

However, the default is to construct the variables so that the coefficients will measure the slopes for the intervals rather than the difference in the slopes. So, if you don't use marginal, you get

. mkspline educ1 12 educ2 = educ
. reg income educ1 educ2

[Regression output omitted; the coefficient on educ1 is unchanged, while the coefficient on educ2 now gives the slope for the college years rather than the change in slope]

Personally, I do not like this latter approach as well, since it doesn't tell you whether the effects of education significantly differ after the break point or not. But, a simple test command will give you that information:

. test educ1=educ2

[Test output omitted]

Note that the square root of the F value, 9.68, is the same as the t value for educ2 in the previous regression. Hence, it is largely a matter of personal preference whether you use the marginal option or not.

As far as I know, curvefit can't graph something like this, but here is an alternative approach (see Appendix A for more details):

. use "https://www.nd.edu/~rwilliam/statafiles/blwh.dta", clear
. mkspline educ1 12 educ2 = educ, marginal
. quietly reg income educ1 educ2
. predict spline
(option xb assumed; fitted values)
. scatter income educ || line spline educ, sort scheme(sj)

[Figure: scatter of income vs. educ with the fitted piecewise regression line overlaid]

Note: If you want to allow for a different intercept, you can do something like

. use "https://www.nd.edu/~rwilliam/statafiles/blwh.dta", clear
. mkspline educ1 12 educ2 = educ, marginal
. gen int = educ > 12
. reg income educ1 educ2 int

[Regression output omitted: the intercept-shift term is statistically significant]

. predict spline
(option xb assumed; fitted values)
. scatter income educ || line spline educ, sort scheme(sj)

[Figure: scatter of income vs. educ with the fitted switching regression line overlaid]

Closing Comments. Visual inspection and empirical tests can often be inconclusive in determining which nonlinear transformation is best. For example, both an exponential model and a piecewise regression model can appear to be consistent with the data:

[Figure: an exponential curve that a piecewise linear fit could also approximate]

Remember, too, that the presence of random error terms will cause the observed data to not show as clear relationships as we have depicted here. In the end, theoretical concerns need to guide you in determining which transformations are most appropriate for the data.

There are lots of other transformations that can be useful. For example, rather than use a log transformation, it is sometimes useful to use the cube root of a variable instead. Unlike the log, a cube root transformation can deal with 0 and negative values.

As of this writing (February 2015), other useful references include

http://fmwww.bc.edu/repec/bocode/t/transint.html (Very good)

http://www.ats.ucla.edu/stat/stata/faq/piecewise.htm

Appendix A: Graphing Nonlinear Relationships with Stata

Stata has several ways to graph nonlinear relationships involving a single Y and a single X.

Use the built-in functions of the twoway command. With twoway, you can easily plot the observed values, the linear fitted values (X only), and the quadratic fitted values (X and X^2). Further, you can combine all these in a single graph if you want. You just need to use the lfit and the qfit options. Example:

. use "https://www.nd.edu/~rwilliam/statafiles/nonlin.dta", clear
. twoway scatter ypp x || lfit ypp x || qfit ypp x, scheme(sj)

[Figure: scatter of ypp vs. x with linear and quadratic fitted lines]

Use the user-written curvefit command. Liu Wei's curvefit command, available from SSC, produces curve estimation regression statistics and related plots between two variables for many different curve estimation regression models. Different plots can be combined. curvefit doesn't give you much direct control over the appearance of the graph, but you can always edit it if you want (e.g. you can change to a different scheme like sj). Look at the help file to get the codes for the functions you want. To reproduce the above graph, we want the linear and quadratic functions.

. curvefit ypp x, f( )

[Figure: curve fit for ypp vs. x, linear and quadratic fits]

To add an X^3 term, we use function h (nth order Polynomial) and the count option (which sets the order of the nth order polynomial model). Example:

. curvefit yppn x, f( h) c()

[Figure: curve fit for yppn vs. x, linear and 3rd order polynomial fits]

To fit an exponential/growth model, use the growth function.

. use "https://www.nd.edu/~rwilliam/statafiles/nonlinln.dta", clear
. curvefit inc year, f()

[Figure: curve fit for inc vs. year, growth fit]

Estimate the models and use the predicted values in the plots. This approach can be especially useful if you have an odd graph that isn't easily plotted via other commands. To reproduce our graph that had X^3 in it,

. use "https://www.nd.edu/~rwilliam/statafiles/nonlin.dta", clear
. quietly reg yppn x
. predict linear
(option xb assumed; fitted values)
. quietly reg yppn x c.x#c.x
. predict quadratic
(option xb assumed; fitted values)
. quietly reg yppn x c.x#c.x c.x#c.x#c.x
. predict cubic
(option xb assumed; fitted values)
. twoway scatter yppn x || line linear x || line quadratic x || line cubic x, scheme(sj)

[Figure: scatter of yppn vs. x with linear, quadratic, and cubic fitted lines]

For an unusual graph like our spline functions (use the sort option if data are not already sorted by x):

. use "https://www.nd.edu/~rwilliam/statafiles/blwh.dta", clear
. mkspline educ1 12 educ2 = educ, marginal
. reg income educ1 educ2

[Regression output omitted; same results as in the piecewise example above]

. predict spline
(option xb assumed; fitted values)
. scatter income educ || line spline educ, sort scheme(sj)

[Figure: scatter of income vs. educ with the fitted spline overlaid]

Appendix B (Optional): The underlying math for piecewise regression

The piecewise regression model can be written as

E(Y) = α + β1X + β2[(X - structural_break_value) * Breakdummy]

where Breakdummy = 1 if X is greater than the structural break value, 0 otherwise. Note that the main effect of Breakdummy is NOT included in the model; this implies that the intercept is the same both before and after the structural break. So, in the case of education, the structural break value would be 12. Those with 12 years of education or less would be coded 0 on the dummy variable (and the interaction), and those with more than 12 years of education would be coded 1 on the dummy. On the interaction term, their value would be [years of education - 12].

The switching regression model can be written as

E(Y) = α + β1X + β2*Breakdummy + β3[(X - structural_break_value) * Breakdummy]

Both of the above correspond to the coding used by the marginal option of Stata's mkspline command. As noted in the Stata example, you can reparameterize these depending on whether you'd rather have the coefficients represent the slope of the interval or the change in the slope from the preceding interval. A listing of the first 20 cases in the data set makes clear how Stata has computed the variables:

[Listing omitted: the first 20 cases, showing educ alongside the spline variables created with and without the marginal option]

As we see, when the marginal option is specified, educ1 = educ, while educ2 = max(0, educ - 12). Hence, the slope for educ2 shows you the difference in effects between the first 12 years of education and any later years. When the marginal option is not specified, educ1 = min(educ, 12) and educ2 = max(0, educ - 12). Hence, the slope for educ2 shows you the effect for each additional year of education after year 12.