Soc 63993, Homework #7 Answer Key: Nonlinear effects/ Intro to path analysis


Richard Williams, University of Notre Dame, https://www3.nd.edu/~rwilliam/
Last revised February 20, 2015

Problem 1. The files nonlinhw.do and nonlinhw.dta will generate the computer runs you need for this problem. Copy them from the course web page. You will also need to install curvefit, available from SSC. (You will need to refer to curvefit's help file so you know what the functions are.) Run the program a few lines at a time; otherwise you will keep erasing your graphs.

There are 4 variables in nonlinhw.dta: X1 (the IV) and Y1, Y2, and Y3 (the DVs). The Stata program does scatterplots of X1 versus each DV and then generates other graphs that model the nonlinear relationship. For each DV in turn, you are to do the following:

- Examine each scatterplot. Explain why the relationship is nonlinear and what type of nonlinearity appears to be present. Put another way, explain the rationale for the follow-up graph of the nonlinear relationship.
  o For Part I only, show a different set of Stata commands that could graph the nonlinear relationship.
  o For Parts I and II only, show how the same models could be estimated using the regress and/or glm commands.
  o For Y3 only, two different curvefits are presented (Parts III and IV). Explain why, based on the graphics alone, it would be difficult to decide which nonlinear specification is most appropriate, and how theory might help you choose.
- Discuss what problems result from a linear (mis)specification. The graphs will help you here.
- For Parts I, II, and III, present a substantive example, real or hypothetical, for which the model you have estimated might be appropriate. Explain why it is appropriate. Do not use any of the examples already given in class.

First off, here is nonlinhw.do:

version 12.1
use https://www3.nd.edu/~rwilliam/statafiles/nonlinhw.dta, clear

********** Part 1.
* Plot of x1 with y1
estimates clear
scatter y1 x1, scheme(sj)
curvefit y1 x1, f(1 4)
* HW: Show another way to graph this model
* HW: Show how to estimate this model using regress and/or glm

********** Part 2.
* Plot of x1 with y2
estimates clear
scatter y2 x1
curvefit y2 x1, f(1 0)
* HW: Show how to estimate this model using regress and/or glm

********** Part 3.
* Plot of x1 with y3.
estimates clear
scatter y3 x1
mkspline xle0 0 xgt0 = x1, marginal
reg y3 x1
predict linear
reg y3 xle0 xgt0
predict spline
scatter y3 x1 || line linear x1 || line spline x1, sort scheme(sj)

******** Part 4.
* As this shows, a polynomial model would also be plausible for y3.
* In practice, it is often hard to tell just from the scatterplot what
* transformation is best, so theory is important.
estimates clear
curvefit y3 x1, f(1 4)
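As noted above, curvefit is user-written and available from SSC. One way to install it and check its help file before running the do file (these two commands are not part of nonlinhw.do):

. ssc install curvefit
. help curvefit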

The scatterplot of X1 with Y1 is

. use https://www3.nd.edu/~rwilliam/statafiles/nonlinhw.dta, clear

. ********** Part 1.
. * Plot of x1 with y1
. estimates clear

. scatter y1 x1, scheme(sj)

[Scatterplot of y1 against x1 omitted]

The U-shaped, curvilinear form suggests that a polynomial model is called for. There appears to be one bend, so the model should include terms for X1 and X1^2. The curvefit command estimates the linear and quadratic models and generates the graph.

. curvefit y1 x1, f(1 4)

Curve Estimation between y1 and x1
--------------------------------------------
    Variable |     Linear    Quadratic
-------------+------------------------------
b0           |
       _cons |  4.5251426     2.966442
             |      24.71        78.56
             |     0.0000       0.0000
b1           |
       _cons |    1.00287      1.00287
             |       9.64        70.16
             |     0.0000       0.0000
b2           |
       _cons |               .50280663
             |                   55.37
             |                  0.0000
-------------+------------------------------
Statistics   |
           N |         61           61
        r2_a |  .60516076    .99254275
--------------------------------------------
legend: b/t/p

[Graph: Curve fit for y1, showing y1 (Observed), Linear Fit, and Quadratic Fit]

As we can see, the linear model (where Y1 is regressed only on X1) at first underestimates several values of y1, then overestimates them, then goes back to underestimating them. By way of contrast, the quadratic model (where Y1 is regressed on X1 and X1^2) matches the observed data almost perfectly.

We can also graph the relationship using these Stata commands:

. scatter y1 x1 || lfit y1 x1 || qfit y1 x1

[Graph omitted: scatter of y1 against x1 with the linear and quadratic fitted values overlaid]

To estimate the quadratic model using the regress command (which gives the same estimates that curvefit did),

. reg y1 x1 c.x1#c.x1

      Source |       SS       df       MS              Number of obs =      61
-------------+------------------------------           F(  2,    58) = 3993.93
       Model |   308.65331     2  154.326655           Prob > F      =  0.0000
    Residual |  2.24113791    58  .038640309           R-squared     =  0.9928
-------------+------------------------------           Adj R-squared =  0.9925
       Total |  310.894448    60  5.18157413           Root MSE      =  .19657

------------------------------------------------------------------------------
          y1 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          x1 |    1.00287   .0142947    70.16   0.000     .9742561    1.031484
   c.x1#c.x1 |   .5028066   .0090808    55.37   0.000     .4846294    .5209838
       _cons |   2.966442    .037761    78.56   0.000     2.890855    3.042029
------------------------------------------------------------------------------

Such a model might explain the link between political ideology and political activism: the more extreme one is in his or her political views (in either direction), the more likely he or she is to be politically active. Those in the middle of the road are least active. The tendency is more pronounced for right-wing ideologues than for left-wing ones.

For Y2, the plot with X1 is

. ********** Part 2.
. * Plot of x1 with y2
. estimates clear

. scatter y2 x1

[Scatterplot of y2 against x1 omitted]

This suggests exponential growth: the points increase slowly at first, and then grow by larger and larger amounts. Plotting a linear and an exponential model with curvefit,

. curvefit y2 x1, f(1 0)

Curve Estimation between y2 and x1
--------------------------------------------
    Variable |     Linear       Growth
-------------+------------------------------
b0           |
       _cons |  83.005824     2.083302
             |       5.34         8.83
             |     0.0000       0.0000
b1           |
       _cons |  64.366035     1.496315
             |       7.29        17.34
             |     0.0000       0.0000
-------------+------------------------------
Statistics   |
           N |         61           61
        r2_a |  .46518882     .9519786
--------------------------------------------
legend: b/t/p

[Graph: Curve fit for y2, showing y2 (Observed), Linear Fit, and Growth Fit]

As curvefit shows, if we just use a linear model with Y2 dependent, we will first underestimate the values of y2, then overestimate them, then go back to underestimating them again.

We could estimate a model where we computed the log of y2 and regressed it on x1. The potential problem with this approach is that the log of 0 is undefined; ergo, any cases with 0 (or, for that matter, negative) values will get dropped from the analysis. Further, most of us don't think in terms of logs of variables; we would rather see how X is related to the unlogged Y.
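For comparison, here is a minimal sketch of that log-transform approach (these commands are not part of the original answer key, and they assume every value of y2 is strictly positive so that the log is defined):

. gen lny2 = ln(y2)
. reg lny2 x1

The slope from this regression is on the log scale, which is exactly the interpretational drawback just noted.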

It is therefore often better to estimate this model:

E(Y) = e^(α + βX)

When you do this, Y itself can equal 0; all that is required is that its expected value be greater than zero. In Stata, we can estimate this as a generalized linear model with link log. The command and results are

. glm y2 x1, link(log)

Generalized linear models                          No. of obs      =        61
Optimization     : ML                              Residual df     =        59
                                                   Scale parameter =  1631.748
Deviance         =  96273.1221                     (1/df) Deviance =  1631.748
Pearson          =  96273.1221                     (1/df) Pearson  =  1631.748

Variance function: V(u) = 1                        [Gaussian]
Link function    : g(u) = ln(u)                    [Log]

                                                   AIC             =  10.26752
Log likelihood   = -311.1594035                    BIC             =  96030.58

------------------------------------------------------------------------------
             |                 OIM
          y2 |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          x1 |   1.496316   .0824864    18.14   0.000     1.334645    1.657986
       _cons |     2.0833   .2256518     9.23   0.000     1.641031     2.52557
------------------------------------------------------------------------------

Exponential/growth models are good for variables such as income or sales that we expect to increase by percentage rather than absolute amounts. For example, we might predict that each dollar increase in cost would reduce sales by 5%.

For Y3, the plot is

. ********** Part 3.
. * Plot of x1 with y3.
. estimates clear

. scatter y3 x1

[Scatterplot of y3 against x1 omitted]

Here, there seem to be two pieces to the data. For X1 < 0, there is a small slope; then the slope becomes much greater. Ergo, a piecewise regression seems called for, and the mkspline command will make that possible:

. mkspline xle0 0 xgt0 = x1, marginal

. reg y3 x1

      Source |       SS       df       MS              Number of obs =      61
-------------+------------------------------           F(  1,    59) =  386.41
       Model |  3852.18882     1  3852.18882           Prob > F      =  0.0000
    Residual |   588.18379    59  9.96921678           R-squared     =  0.8675
-------------+------------------------------           Adj R-squared =  0.8653
       Total |  4440.37261    60  74.0062102           Root MSE      =  3.1574

------------------------------------------------------------------------------
          y3 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          x1 |   4.513444   .2296068    19.66   0.000     4.054001    4.972886
       _cons |   6.329963   .4042645    15.66   0.000     5.521032    7.138895
------------------------------------------------------------------------------

. predict linear
(option xb assumed; fitted values)

The linear model is a pretty good fit, but we can do better.

. reg y3 xle0 xgt0

      Source |       SS       df       MS              Number of obs =      61
-------------+------------------------------           F(  2,    58) =41949.15
       Model |  4437.30505     2  2218.65252           Prob > F      =  0.0000
    Residual |  3.06756763    58  .052889097           R-squared     =  0.9993
-------------+------------------------------           Adj R-squared =  0.9993
       Total |  4440.37261    60  74.0062102           Root MSE      =  .22998

------------------------------------------------------------------------------
          y3 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        xle0 |   .9967843   .0373837    26.66   0.000     .9219526    1.071616
        xgt0 |   7.033319   .0668686   105.18   0.000     6.899467    7.167171
       _cons |    .968499   .0588672    16.45   0.000     .8506636    1.086334
------------------------------------------------------------------------------

. predict spline
(option xb assumed; fitted values)

. scatter y3 x1 || line linear x1 || line spline x1, sort scheme(sj)

[Graph omitted: scatter of y3 against x1 with the linear and spline fitted values overlaid]

As the graph shows, if we just estimate a linear model, we will first underestimate y3, then overestimate, then underestimate again. The fit of the piecewise model is near perfect in this case.
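One small point about interpretation (this note and the command below are not in the original key): because mkspline was used with the marginal option, the coefficient on xgt0 is the change in slope at the knot rather than the slope itself, so the slope of y3 on x1 for x1 > 0 is roughly .997 + 7.033 = 8.03. After the second regression it could be recovered directly with

. lincom xle0 + xgt0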

Substantive example: Years of college education may have much more impact on earnings than years of elementary education.

Finally, we present an alternative curvefit for Y3. Instead of doing piecewise regression, we fit a quadratic model:

. estimates clear

. curvefit y3 x1, f(1 4)

Curve Estimation between y3 and x1
--------------------------------------------
    Variable |     Linear    Quadratic
-------------+------------------------------
b0           |
       _cons |  6.3299633    2.9758679
             |      15.66        18.74
             |     0.0000       0.0000
b1           |
       _cons |  4.5134436    4.5134436
             |      19.66        75.09
             |     0.0000       0.0000
b2           |
       _cons |               1.0819662
             |                   28.33
             |                  0.0000
-------------+------------------------------
Statistics   |
           N |         61           61
        r2_a |  .86529216    .99076762
--------------------------------------------
legend: b/t/p

[Graph: Curve fit for y3, showing y3 (Observed), Linear Fit, and Quadratic Fit]

As we see, the quadratic model also seems to provide a very good fit to the observed data. Unless relationships are extremely strong, plots of the data may reveal that nonlinearity is present, but they won't necessarily make it obvious what the best solution is. Theory should guide you as you attempt to determine what solution is most appropriate.

Problem 2. A sociologist believes that the following model describes the relationships between X1, X2, X3 and X4. All variables are in standardized form. The hypothesized value of each path is included in the diagram.

[Path diagram: X1 -> X2 (.4); X2 -> X3 (.5); X2 -> X4 (-.7); X3 -> X4 (.3); disturbances u, v, and w point to X2, X3, and X4, respectively]

a. Write out the structural equation for each endogenous variable, using both the names for the paths (e.g. β42) and the estimated value of the path coefficient.

X2 = β21X1 + u = .4X1 + u
X3 = β32X2 + v = .5X2 + v
X4 = β42X2 + β43X3 + w = -.7X2 + .3X3 + w

b. Part of the correlation matrix is shown below. Determine the complete correlation matrix. (Remember, variables are standardized. You can use either normal equations or Sewall Wright, but you might want to use both as a double check.)

             x1        x2        x3        x4
   x1    1.0000
   x2    0.4000    1.0000
   x3         ?         ?    1.0000
   x4   -0.2200         ?         ?    1.0000

Here is the complete correlation matrix:

             x1        x2        x3        x4
   x1    1.0000
   x2    0.4000    1.0000
   x3    0.2000    0.5000    1.0000
   x4   -0.2200   -0.5500   -0.0500    1.0000

To confirm, using normal equations (in this case, though, it may be easier just to look at the diagram and use Sewall Wright):

ρ31 = β31 + β21β32 = 0 + .4*.5 = .20
ρ32 = β32 + β31β21 = .5 + 0*.4 = .5
ρ42 = β42 + β32β43 + β41β21 + β43β31β21 = -.7 + .5*.3 + 0*.4 + .3*0*.4 = -.55
ρ43 = β43 + β41β31 + β42β32 + β41β21β32 + β42β21β31 = .3 + 0*0 + (-.7)*.5 + 0*.4*.5 + (-.7)*.4*0 = -.05

c. (5 pts) Decompose the correlation between X3 and X4 into
   Correlation due to direct effects: .3
   Correlation due to indirect effects: 0
   Correlation due to common causes: -.35
(These three components sum to the total correlation of -.05.)

d. Suppose the above model is correct, but instead the researcher believed in and estimated the following model:

[Path diagram: X3 -> X4, with disturbance w pointing to X4]

What conclusions would the researcher likely draw? In particular, what would the researcher conclude about the effect of changes in X3 on X4? Why would he make these mistakes? Discuss the consequences of this mis-specification.

In the mis-specified model, the standardized coefficient will be the same as the bivariate correlation, -.05. The researcher will therefore conclude that increases in X3 lead to decreases in X4. However, in the correctly specified model, we see that the direct effect of X3 on X4 is .3, i.e. increases in X3 lead to increases in X4. By failing to take into account the common cause, X2, the researcher will mis-estimate not only the magnitude but also the direction of the effect of X3 on X4. If policy issues were involved, policy makers might do exactly the opposite of what they should do.

e. [Optional] Confirm your answer to 2b using Stata, i.e. create a pseudo-replication of the data using corr2data and then use one of the methods described in the notes for making sure that you can reproduce the estimates of the path coefficients given in the diagram.

We can use pathreg or sem (pathreg, from UCLA, must be installed).

. matrix input corr = (1,.4,.2,-.22\.4,1,.5,-.55\.2,.5,1,-.05\-.22,-.55,-.05,1)

. corr2data x1 x2 x3 x4, n(100) corr(corr)
(obs 100)

. pathreg (x2 x1) (x3 x1 x2) (x4 x1 x2 x3)

------------------------------------------------------------------------------
          x2 |      Coef.   Std. Err.      t    P>|t|        Beta
-------------+----------------------------------------------------------------
          x1 |         .4    .092582     4.32   0.000          .4
       _cons |   3.07e-09   .0921179     0.00   1.000           .
------------------------------------------------------------------------------
n = 100    R2 = 0.1600    sqrt(1 - R2) = 0.9165

------------------------------------------------------------------------------
          x3 |      Coef.   Std. Err.      t    P>|t|        Beta
-------------+----------------------------------------------------------------
          x1 |   4.27e-09   .0959412     0.00   1.000    4.27e-09
          x2 |         .5   .0959412     5.21   0.000          .5
       _cons |   1.36e-09   .0874908     0.00   1.000           .
------------------------------------------------------------------------------
n = 100    R2 = 0.2500    sqrt(1 - R2) = 0.8660

------------------------------------------------------------------------------
          x4 |      Coef.   Std. Err.      t    P>|t|        Beta
-------------+----------------------------------------------------------------
          x1 |  -8.76e-09   .0883883    -0.00   1.000   -8.76e-09
          x2 |        -.7         .1    -7.00   0.000         -.7
          x3 |         .3   .0935414     3.21   0.002          .3
       _cons |  -8.66e-09   .0806032    -0.00   1.000           .
------------------------------------------------------------------------------
n = 100    R2 = 0.3700    sqrt(1 - R2) = 0.7937

. sem (x2 <- x1) (x3 <- x2 x1) (x4 <- x2 x3 x1)

Endogenous variables
  Observed:  x2 x3 x4

Exogenous variables
  Observed:  x1

Fitting target model:

Structural equation model                       Number of obs      =       100
Estimation method  = ml
Log likelihood     = -519.3618

------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
Structural   |
  x2 <-      |
          x1 |         .4   .0916515     4.36   0.000     .2203663    .5796337
       _cons |   3.07e-09   .0911921     0.00   1.000    -.1787332    .1787332
  -----------+----------------------------------------------------------------
  x3 <-      |
          x2 |         .5   .0944911     5.29   0.000     .3148008    .6851992
          x1 |   4.27e-09   .0944911     0.00   1.000    -.1851992    .1851992
       _cons |   1.36e-09   .0861684     0.00   1.000     -.168887     .168887
  -----------+----------------------------------------------------------------
  x4 <-      |
          x2 |        -.7   .0979796    -7.14   0.000    -.8920364   -.5079635
          x3 |         .3   .0916515     3.27   0.001     .1203663    .4796337
          x1 |  -8.76e-09   .0866025    -0.00   1.000    -.1697379    .1697379
       _cons |  -8.66e-09   .0789747    -0.00   1.000    -.1547875    .1547875
-------------+----------------------------------------------------------------
Variance     |
        e.x2 |      .8316    .117606                      .6302842    1.097217
        e.x3 |      .7425   .1050054                      .5627537    .9796581
        e.x4 |      .6237   .0882045                      .4727131    .8229128
------------------------------------------------------------------------------
LR test of model vs. saturated: chi2(0) = 0.00, Prob > chi2 = .

The following post-estimation command after sem can confirm our estimates of direct, indirect and total effects (but not correlation due to common cause, alas):

. estat teffects

Direct effects
------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
Structural   |
  x2 <-      |
          x1 |         .4   .0916515     4.36   0.000     .2203663    .5796337
  -----------+----------------------------------------------------------------
  x3 <-      |
          x2 |         .5   .0944911     5.29   0.000     .3148008    .6851992
          x1 |   4.27e-09   .0944911     0.00   1.000    -.1851992    .1851992
  -----------+----------------------------------------------------------------
  x4 <-      |
          x2 |        -.7   .0979796    -7.14   0.000    -.8920364   -.5079635
          x3 |         .3   .0916515     3.27   0.001     .1203663    .4796337
          x1 |  -8.76e-09   .0866025    -0.00   1.000    -.1697379    .1697379
------------------------------------------------------------------------------

Indirect effects
------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
Structural   |
  x2 <-      |
          x1 |          0  (no path)
  -----------+----------------------------------------------------------------
  x3 <-      |
          x2 |          0  (no path)
          x1 |         .2   .0594018     3.37   0.001     .0835746    .3164253
  -----------+----------------------------------------------------------------
  x4 <-      |
          x2 |        .15   .0283473     5.29   0.000     .0944402    .2055598
          x3 |          0  (no path)
          x1 |       -.22    .066453    -3.31   0.001    -.3502455   -.0897545
------------------------------------------------------------------------------

Total effects
------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
Structural   |
  x2 <-      |
          x1 |         .4   .0916515     4.36   0.000     .2203663    .5796337
  -----------+----------------------------------------------------------------
  x3 <-      |
          x2 |         .5   .0944911     5.29   0.000     .3148008    .6851992
          x1 |         .2   .0979796     2.04   0.041     .0079635    .3920365
  -----------+----------------------------------------------------------------
  x4 <-      |
          x2 |       -.55   .1019979    -5.39   0.000    -.7499122   -.3500878
          x3 |         .3   .0916515     3.27   0.001     .1203663    .4796337
          x1 |       -.22     .09755    -2.26   0.024    -.4111945   -.0288055
------------------------------------------------------------------------------

The complete Stata code for problem 2 is

version 12.1
* Homework 7, Path analysis problem
clear all
matrix input corr = (1,.4,.2,-.22\.4,1,.5,-.55\.2,.5,1,-.05\-.22,-.55,-.05,1)
corr2data x1 x2 x3 x4, n(100) corr(corr)
corr
pathreg (x2 x1) (x3 x1 x2) (x4 x1 x2 x3)
sem (x2 <- x1) (x3 <- x2 x1) (x4 <- x2 x3 x1)
estat teffects
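As a quick check (this paragraph is not part of the original key), the estat teffects estimates can be reproduced by hand from the hypothesized path coefficients:

Indirect effect of X2 on X4 = β43β32 = .3*.5 = .15; total effect of X2 on X4 = β42 + β43β32 = -.7 + .15 = -.55
Indirect effect of X1 on X4 = β21β42 + β21β32β43 = .4*(-.7) + .4*.5*.3 = -.28 + .06 = -.22; since X1 has no direct path to X4, the total effect is also -.22
Indirect effect of X1 on X3 = β21β32 = .4*.5 = .20

These hand calculations match the Indirect effects and Total effects tables above.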