
Econ 371 Problem Set #6 Answer Sheet

10.1 This question focuses on the regression results reported in Table 10.1.

a. The first part of this question asks you to predict the number of lives that would be saved in New Jersey if the tax on a case of beer were increased by $1. With a $1 increase in the beer tax, the expected number of lives saved is 0.45 per 10,000 people. Since New Jersey has a population of 8.1 million (810 groups of 10,000), the expected number of lives saved is 0.45 × 810 = 364.5. The 95% confidence interval is (0.45 ± 1.96 × 0.22) × 810 = [15.228, 713.77].

b. When New Jersey lowers its drinking age from 21 to 18, the expected fatality rate increases by 0.028 deaths per 10,000. The 95% confidence interval for the change in the death rate is 0.028 ± 1.96 × 0.066 = [−0.1014, 0.1574]. With a population of 8.1 million, the number of fatalities is expected to increase by 0.028 × 810 = 22.68, with a 95% confidence interval of [−0.1014, 0.1574] × 810 = [−82.13, 127.49].

c. When real income per capita in New Jersey increases by 1%, the expected fatality rate increases by 1.81 deaths per 10,000. The 90% confidence interval for the change in the death rate is 1.81 ± 1.645 × 0.47 = [1.04, 2.58]. With a population of 8.1 million, the number of fatalities is expected to increase by 1.81 × 810 = 1466.1, with a 90% confidence interval of [1.04, 2.58] × 810 = [840, 2092].

d. The low p-value (equivalently, the high F-statistic) from the F-test of the hypothesis that the time effects are all zero suggests that the time effects should be included in the regression.

e. The difference in significance levels arises primarily because the estimated coefficient is higher in (5) than in (4). However, (5) omits two variables (the unemployment rate and real income per capita) that are statistically significant, so the estimated coefficient on the beer tax in (5) may suffer from omitted variable bias; the results from (4) seem more reliable. In general, statistical significance should be used to gauge reliability only if the regression is well specified (no important omitted variables, correct functional form, no simultaneous causality or selection bias, and so forth).

f. In this case, you would define a binary variable west that equals 1 for the western states and 0 for the other states, and then include the interaction term west × (unemployment rate) in the regression corresponding to column (4). Suppose the coefficient on the unemployment rate is β and the coefficient on west × (unemployment rate) is γ. Then β captures the effect of the unemployment rate in the eastern states, β + γ captures the effect in the western states, and γ is the difference between the western and eastern effects. Using the coefficient estimate γ̂ and its standard error SE(γ̂), you can calculate the t-statistic γ̂/SE(γ̂) to test whether γ is statistically significant at a given significance level.
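For part (f), a minimal Stata sketch of this test is shown below. The variable names (fatalityrate, beertax, unrate, perinc, state, year) and the region variable used to build the west dummy are illustrative placeholders rather than the dataset's actual names, and the remaining column (4) regressors are omitted for brevity.

* 10.1(f) sketch: does the unemployment-rate effect differ in the West?
* (all variable names below are hypothetical placeholders)
generate west = (region == "West")                   // hypothetical region variable
generate west_unrate = west * unrate                 // west x unemployment rate
* The state fixed effects absorb west itself, so only the interaction enters.
areg fatalityrate beertax unrate perinc west_unrate i.year, absorb(state) vce(cluster state)
test west_unrate                                     // tests H0: gamma = 0

The coefficient on unrate is then β (the effect in the non-western states), the coefficient on west_unrate is γ, and β + γ is the effect in the western states, as described above.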
10.4 This question focuses on the regression model described in equation (10.11) and asks you to describe the slope and intercept for different entities and time periods. Notice that in this model the slope is the same for every entity and time period, and none of the intercepts change over time (the model has no time fixed effects); the only thing that varies is the intercept, which differs across entities.

a. For Entity 1 in time period 1, D2_1 = D3_1 = ⋯ = Dn_1 = 0, so the model reduces to

Y_11 = β_0 + β_1 X_11 + u_11,   (1)

with an intercept of β_0 and a slope of β_1.

b. For Entity 1 in time period 3, we still have D2_1 = D3_1 = ⋯ = Dn_1 = 0, so the model reduces to

Y_13 = β_0 + β_1 X_13 + u_13,   (2)

again with an intercept of β_0 and a slope of β_1.

c. For Entity 3 in time period 1, D3_3 = 1 and D2_3 = D4_3 = ⋯ = Dn_3 = 0, so the model reduces to

Y_31 = β_0 + γ_3 + β_1 X_31 + u_31,   (3)

with an intercept of β_0 + γ_3 and a slope of β_1.

d. For Entity 3 in time period 3, we still have D3_3 = 1 and D2_3 = D4_3 = ⋯ = Dn_3 = 0, so the model reduces to

Y_33 = β_0 + γ_3 + β_1 X_33 + u_33,   (4)

with an intercept of β_0 + γ_3 and a slope of β_1.
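In practice, equation (10.11) is rarely estimated by typing out the n − 1 entity dummies. A minimal Stata sketch of three equivalent ways to obtain β_1, assuming a panel with illustrative variable names y, x, entity, and time (with entity and time numeric):

* 10.4 sketch: entity fixed effects without writing out D2, ..., Dn by hand
regress y x i.entity, vce(cluster entity)       // explicit entity dummies via factor notation
areg y x, absorb(entity) vce(cluster entity)    // dummies absorbed; same slope on x
xtset entity time
xtreg y x, fe vce(cluster entity)               // within (fixed-effects) estimator; same slope on x

All three give the same estimate of the common slope β_1; only the way the entity-specific intercepts are handled differs.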
10.7 In this question, you are asked to comment on two competing methods for estimating the effect of snowfall on traffic fatalities.

a. The first method adds a regressor containing the average snowfall in each state (AverageSnow_i). The problem with this regressor is that average snowfall does not vary over time, so it is perfectly collinear with the state fixed effects and its coefficient cannot be estimated.

b. In the second approach, snowfall in each state and each year (Snow_it) is used as a regressor. Since Snow_it does vary over time, this method can be used along with state fixed effects.

11.6 This question focuses on the estimated probit model in equation (11.8).

a. You are first asked what the loan denial probability would be for a black applicant with a P/I ratio of 0.35. In this case,

Pr[Y_i = 1 | P/I ratio = 0.35, black = 1] = Φ(−2.26 + 2.74 × 0.35 + 0.71) = Φ(−0.59) = 27.76%.

b. Now you are asked how this probability changes if the P/I ratio is reduced to 0.30. We have

Pr[Y_i = 1 | P/I ratio = 0.30, black = 1] = Φ(−2.26 + 2.74 × 0.30 + 0.71) = Φ(−0.73) = 23.27%,

so the denial probability is 4.4 percentage points lower than in (a).

c. In part (c), you are asked to repeat this exercise for a white loan applicant. In this case,

Pr[Y_i = 1 | P/I ratio = 0.35, black = 0] = Φ(−2.26 + 2.74 × 0.35 + 0.71 × 0) = 9.7%   (5)
Pr[Y_i = 1 | P/I ratio = 0.30, black = 0] = Φ(−2.26 + 2.74 × 0.30 + 0.71 × 0) = 7.5%   (6)

so the change is only 2.2 percentage points.

d. Finally, you are asked whether the marginal effect of the P/I ratio on the probability of mortgage denial depends on race. From the results in parts (a)-(c), we can see that it does: in the probit functional form, the marginal effect depends on the level of the probability, which in turn depends on the race of the applicant. The coefficient on black is statistically significant at the 1% level.

11.7 This question asks you to repeat the previous question, now using the logit model estimates in equation (11.10). In this case,

Pr[Y_i = 1 | P/I ratio = 0.35, black = 1] = Λ(−4.13 + 5.37 × 0.35 + 1.27 × 1) = 27.28%   (7)
Pr[Y_i = 1 | P/I ratio = 0.30, black = 1] = Λ(−4.13 + 5.37 × 0.30 + 1.27 × 1) = 22.29%   (8)

so the denial probability is 4.99 percentage points lower than at a P/I ratio of 0.35. For a white applicant,

Pr[Y_i = 1 | P/I ratio = 0.35, black = 0] = Λ(−4.13 + 5.37 × 0.35 + 1.27 × 0) = 9.53%   (9)
Pr[Y_i = 1 | P/I ratio = 0.30, black = 0] = Λ(−4.13 + 5.37 × 0.30 + 1.27 × 0) = 7.45%   (10)

so the change is only 2.08 percentage points. The logit and probit results are similar.
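These fitted probabilities are easy to reproduce in Stata with the built-in normal() and invlogit() functions. A minimal sketch for the black applicant with a P/I ratio of 0.35, typing in the coefficients from equations (11.8) and (11.10) by hand:

* Probit denial probability (11.6a): black applicant, P/I ratio = 0.35
display normal(-2.26 + 2.74*0.35 + 0.71)
* Logit denial probability (11.7), same applicant
display invlogit(-4.13 + 5.37*0.35 + 1.27)

Up to the rounding of the reported coefficients, these reproduce the 27.76% and 27.28% figures above; changing 0.35 to 0.30 reproduces the other entries.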

The two empirical exercises in this homework use the same dataset: Smoking. The data can be downloaded from the Web site listed in the assignment (which you can also reach from the class website). A program that carries out all of the tasks for problems E11.1 and E11.2 is appended to this answer sheet.

E11.1 This first question asks you to estimate various linear probability models for the smoking data set.

a. This part can be answered using the summarize command and the fact that SE(p̂) = σ̂_Y/√N (a short Stata sketch of this calculation appears after part (e) below). The estimated probabilities of smoking (the mean of smoker) are:

Group             p̂        SE(p̂)
All workers       0.242     0.004
No smoking ban    0.290     0.007
Smoking ban       0.212     0.005

b. This question asks you to determine whether a workplace smoking ban alters the probability of smoking, using a linear probability model. The LPM yields:

Variable       β̂         SE(β̂)
Intercept       0.290     0.007
Smoking ban    −0.078     0.009

The t-statistic on the smoking-ban dummy is −8.66, so the coefficient is statistically significant. Notice that the intercept equals p̂ from part (a) for workers without a smoking ban, and that the intercept plus the slope (0.290 − 0.078 = 0.212) equals p̂ for workers with a ban.

c. In this question, you are asked to estimate a more general LPM that includes a wide variety of covariates and to compare the estimated impact of a smoking ban with the estimate from part (b). The resulting parameter estimates are:

Variable       β̂         SE(β̂)
Intercept      −0.014     0.041
Smoking ban    −0.047     0.009
female         −0.033     0.009
age             0.010     0.002
age²           −0.00013   0.00002
hsdrop          0.323     0.019
hsgrad          0.233     0.013
colsome         0.164     0.013
colgrad         0.045     0.012
black          −0.028     0.016
hispanic       −0.105     0.014

In model (c) the estimated effect of the ban is −0.047, smaller in absolute value than the −0.078 in model (b). Evidently (b) suffers from omitted variable bias: smkban may be correlated with the education/race/gender indicators or with age. For example, workers with a college degree are more likely to work in an office with a smoking ban than high-school dropouts, and college graduates are less likely to smoke than high-school dropouts.

d. The t-statistic on the smoking-ban coefficient is −5.27, so the coefficient is statistically significant at the 1% level.

e. The F-statistic for the joint hypothesis that the four education coefficients are zero is 140.09, with a p-value below 0.01, so the education variables are jointly significant. The omitted education category is a Masters degree or higher, so the coefficients show the increase in the smoking probability relative to someone with a postgraduate degree. For example, the coefficient on colgrad is 0.045, so the probability of smoking for a college graduate is 0.045 (4.5 percentage points) higher than for someone with a postgraduate degree; similarly, the coefficient on hsdrop is 0.323, so the probability of smoking for a high-school dropout is 0.323 (32.3 percentage points) higher than for someone with a postgraduate degree. Because the coefficients are all positive and get smaller as educational attainment increases, the probability of smoking falls as educational attainment rises.
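As noted in part (a), the summarize command is all that is needed for those calculations. A minimal sketch, assuming Smoking.dta is in memory; the display lines, which apply SE(p̂) = σ̂_Y/√N to summarize's returned results, are added here for illustration:

* E11.1(a) sketch: smoking rates and their standard errors
summarize smoker
display r(mean), r(sd)/sqrt(r(N))        // all workers: 0.242 and 0.004
summarize smoker if smkban == 0
display r(mean), r(sd)/sqrt(r(N))        // no smoking ban: 0.290 and 0.007
summarize smoker if smkban == 1
display r(mean), r(sd)/sqrt(r(N))        // smoking ban: 0.212 and 0.005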

E11.2 This question continues the analysis of the smoking data set, now focusing on probit models.

a. This first question asks you to use the same variables as in E11.1(c), but this time in a probit model. The resulting parameter estimates are:

Variable       β̂         SE(β̂)
Intercept      −1.735     0.152
Smoking ban    −0.158     0.029
female         −0.112     0.029
age             0.035     0.007
age²           −0.00047   0.00008
hsdrop          1.142     0.073
hsgrad          0.883     0.060
colsome         0.677     0.061
colgrad         0.235     0.065
black          −0.084     0.053
hispanic       −0.338     0.049

b. The z-statistic on the smoking-ban coefficient is −5.45, very similar to the corresponding t-statistic in the linear probability model. Again, we reject the hypothesis that smkban has a zero coefficient.

c. The chi-squared statistic for the joint test of the education coefficients (447.34) is significant at the 1% level, so, as in the linear probability model, the education variables are jointly significant.

d. In this part, you are asked to compute the smoking probability for Mr. A (a 20-year-old high-school dropout; all other indicators are zero) with and without a workplace smoking ban, and the effect of the ban. We have

Pr[Y_i = 1 | Mr. A, no ban] = Φ[−1.735 + 0.035 × 20 − 0.00047 × 20² + 1.142] = Φ[−0.090] = 0.464
Pr[Y_i = 1 | Mr. A, ban]    = Φ[−1.735 + 0.035 × 20 − 0.00047 × 20² + 1.142 − 0.159] = Φ[−0.249] = 0.402

Therefore a workplace ban would reduce Mr. A's probability of smoking by 0.062 (6.2 percentage points).

e. This question asks you to repeat the calculation for Ms. B, a 40-year-old black female college graduate. In this case,

Pr[Y_i = 1 | Ms. B, no ban] = Φ[−1.735 + 0.035 × 40 − 0.00047 × 40² + 0.235 − 0.112 − 0.084] = Φ[−1.064] = 0.144
Pr[Y_i = 1 | Ms. B, ban]    = Φ[−1.735 + 0.035 × 40 − 0.00047 × 40² + 0.235 − 0.112 − 0.084 − 0.159] = Φ[−1.222] = 0.111

Therefore a workplace ban would reduce Ms. B's probability of smoking by about 0.033 (3.3 percentage points).
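Instead of building the predicted probabilities by hand from scalars, as the program appended below does, the same effects can be obtained with Stata's margins command after the probit. A hedged sketch, assuming smkban is re-entered as a factor variable so that the ban effect is computed as a discrete change (age2 must be set to age squared explicitly because it is a separately generated regressor):

* Sketch: ban effects at Mr. A's and Ms. B's characteristics via margins
probit smoker i.smkban female age age2 hsdrop hsgrad colsome colgrad black hispanic, r
* Mr. A: 20-year-old high-school dropout, all other indicators zero
margins, dydx(smkban) at(age=20 age2=400 hsdrop=1 hsgrad=0 colsome=0 colgrad=0 female=0 black=0 hispanic=0)
* Ms. B: 40-year-old black female college graduate
margins, dydx(smkban) at(age=40 age2=1600 colgrad=1 hsdrop=0 hsgrad=0 colsome=0 female=1 black=1 hispanic=0)

The reported effects should match the hand calculations above (roughly −0.062 for Mr. A and −0.033 for Ms. B) up to rounding.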

* Problem Set #6 ;
#delimit ;
clear;
cap log close;

* Specify the output file ;
log using Problemset6.log, replace;
set more off;

* Read in and summarize the data ;
use Smoking.dta;
describe;
summarize smoker;
summarize smoker if smkban==0;
summarize smoker if smkban==1;

* Estimate the model for question E11.1b ;
reg smoker smkban, r;

* Estimate the model for question E11.1c ;
generate age2 = age^2;
reg smoker smkban female age age2 hsdrop hsgrad colsome colgrad black hispanic, r;
test hsdrop hsgrad colsome colgrad;

* Estimate the model for question E11.2a ;
probit smoker smkban female age age2 hsdrop hsgrad colsome colgrad black hispanic, r;
test hsdrop hsgrad colsome colgrad;

* Predicted smoking probabilities for Mr. A (E11.2d) and Ms. B (E11.2e) ;
scalar A1 = (_b[_cons] + _b[age]*20 + _b[age2]*(20^2) + _b[hsdrop]);
scalar A2 = (_b[_cons] + _b[age]*20 + _b[age2]*(20^2) + _b[hsdrop] + _b[smkban]);
scalar PA1 = normal(A1);
scalar PA2 = normal(A2);
scalar B1 = (_b[_cons] + _b[age]*40 + _b[age2]*(40^2) + _b[colgrad] + _b[female] + _b[black]);
scalar B2 = (_b[_cons] + _b[age]*40 + _b[age2]*(40^2) + _b[colgrad] + _b[female] + _b[black] + _b[smkban]);
scalar PB1 = normal(B1);
scalar PB2 = normal(B2);
scalar list;
log close;
clear;
exit;

Problemset6.log
-------------------------------------------------------------------------------
       log:  C:\Documents and Settings\jaherrig\My Documents\Classes\Economics 371\Stata\Problemset6.log
  log type:  text
 opened on:  18 Nov 2008, 13:03:55

. set more off;

. * Read in and summarize the data ;

. use Smoking.dta;

. describe;

Contains data from Smoking.dta
  obs:        10,000
 vars:            10                          11 Feb 2002 16:44
 size:       140,000 (86.6% of memory free)
-------------------------------------------------------------------------------
              storage  display     value
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
smoker          byte   %8.0g                  =1 if a current smoker
smkban          byte   %9.0g                  =1 if there is a work area smoking bans
age             byte   %9.0g                  age in years
hsdrop          byte   %9.0g                  =1 if hs dropout
hsgrad          byte   %9.0g                  =1 if hs grad
colsome         byte   %9.0g                  =1 if some college
colgrad         byte   %9.0g                  =1 if college grad
black           byte   %9.0g                  =1 if black
hispanic        byte   %9.0g                  =1 if hispanic
female          byte   %9.0g                  =1 if female
-------------------------------------------------------------------------------
Sorted by:

. summarize smoker;

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
      smoker |     10000       .2423    .4284963          0          1

. summarize smoker if smkban==0;

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
      smoker |      3902    .2895951    .4536326          0          1

. summarize smoker if smkban==1;

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
      smoker |      6098    .2120367    .4087842          0          1

. * Estimate the model for question E11.1b ;

. reg smoker smkban, r;

Linear regression                                      Number of obs =   10000
                                                       F(  1,  9998) =   75.06
                                                       Prob > F      =  0.0000
                                                       R-squared     =  0.0078
                                                       Root MSE      =  .42684

------------------------------------------------------------------------------
             |               Robust
      smoker |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      smkban |  -.0775583    .008952    -8.66   0.000    -.0951061   -.0600106
       _cons |   .2895951   .0072619    39.88   0.000     .2753604    .3038298
------------------------------------------------------------------------------

. * Estimate the model for question E11.1c ;

. generate age2 = age^2;

. reg smoker smkban female age age2 hsdrop hsgrad colsome colgrad black
>   hispanic, r;

Linear regression                                      Number of obs =   10000
                                                       F( 10,  9989) =   68.75
                                                       Prob > F      =  0.0000
                                                       R-squared     =  0.0570
                                                       Root MSE      =  .41631

------------------------------------------------------------------------------
             |               Robust
      smoker |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      smkban |  -.0472399   .0089661    -5.27   0.000    -.0648153   -.0296645
      female |  -.0332569   .0085683    -3.88   0.000    -.0500525   -.0164612
         age |   .0096744   .0018954     5.10   0.000      .005959    .0133898
        age2 |  -.0001318   .0000219    -6.02   0.000    -.0001747   -.0000889
      hsdrop |   .3227142   .0194885    16.56   0.000     .2845128    .3609156
      hsgrad |   .2327012   .0125903    18.48   0.000     .2080217    .2573807
     colsome |   .1642968   .0126248    13.01   0.000     .1395495     .189044
     colgrad |   .0447983   .0120438     3.72   0.000       .02119    .0684066
       black |  -.0275658   .0160785    -1.71   0.086    -.0590828    .0039513
    hispanic |  -.1048159   .0139748    -7.50   0.000    -.1322093   -.0774226
       _cons |  -.0141099   .0414228    -0.34   0.733    -.0953069    .0670872
------------------------------------------------------------------------------

. test hsdrop hsgrad colsome colgrad;

 ( 1)  hsdrop = 0
 ( 2)  hsgrad = 0
 ( 3)  colsome = 0
 ( 4)  colgrad = 0

       F(  4,  9989) =  140.09
            Prob > F =   0.0000

. * Estimate the model for question E11.2a ;

. probit smoker smkban female age age2 hsdrop hsgrad colsome colgrad black
>   hispanic, r;

Iteration 0:   log pseudolikelihood = -5537.1662
Iteration 1:   log pseudolikelihood = -5239.2916
Iteration 2:   log pseudolikelihood = -5235.8717
Iteration 3:   log pseudolikelihood = -5235.8679

Probit regression                                 Number of obs   =      10000
                                                  Wald chi2(10)   =     542.94
                                                  Prob > chi2     =     0.0000
Log pseudolikelihood = -5235.8679                 Pseudo R2       =     0.0544

------------------------------------------------------------------------------
             |               Robust
      smoker |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      smkban |    -.15863   .0291099    -5.45   0.000    -.2156843   -.1015757
      female |  -.1117313    .028841    -3.87   0.000    -.1682585    -.055204
         age |   .0345114   .0068839     5.01   0.000     .0210192    .0480035
        age2 |  -.0004675   .0000826    -5.66   0.000    -.0006295   -.0003056
      hsdrop |   1.141611   .0729706    15.64   0.000     .9985909     1.28463
      hsgrad |   .8826711   .0603703    14.62   0.000     .7643475    1.000995
     colsome |   .6771195   .0614445    11.02   0.000     .5566904    .7975486
     colgrad |   .2346842   .0654161     3.59   0.000      .106471    .3628974
       black |  -.0842789   .0534536    -1.58   0.115    -.1890461    .0204883
    hispanic |  -.3382743   .0493523    -6.85   0.000     -.435003   -.2415457
       _cons |  -1.734927   .1519801   -11.42   0.000    -2.032802   -1.437051
------------------------------------------------------------------------------

. test hsdrop hsgrad colsome colgrad;

 ( 1)  hsdrop = 0
 ( 2)  hsgrad = 0
 ( 3)  colsome = 0
 ( 4)  colgrad = 0

           chi2(  4) =  447.34
         Prob > chi2 =   0.0000

. scalar A1 = (_b[_cons] + _b[age]*20 + _b[age2]*(20^2) + _b[hsdrop]);

. scalar A2 = (_b[_cons] + _b[age]*20 + _b[age2]*(20^2) + _b[hsdrop]
>   + _b[smkban]);

. scalar PA1 = normal(A1);

. scalar PA2 = normal(A2);

. scalar B1 = (_b[_cons] + _b[age]*40 + _b[age2]*(40^2) + _b[colgrad]
>   + _b[female] + _b[black]);

. scalar B2 = (_b[_cons] + _b[age]*40 + _b[age2]*(40^2) + _b[colgrad]
>   + _b[female] + _b[black] + _b[smkban]);

. scalar PB1 = normal(B1);

. scalar PB2 = normal(B2);

. scalar list;
       PB2 =  .11076088
       PB1 =  .14369569
        B2 = -1.2224917
        B1 = -1.0638616
       PA2 =  .40178304
       PA1 =  .46410205
        A2 =  -.2487346
        A1 = -.09010459

. log close;
       log:  C:\Documents and Settings\jaherrig\My Documents\Classes\Economics 371\Stata\Problemset6.log
  log type:  text
 closed on:  18 Nov 2008, 13:03:55
-------------------------------------------------------------------------------