
ACE 562 Fall 2005
Lecture 4: Simple Linear Regression Model: Specification and Estimation
by Professor Scott H. Irwin

Required Reading: Griffiths, Hill and Judge. "Simple Regression: Economic and Statistical Model Specification and Estimation," Ch. 5 in Learning and Practicing Econometrics

Notation Warning: I previously distinguished random variables (Y) from their realized sample values (y). Following HGJ, I will no longer do this. The context of the equation should make the distinction clear.

Overview

Previously, we examined one economic variable at a time. We will now focus on two economic variables. The primary objective of economic analysis is to understand the relationship between economic variables.

Key question: How do we use the information contained in samples of economic data to learn about the unknown parameters of economic relationships?

When it is believed that the values of one variable are systematically determined by another variable, simple linear regression can be used to model the relationship. The simple linear regression model is a specification of the process that we believe describes the relationship between the two variables.

Start with two variables, y and x. For each observation, we assume that x_t is generated "outside" the process given by the simple linear regression model. For each level of x, we assume that y is generated by the following simple linear regression model,

y_t = β1 + β2 x_t + e_t

Since we only actually observe one sample of data, our objective is to estimate the parameters (β1 and β2) of the above model.
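As an illustration (not part of the original lecture notes), the following minimal Python sketch simulates this data-generating process. The parameter values β1 = 7.4, β2 = 0.23, and σ = 6.8 are assumed for illustration only, roughly in line with the food expenditure example discussed later in the lecture.

import numpy as np

rng = np.random.default_rng(0)

beta1, beta2, sigma = 7.4, 0.23, 6.8    # assumed illustrative parameter values
T = 40                                  # sample size
x = rng.uniform(30, 110, size=T)        # income levels, generated "outside" the model
e = rng.normal(0.0, sigma, size=T)      # errors drawn iid from N(0, sigma^2)
y = beta1 + beta2 * x + e               # the simple linear regression data-generating process

print(y[:5])                            # a few simulated food expenditure values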


The Problem

What is the relationship between average household expenditure on food and income? To answer this question, we must extend the economic and statistical models considered in Lecture 3, which focused on the food expenditure of households of size 3 with annual income of $25,000. The population of interest is now all households of size 3, regardless of income level.

Why is knowledge of the relationship between food expenditure and income important? Micro-level decisions? Macro-level decisions?

The Economic Model

Define household expenditure on food as y and household income as x. Economic theory shows that y = f(x), and we expect the relationship to be positive. We need to be more precise about the relationship in order to ultimately estimate its parameters (linear vs. non-linear). In practice, we never know the exact form of the relation; we use economic theory and the information in the sample to make a reasonable choice.


For simplicity, let's assume a linear relationship is reasonable,

y = β1 + β2 x

Contrast this model with the one considered earlier, y = β. The linear economic model has two parameters:

β1: the intercept, which shows the level of food expenditure when income is zero
β2: the slope, which shows how much food expenditure changes when income changes

We are often interested in the relationship between y and x in percentage terms, or the elasticity. The point elasticity formula is,

η_y = (dy/dx)(x/y) = β2 (x/y)

where η_y is the elasticity of food expenditure with respect to income.


The Statistical Model

The linear economic model predicts that food expenditure for a given level of income will be the same for all consumers (no scatter of points around the line). Recognize that actual expenditure for a given level of income will not be the same for all consumers,

y_t = β1 + β2 x_t + e_t,   t = 1,...,T

where
y_t is the dependent variable
x_t is the independent, or explanatory, variable
e_t is the error, or disturbance, term
T is the number of consumers in the sample

Note that x_t is treated as fixed, so it is the same for all consumers observed at a given income level.

The motivation for adding the error term is similar to earlier arguments, but more detail is helpful.

Combined effect of other influences: In reality, a large number of independent variables in addition to income affect food expenditure. We assume the other independent variables are unobservable, or we would include them in the economic model.

Approximation error: The linear form of the model may only be an approximation of the true relationship between income and food expenditure.

Random component of human behavior: Knowledge of all variables that influence an individual's food expenditure may not be sufficient to explain that expenditure.

To complete the statistical model, we must specify the assumptions about the error term:

E(e_t) = 0
var(e_t) = E[e_t - E(e_t)]² = E[e_t²] = σ²
The e_t are independent, so that cov(e_t, e_s) = 0 for all t ≠ s
e_t follows a normal distribution

This can be summarized using the following notation,

e_t ~ N(0, σ²),   t = 1,...,T

As we saw in Lecture 3, this is referred to as the iid normality assumption, which is shorthand for identically, independently distributed normal random variables.

Note: the simple linear regression model does not require the assumption that the error term is normally distributed. However, it is typically assumed.

Now, let's explore some of the implications of the statistical model

y_t = β1 + β2 x_t + e_t,   t = 1,...,T

Sometimes the statistical model is referred to as the "data generating process" for y. For a given observation, y_t can be thought of as having two components:

A systematic component, β1 + β2 x_t, that is determined by an economic process
A random component, e_t, that is determined by a probabilistic process

Another way of saying the same thing is that the random variable y_t is simply a linear transformation of the random variable e_t,

y_t = a + b·e_t, where a = β1 + β2 x_t and b = 1

(Note that a is a parameter for each observation because x_t is fixed for that observation, but a takes on different values for different observations.)

We can examine the statistical properties more formally by considering the expected value of food expenditure,

E[y_t] = E[β1 + β2 x_t + e_t]
E[y_t] = E[β1] + E[β2 x_t] + E[e_t]
E[y_t] = β1 + β2 x_t

This shows that the expected value of food expenditure, or "average" expenditure, is a linear function of income. Now, reconsider the original statistical model,

y_t = β1 + β2 x_t + e_t

From above, we can substitute E[y_t] for β1 + β2 x_t as follows,

y_t = E[y_t] + e_t

This allows a new interpretation of the statistical model.

Writing the statistical model as,

y_t = E[y_t] + e_t

allows us to think of observed food expenditure as consisting of two components:

E[y_t]: expected, or mean, food expenditure, which will be the same for all consumers at a given level of income
e_t: a random component that is unique to each consumer

We generated the same interpretation in the earlier constant mean model. Now, the crucial difference is that the mean component is a function, rather than a constant. The mean component varies linearly with the level of income,

E[y_t] = β1 + β2 x_t


Now, consider the variance of food expenditure,

var[y_t] = E[(y_t - E[y_t])²]
var[y_t] = E[(y_t - β1 - β2 x_t)²]
var[y_t] = E[e_t²] = σ²

This result is equivalent to saying that the variance of food expenditure (or the error term) is not related to the level of income.

Next, consider the covariance of food expenditure between two observations y_t and y_s,

cov[y_t, y_s] = E[(y_t - E[y_t])(y_s - E[y_s])]
cov[y_t, y_s] = E[(y_t - β1 - β2 x_t)(y_s - β1 - β2 x_s)]
cov[y_t, y_s] = E[e_t e_s] = 0

Hence, if the errors are independent, as implied by random sampling (the selection of one consumer does not influence whether another will be selected), the covariance is zero.

Assumptions of the Simple Linear Regression Model

SR1. y_t = β1 + β2 x_t + e_t,   t = 1,...,T
SR2. E(e_t) = 0, which is equivalent to E(y_t) = β1 + β2 x_t
SR3. var(e_t) = var(y_t) = σ²
SR4. cov(e_t, e_s) = cov(y_t, y_s) = 0 for t ≠ s
SR5. The variable x_t is not random and must take on at least two different values
SR6. e_t ~ N(0, σ²), which is equivalent to y_t ~ N(β1 + β2 x_t, σ²)
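A small Monte Carlo sketch (my addition, again using the assumed values β1 = 7.4, β2 = 0.23, σ = 6.8) illustrates what these assumptions imply for y_t: at a fixed income level the average of y approaches β1 + β2 x (SR2), its variance approaches σ² regardless of the income level (SR3), and draws for two different observations are uncorrelated (SR4).

import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, sigma = 7.4, 0.23, 6.8      # assumed illustrative parameter values
x_t, x_s = 50.0, 90.0                     # two fixed income levels (SR5: x is not random)
R = 200_000                               # number of Monte Carlo replications

y_t = beta1 + beta2 * x_t + rng.normal(0, sigma, R)
y_s = beta1 + beta2 * x_s + rng.normal(0, sigma, R)

print(y_t.mean(), beta1 + beta2 * x_t)    # SR2: E(y_t) = beta1 + beta2*x_t
print(y_t.var(ddof=1), sigma**2)          # SR3: var(y_t) = sigma^2
print(np.cov(y_t, y_s)[0, 1])             # SR4: cov(y_t, y_s) is close to zero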


Estimating the Parameters for the Statistical Model of the Food Expenditure and Income Relationship

The statistical model "explains" how the sample of household expenditure data is generated. The problem at hand is how to use the sample information on y and x to estimate the unknown parameters β1 and β2.

One approach is to simply draw a line through the scatter of points that seems to "best fit" the data ("eyeball econometrics"). While eyeball analysis may be useful as a starting point, there are several weaknesses to this approach:

It is highly subjective; two researchers looking at the same graph may choose different lines
There is a tendency to ignore outliers
For most researchers, it will work only for one y and one x (two dimensions)


Just as in the constant mean model, we need a rule to systematically estimate β1 and β2 based on the observed sample of data. Notice that for a given level of x, E[y_t] = β1 + β2 x_t, or the "center" of the pdf for y_t. This suggests the "center" of the sample data may yield good estimates of the population parameters β1 and β2. The only difference from the constant mean model is that the "center" varies with the level of income.

The principle of least squared distance can again be used to find the desired estimates: minimize the sum of squares of the vertical distances between the line and the sample observations.

Figure: Minimizing the sum of squared errors (y on the vertical axis, x on the horizontal axis)

To begin the formal derivation, let's restate the statistical model,

y_t = β1 + β2 x_t + e_t

which can be re-written as,

e_t = y_t - β1 - β2 x_t

Then, given the sample observations on y and x, our objective is to minimize the following function,

S(β1, β2) = Σ e_t² = Σ (y_t - β1 - β2 x_t)²

where Σ denotes summation over t = 1,...,T. Since the values for y_t and x_t are known, S is solely a function of the unknown parameters β1 and β2. Expanding the square, we obtain,

S(β1, β2) = Σ (y_t² + β1² + β2² x_t² - 2β1 y_t - 2β2 x_t y_t + 2β1 β2 x_t)

With further re-arranging,

S(β1, β2) = Σ y_t² + T β1² + β2² Σ x_t² - 2β1 Σ y_t - 2β2 Σ x_t y_t + 2β1 β2 Σ x_t

For the sample of 40 households,

T = 40,   Σ x_t = 2792,   Σ y_t = 943.78,   Σ x_t y_t = 69435.0404,   Σ x_t² = 210206.30,   Σ y_t² = 24875.065

and based on these computations the sum of squares relationship is,

S(β1, β2) = 24875.07 + 40β1² + 210206.3β2² - 1887.56β1 - 138870.08β2 + 5584β1β2

This function is a quadratic in terms of the unknown parameters β1 and β2, a "bowl-shaped" function.
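The quadratic above is easy to evaluate directly. The sketch below (my addition) codes S(β1, β2) from the reported sample sums and confirms that a point near the least squares estimates reported later in the lecture (b1 ≈ 7.38, b2 ≈ 0.232) gives a smaller sum of squares than neighboring points, consistent with the bowl shape.

# Sum-of-squares function built from the reported sample sums
T      = 40
sum_x  = 2792.0
sum_y  = 943.78
sum_xy = 69435.04
sum_x2 = 210206.3
sum_y2 = 24875.07

def S(b1, b2):
    return (sum_y2 + T * b1**2 + sum_x2 * b2**2
            - 2 * sum_y * b1 - 2 * sum_xy * b2 + 2 * sum_x * b1 * b2)

print(S(7.38, 0.232))   # near the least squares solution (roughly 1780)
print(S(10.0, 0.20))    # a nearby point gives a larger sum of squares
print(S(0.0, 0.30))     # and so does this one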


The minimum value of the function is found by taking the partial derivatives of S with respect to β1 and β2,

∂S/∂β1 = Σ 2(y_t - β1 - β2 x_t)(-1)
∂S/∂β2 = Σ 2(y_t - β1 - β2 x_t)(-x_t)

The values of β1 and β2 that make the partial derivatives equal zero are the least squares estimators, which are denoted b1 and b2. Substituting and setting each partial derivative equal to zero,

Σ 2(y_t - b1 - b2 x_t)(-1) = 0
Σ 2(y_t - b1 - b2 x_t)(-x_t) = 0

If we multiply both sides of each equation by (1/2), they can be re-written in the following form:

Σ (-y_t + b1 + b2 x_t) = 0   and   Σ (-y_t x_t + b1 x_t + b2 x_t²) = 0

or,

-Σ y_t + T b1 + b2 Σ x_t = 0
-Σ y_t x_t + b1 Σ x_t + b2 Σ x_t² = 0

With a little more re-arranging, we can arrive at the following equations,

T b1 + b2 Σ x_t = Σ y_t
b1 Σ x_t + b2 Σ x_t² = Σ y_t x_t

The previous two equations are known as the normal equations in least squares regression.
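The normal equations are simply a 2×2 linear system in b1 and b2. As a sketch (my addition), the code below solves them numerically with the sample sums used in this lecture; numpy.linalg.solve performs the elimination that the next pages carry out by hand.

import numpy as np

T, sum_x, sum_y = 40, 2792.0, 943.78
sum_x2, sum_xy  = 210206.3, 69435.04

# Normal equations:
#   T*b1     + sum_x*b2  = sum_y
#   sum_x*b1 + sum_x2*b2 = sum_xy
A = np.array([[T,     sum_x ],
              [sum_x, sum_x2]])
c = np.array([sum_y, sum_xy])

b1, b2 = np.linalg.solve(A, c)
print(b1, b2)   # approximately 7.38 and 0.232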

Now, we have two equations in two unknowns, and we can solve for b1 and b2. Multiply the first normal equation by Σ x_t and the second by T,

T b1 Σ x_t + b2 (Σ x_t)² = Σ x_t Σ y_t
T b1 Σ x_t + T b2 Σ x_t² = T Σ y_t x_t

Now subtract the first of the above two equations from the second,

[T Σ x_t² - (Σ x_t)²] b2 = T Σ y_t x_t - Σ x_t Σ y_t

or,

b2 = [T Σ y_t x_t - Σ x_t Σ y_t] / [T Σ x_t² - (Σ x_t)²]

which is the least squares estimator for the slope.

Now, having found the least squares estimator of the slope, b2, let's solve for b1. The first normal equation is,

T b1 + b2 Σ x_t = Σ y_t

Simply divide this equation by T,

b1 + b2 (Σ x_t)/T = (Σ y_t)/T

or,

b1 + b2 x̄ = ȳ
b1 = ȳ - b2 x̄

To summarize, the least squares estimators for the intercept and slope of the regression line are,

b1 = ȳ - b2 x̄
b2 = [T Σ y_t x_t - Σ x_t Σ y_t] / [T Σ x_t² - (Σ x_t)²]

You may see the formula for b2 stated in several different forms. The derivation for one widely used formula can be found by substituting the formula for b1 into the second normal equation as follows,

(ȳ - b2 x̄) Σ x_t + b2 Σ x_t² = Σ y_t x_t

Expanding and re-arranging,

ȳ Σ x_t + b2 (Σ x_t² - x̄ Σ x_t) = Σ y_t x_t

Or,

b2 = [Σ y_t x_t - ȳ Σ x_t] / [Σ x_t² - x̄ Σ x_t]

Recognizing that Σ x_t = T x̄, we can then write the following solution for b2, which is known as the computational formula,

b2 = [Σ y_t x_t - T x̄ ȳ] / [Σ x_t² - T x̄²]

Another version of the formula is,

b2 = Σ (x_t - x̄)(y_t - ȳ) / Σ (x_t - x̄)²

which is equivalent to,

b2 = [Σ (x_t - x̄)(y_t - ȳ)/(T-1)] / [Σ (x_t - x̄)²/(T-1)] = σ̂_xy / σ̂_x² = (sample covariance of x and y) / (sample variance of x)

This is often a helpful interpretation of the least squares slope estimator. A good exercise is to derive this form of the formula from the first one presented.
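To see that these formulas really are algebraically equivalent, the sketch below (my addition, using simulated data with assumed parameter values) computes b2 with three of the forms given above; all three agree up to rounding error.

import numpy as np

rng = np.random.default_rng(2)
T = 40
x = rng.uniform(30, 110, size=T)
y = 7.4 + 0.23 * x + rng.normal(0, 6.8, size=T)   # assumed illustrative data-generating process

xbar, ybar = x.mean(), y.mean()

# 1. Cross-product form from the normal equations
b2_a = (T * np.sum(x * y) - np.sum(x) * np.sum(y)) / (T * np.sum(x**2) - np.sum(x)**2)

# 2. Deviations-from-the-mean form
b2_b = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar)**2)

# 3. Sample covariance of x and y over sample variance of x
b2_c = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print(b2_a, b2_b, b2_c)   # identical up to floating point rounding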

Finally, the formula is often stated in deviations-from-the-mean form. Let,

x_t* = x_t - x̄   and   y_t* = y_t - ȳ

then,

b2 = Σ y_t* x_t* / Σ x_t*²

Important Points

All of the different formulas for the least squares estimator b2 are equivalent. Assuming arithmetic accuracy, all formulas will yield exactly the same numerical values for a given sample of x and y.

Three Notable Properties of LS Estimates

1. The sum of the estimated errors always equals zero

First, note that the estimated error for each observation is simply the actual observation on y minus the value projected by the estimated regression line, or

ê_t = y_t - b1 - b2 x_t

This condition is "enforced" by the first normal equation,

Σ (y_t - b1 - b2 x_t)(-1) = 0   or   Σ ê_t = 0

2. The estimated regression line must pass through the sample means of x and y (centroid)

This is shown by first noting that b1 = ȳ - b2 x̄, which can be re-written as ȳ = b1 + b2 x̄.

3. Zero correlation between the estimated errors and x, the explanatory variable

This condition is "enforced" by the second normal equation,

Σ (y_t - b1 - b2 x_t)(-x_t) = 0   or   Σ ê_t x_t = 0

There is no tendency for the estimated errors for observations above (below) the mean of x to be positive (negative), and vice versa.
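These three properties can be checked numerically. A short sketch (my addition, again on simulated data with assumed parameter values) fits the line by least squares and confirms that the residuals sum to zero, the line passes through (x̄, ȳ), and the residuals are uncorrelated with x.

import numpy as np

rng = np.random.default_rng(3)
T = 40
x = rng.uniform(30, 110, size=T)
y = 7.4 + 0.23 * x + rng.normal(0, 6.8, size=T)    # assumed illustrative data-generating process

b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b1 = y.mean() - b2 * x.mean()
ehat = y - b1 - b2 * x                             # estimated errors (residuals)

print(ehat.sum())                                  # property 1: essentially zero
print(y.mean(), b1 + b2 * x.mean())                # property 2: line passes through the centroid
print(np.sum(ehat * x))                            # property 3: residuals orthogonal to x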

Estimates for the Household Expenditure Function

Based on the data from 40 randomly selected households, we can compute estimates of β1 and β2 as follows,

b2 = [T Σ y_t x_t - Σ x_t Σ y_t] / [T Σ x_t² - (Σ x_t)²] = [(40)(69435.04) - (2792)(943.78)] / [(40)(210206.3) - (2792)²] = 0.2323

b1 = ȳ - b2 x̄ = 23.5945 - (0.2323)(69.8) = 7.3832

It is useful to report the estimates in terms of the estimated relationship between y and x,

ŷ_t = 7.3832 + 0.2323 x_t

where ŷ_t is the estimate of the expected (mean) food expenditure for a given level of income. Sometimes ŷ_t is called the "fitted value" of y_t; ŷ_t is a point on the LS line for a given x_t.
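The arithmetic on this page is easy to reproduce. The sketch below (my addition) plugs the reported sums into the least squares formulas and then evaluates the fitted line ŷ = b1 + b2 x at a few income levels chosen only for illustration.

T, sum_x, sum_y = 40, 2792.0, 943.78
sum_x2, sum_xy  = 210206.3, 69435.04
xbar, ybar = sum_x / T, sum_y / T                 # 69.80 and 23.5945

b2 = (T * sum_xy - sum_x * sum_y) / (T * sum_x2 - sum_x**2)
b1 = ybar - b2 * xbar
print(b1, b2)                                     # approximately 7.383 and 0.2323

for income in (50.0, 69.8, 100.0):                # illustrative income levels
    print(income, b1 + b2 * income)               # fitted (mean) food expenditure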


Sample Regression Output from Excel

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.5631
R Square             0.3171
Adjusted R Square    0.2991
Standard Error       6.8449
Observations         40

ANOVA
              df        SS        MS        F     Significance F
Regression     1    826.64    826.64    17.64    0.0002
Residual      38   1780.41     46.85
Total         39   2607.05

              Coefficients   Standard Error   t Stat   P-value   Lower 95%   Upper 95%
Intercept           7.3832           4.0084   1.8420    0.0732     -0.7313     15.4977
X Variable          0.2323           0.0553   4.2004    0.0002      0.1204      0.3443
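Comparable output can be produced outside Excel. The sketch below (my addition) uses the statsmodels package; because the 40 raw observations are not listed in these notes, the arrays x and y are filled with simulated stand-in values rather than the textbook data, so the printed numbers will only roughly resemble the table above.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(30, 110, size=40)                   # stand-in for the weekly income data
y = 7.38 + 0.2323 * x + rng.normal(0, 6.84, 40)     # stand-in for the food expenditure data

X = sm.add_constant(x)                 # add the intercept column
results = sm.OLS(y, X).fit()           # ordinary least squares
print(results.summary())               # coefficients, std. errors, t stats, R^2, ANOVA table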

Interpreting the Least Squares Estimates

ŷ_t = 7.3832 + 0.2323 x_t

Intercept: b1 = 7.38 literally is an estimate of the expected (average) level of food expenditure per week when income is zero. Caution needs to be exercised when interpreting intercept estimates: there are usually few, if any, observations around zero for the independent variable, which suggests the estimate may not be very reliable in this range of the independent variable.

Slope: b2 = 0.2323 is an estimate of the expected (average) change in food expenditure per week when income increases by one unit. In this case, the slope estimate indicates food expenditure per week is expected to increase by $0.23 when income per week increases by $1.

It is important to emphasize that the slope indicates the expected change on average when income increases one unit, not the actual change. Remember that the actual change will in all likelihood differ from the expected change due to the error term. We can think of the expected change as being on the line, while the actual change is off the line.

The income elasticity of food expenditure also may be of interest. Recall the formula,

η_y = (dy/dx)(x/y) = β2 (x/y)

Replacing β2 with b2, we can estimate the income elasticity as,

η̂_y = b2 (x/y)

This still leaves the question of what levels of x and y to use in estimating the elasticity.

It is conventional to use the sample means, since that is a representative point on the regression line,

η̂_y = b2 (x̄/ȳ)

In this case, the estimated income elasticity is,

η̂_y = 0.2323 × (69.800/23.595) = 0.687

The elasticity η̂_y = 0.687 is an estimate of the expected (average) percent change in food expenditure per week when income increases by one percent. In this case, the elasticity estimate indicates food expenditure per week is expected to increase by 0.687 percent when income per week increases by 1 percent. Remember that the income elasticity estimate will vary for different points on the estimated regression line.
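The elasticity arithmetic is reproduced in the sketch below (my addition); it also evaluates the elasticity at a few other points on the fitted line, which illustrates the caveat in the last sentence. The specific income levels are chosen only for illustration.

b1, b2 = 7.3832, 0.2323
xbar, ybar = 69.80, 23.5945

print(b2 * xbar / ybar)                  # elasticity at the means, approximately 0.687

for x0 in (40.0, 69.8, 100.0):           # illustrative points on the fitted line
    y0 = b1 + b2 * x0
    print(x0, b2 * x0 / y0)              # the elasticity varies along the line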

Linearity and Other Functional Forms

In the simple regression example considered here, only a "straight-line" relationship between food expenditure and income was considered. A non-linear relationship may well be more appropriate. The simple regression model is more flexible than it appears: the x and y variables can be transformations, such as logarithms, squares, cubes, or reciprocals. This raises the question of what we mean when we state that the simple regression model is linear. There are two definitions of linearity.

Linearity in variables: only a power of one on x or y,

y_t = β1 + β2 x_t + e_t          Yes!
y_t = β1 + β2 x_t² + e_t         No!
y_t = β1 + β2 x_t⁴ + e_t         No!
ln(y_t) = β1 + β2 ln(x_t) + e_t  No!

Linearity in parameters: only a power of one on β1 or β2, but higher powers and/or transformations are allowed on x and/or y,

y_t = β1 + β2 x_t + e_t          Yes!
y_t = β1 + β2² x_t + e_t         No!
y_t = β1 + β2³ x_t + e_t         No!
ln(y_t) = β1 + β2 ln(x_t) + e_t  Yes!
y_t = β1 + ln(β2) x_t + e_t      No!
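Because "linear in the parameters" is what matters for least squares, the same formulas apply after transforming the variables. A minimal sketch (my addition, with an assumed constant-elasticity data-generating process) estimates ln(y_t) = β1 + β2 ln(x_t) + e_t by regressing ln(y) on ln(x):

import numpy as np

rng = np.random.default_rng(5)
T = 40
x = rng.uniform(30, 110, size=T)
y = np.exp(1.5 + 0.7 * np.log(x) + rng.normal(0, 0.2, size=T))   # assumed log-log DGP

ly, lx = np.log(y), np.log(x)
b2 = np.sum((lx - lx.mean()) * (ly - ly.mean())) / np.sum((lx - lx.mean())**2)
b1 = ly.mean() - b2 * lx.mean()
print(b1, b2)   # roughly 1.5 and 0.7 (sampling error aside); b2 is the constant elasticity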

The definition of linearity used in the simple linear regression model is linear in parameters. This allows considerable flexibility in the specification of the functional form of the model. We will study alternative functional forms that are linear in parameters next semester.