Types of economic data

Types of economic data: time series data, cross-sectional data, and panel data.


The distinction between qualitative and quantitative data. The previous data sets can be used to illustrate an important distinction between types of data. The microeconomist's data on sales will have a number corresponding to each firm surveyed (e.g. last month's sales in the first company surveyed were 20,000). This is referred to as quantitative data. The labor economist, when asking whether or not each surveyed employee belongs to a union, receives either a Yes or a No answer. These answers are referred to as qualitative data. Such data arise often in economics when choices are involved (e.g. the choice to buy or not buy a product, to take public transport or a private car, to join or not to join a club).

Economists will usually convert these qualitative answers into numeric data. For instance, the labor economist might set Yes = 1 and No = 0. Hence, Y1 = 1 means that the first individual surveyed does belong to a union, and Y2 = 0 means that the second individual does not. When variables can take on only the values 0 or 1, they are referred to as dummy (or binary) variables.
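As a minimal sketch of this conversion (illustrative only; the survey responses and the pandas approach are my own, not from the slides):

```python
import pandas as pd

# Hypothetical survey responses to "Do you belong to a union?"
responses = pd.Series(["Yes", "No", "Yes", "No", "No"])

# Convert the qualitative answers into a 0/1 dummy variable: Yes = 1, No = 0
union_dummy = (responses == "Yes").astype(int)
print(union_dummy.tolist())  # [1, 0, 1, 0, 0]
```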

What is Descriptive Research? It involves gathering data that describe events and then organizing, tabulating, depicting, and describing the data. It uses description as a tool to organize data into patterns that emerge during analysis, and often uses visual aids such as graphs and charts to aid the reader.

Descriptive Research takes a "what is" approach: What is the best way to provide access to computer equipment in schools? Do teachers hold favorable attitudes toward using computers in schools? What have been the reactions of school administrators to technological innovations in teaching?

Descriptive Research. We will want to see if a value or sample comes from a known population. That is, if I were to give a new cancer treatment to a group of patients, I would want to know whether their survival rate, for example, differed from the survival rate of those who did not receive the new treatment. What we are testing, then, is whether the sample patients who receive the new treatment come from the population we already know about (cancer patients without the treatment).

Data transformations: levels versus growth rates. For instance, you may take raw time series data on the variables W = total consumption expenditure and X = expenditure on food, and create a new variable Y = the proportion of expenditure devoted to food. Here the transformation would be Y = X/W. The percentage change in real GDP between period t and t + 1 is calculated according to the formula: $\%\text{ change} = \frac{Y_{t+1} - Y_t}{Y_t} \times 100$. The percentage change in real GDP is often referred to as the growth of GDP or the change in GDP.
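A tiny Python sketch of this growth-rate formula (the GDP numbers are hypothetical):

```python
# Growth rate between periods t and t+1: ((Y_{t+1} - Y_t) / Y_t) * 100
gdp = [100.0, 102.0, 105.0, 104.0]          # hypothetical real GDP levels

growth = [(gdp[t + 1] - gdp[t]) / gdp[t] * 100 for t in range(len(gdp) - 1)]
print([round(g, 2) for g in growth])        # [2.0, 2.94, -0.95]
```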

Index Numbers. Suppose a price index for four years exists, and the values are: Y1 = 100, Y2 = 106, Y3 = 109 and Y4 = 111. These numbers can be interpreted as follows. The first year has been selected as a base year and, accordingly, Y1 = 100. The figures for other years are all relative to this base year and allow for a simple calculation of how prices have changed since the base year. For instance, Y2 = 106 means that prices have risen from 100 to 106, a 6% rise since the first year. A price index is very good for measuring changes in prices over time, but should not be used to talk about the level of prices. For instance, it should not be interpreted as an indicator of whether prices are high or low.
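A minimal sketch of reading such an index in Python (the four index values are the ones from the slide; the code itself is illustrative):

```python
index = {1: 100, 2: 106, 3: 109, 4: 111}   # price index values, year 1 = base year

# Since the base year equals 100, the rise in prices since the base year
# is simply the index value minus 100 (e.g. year 2: a 6% rise).
for year, value in index.items():
    print(f"year {year}: {value - 100}% rise since the base year")
```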

Obtaining Data. Most macroeconomic data is collected through a system of national accounts. Microeconomic data is usually collected by surveys of households, employment and businesses, which are often available from the same sources. Two of the more popular ones are Datastream by Thomson Financial (http://www.datastream.com/) and Wharton Research Data Services (http://wrds.wharton.upenn.edu/). With regard to free data, a more limited choice of financial data is available through popular Internet portals such as Yahoo! (http://finance.yahoo.com).

Working with data: graphical methods. It is important for you to summarize your data, and charts and tables are very useful ways of presenting it. There are many different types (e.g. bar chart, pie chart, etc.). With time series data, a chart that shows how a variable evolves over time is often very informative. However, in the case of cross-sectional data, such methods are not appropriate and we must summarize the data in other ways.

Histogram. One convenient way of summarizing this data is through a histogram. To construct a histogram, begin by constructing class intervals or bins that divide the countries into groups based on their GDP per capita. This same information is graphed in a simple fashion in the histogram. Graphing allows for a quick visual summary of the cross-country distribution of GDP per capita.

XY Plots. We often want to understand the relationships between two or more variables. For instance: Are higher education levels and work experience associated with higher wages among workers in a given industry? Are changes in the money supply a reliable indicator of inflation changes? Do differences in capital investment explain why some countries are growing faster than others? Graphical methods can be used to draw out some simple aspects of the relationship between two variables, and XY-plots (also called scatter diagrams) are particularly useful. An XY-plot can be used to give a quick visual impression of the relationship between deforestation and population density.
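A short, illustrative Python sketch of both graphical summaries (the data are simulated stand-ins, and matplotlib is my choice of plotting library, not something the slides prescribe):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
gdp_per_capita = rng.lognormal(mean=9, sigma=1, size=90)   # hypothetical cross-country data
pop_density = rng.uniform(1, 300, size=90)                 # hypothetical population density
deforestation = 0.6 + 0.0008 * pop_density + rng.normal(0, 0.1, size=90)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(gdp_per_capita, bins=10)          # class intervals ("bins") group the countries
ax1.set(title="Histogram", xlabel="GDP per capita")
ax2.scatter(pop_density, deforestation)    # XY-plot of the two variables
ax2.set(title="XY-plot", xlabel="Population density", ylabel="Deforestation")
plt.show()
```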

Mean. The mean is the statistical term for the average. The mathematical formula for the mean is given by $\bar{Y} = \frac{1}{N}\sum_{i=1}^{N} Y_i$, where $N$ is the sample size and $\Sigma$ is the summation operator.

Standard Deviation. A more common measure of dispersion is the standard deviation. (Confusingly, statisticians refer to the square of the standard deviation as the variance.) Its mathematical formula is given by $s = \sqrt{\frac{\sum_{i=1}^{N}(Y_i - \bar{Y})^2}{N-1}}$.
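A quick Python check of these two formulas (the data vector is made up for illustration):

```python
import numpy as np

y = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean = y.sum() / len(y)                             # (1/N) * sum(Y_i)
variance = ((y - mean) ** 2).sum() / (len(y) - 1)   # divisor N - 1, as in the formula
std = np.sqrt(variance)                             # standard deviation = sqrt(variance)
print(mean, std)                                    # agrees with np.std(y, ddof=1)
```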

Correlation. Correlation is an important way of numerically quantifying the relationship between two variables. The correlation between variables X and Y is $r_{xy} = \frac{\sum_{i=1}^{N}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{N}(X_i - \bar{X})^2}\sqrt{\sum_{i=1}^{N}(Y_i - \bar{Y})^2}}$.

Properties of correlation:
1. r always lies between -1 and 1, which may be written as $-1 \le r \le 1$.
2. Positive values of r indicate a positive correlation between X and Y; negative values indicate a negative correlation; r = 0 indicates that X and Y are uncorrelated.
3. Larger positive values of r indicate stronger positive correlation, with r = 1 indicating perfect positive correlation; larger negative values of r indicate stronger negative correlation, with r = -1 indicating perfect negative correlation.
4. The correlation between Y and X is the same as the correlation between X and Y.
5. The correlation between any variable and itself (e.g. the correlation between Y and Y) is 1.
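A minimal sketch computing $r_{xy}$ from the formula above (illustrative data; numpy is my choice of tool):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])

# Numerator: sum of cross-products of deviations from the means
num = ((x - x.mean()) * (y - y.mean())).sum()
# Denominator: square roots of the two sums of squared deviations
den = np.sqrt(((x - x.mean()) ** 2).sum() * ((y - y.mean()) ** 2).sum())
print(num / den)        # same value as np.corrcoef(x, y)[0, 1]
```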

One of the most important things in empirical work is knowing how to interpret your results. The house example illustrates this difficulty well. It is not enough just to report a number for a correlation (e.g. $r_{xy} = 0.54$); interpretation is important too. Interpretation requires a good intuitive knowledge of what a correlation is, in addition to a lot of common sense about the economic phenomenon under study.

Correlation between several variables. For instance, house prices depend on the lot size, number of bedrooms, number of bathrooms and many other characteristics of the house. If we have three variables, X, Y and Z, then there are three possible correlations (i.e. $r_{XY}$, $r_{XZ}$ and $r_{YZ}$). However, if we add a fourth variable, W, the number increases to six (i.e. $r_{XY}$, $r_{XZ}$, $r_{XW}$, $r_{YZ}$, $r_{YW}$ and $r_{ZW}$). In general, for M different variables there will be $M(M-1)/2$ possible correlations. A convenient way of organizing all these correlations is to construct a matrix or table.
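An illustrative sketch of such a correlation matrix in Python with pandas (the house data below are invented for the example):

```python
import pandas as pd

# Hypothetical house data: with M = 4 variables there are
# M * (M - 1) / 2 = 6 distinct pairwise correlations.
df = pd.DataFrame({
    "price":     [200, 250, 300, 320, 400],
    "lot_size":  [50, 60, 80, 75, 100],
    "bedrooms":  [2, 3, 3, 4, 4],
    "bathrooms": [1, 2, 2, 2, 3],
})
print(df.corr())   # the correlation matrix collects all pairwise correlations
```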

An Introduction to Simple Regression. Regression is the most important tool applied economists use to understand the relationship between two or more variables. It is particularly useful for the common case where there are many variables (e.g. unemployment and interest rates, the money supply, exchange rates, inflation, etc.) and the interactions between them are complex. We can express the linear relationship between Y and X mathematically as $Y = \alpha + \beta X$, where $\alpha$ is the intercept of the line and $\beta$ is the slope. This equation is referred to as the regression line. If we actually knew what $\alpha$ and $\beta$ were, then we would know what the relationship between Y and X was.


Best Fitting Line. The linear regression model will always be only an approximation of the true relationship: Y will also depend on many other factors for which it is impossible to collect data. The omission of these variables from the regression model means that the model makes an error; we call all such influences errors. The regression model can now be written as $Y = \alpha + \beta X + \varepsilon$. In the regression model, Y is referred to as the dependent variable, X the explanatory variable, and $\alpha$ and $\beta$ the coefficients. It is common to implicitly assume that the explanatory variable causes Y, and the coefficient $\beta$ measures the influence of X on Y.

OLS Estimation. The OLS estimator is the most popular estimator of $\beta$. The OLS estimator is chosen to minimize $SSE = \sum_{i=1}^{N}(Y_i - \beta X_i)^2$, which yields the OLS estimator $\hat{\beta} = \frac{\sum_{i=1}^{N} X_i Y_i}{\sum_{i=1}^{N} X_i^2}$ with $\mathrm{var}(\hat{\beta}) = \frac{\sigma^2}{\sum_{i=1}^{N} X_i^2}$.
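A minimal numeric sketch of these formulas, assuming (as the reconstructed formulas suggest) a model without an intercept; the data are invented:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# OLS for Y = beta * X, chosen to minimize SSE = sum((Y_i - beta * X_i)^2):
beta_hat = (x * y).sum() / (x ** 2).sum()   # beta_hat = sum(X_i Y_i) / sum(X_i^2)
residuals = y - beta_hat * x
sse = (residuals ** 2).sum()
print(beta_hat, sse)                        # beta_hat is about 1.99 here
```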

Interpreting OLS estimates. We obtained OLS estimates for the intercept and slope of the regression line. The question now arises: how should we interpret these estimates? The intercept in the regression model, $\alpha$, usually has little economic interpretation, so we will not discuss it here. However, $\beta$ is typically quite important. This coefficient is the slope of the best fitting straight line through the XY-plot, i.e. $dY/dX$. Derivatives measure how much Y changes when X is changed by a small (marginal) amount. Hence, $\beta$ can be interpreted as the marginal effect of X on Y and is a measure of how much X influences Y. To be more precise, we can interpret $\beta$ as a measure of how much Y tends to change when X is changed by one unit.

Fitted values and $R^2$: measuring the fit of a regression model. It is possible that the best fit is not a very good fit at all, so it is desirable to have some measure of fit (or a measure of how good the best fitting line is). The most common measure of fit is referred to as the $R^2$; it relates closely to the correlation between Y and X. One way to express the residual is in terms of the difference between the actual and fitted values of Y. That is: $u_i = Y_i - \hat{Y}_i$.

Recall that variance is the measure of dispersion or variability of the data. Here we define a closely related concept, the total sum of squares or TSS: $TSS = \sum_{i=1}^{N}(Y_i - \bar{Y})^2$. The regression model seeks to explain the variability in Y through the explanatory variable X. It can be shown that the total variability in Y can be broken into two parts as TSS = RSS + SSR, where RSS is the regression sum of squares, a measure of the explanation provided by the regression model. RSS is given by $RSS = \sum_{i=1}^{N}(\hat{Y}_i - \bar{Y})^2$.

Remembering that SSR is the sum of squared residuals and that a good fitting regression model will make the SSR very small, we can combine the equations above to yield a measure of fit: $R^2 = 1 - \frac{SSR}{TSS}$ or, equivalently, $R^2 = \frac{RSS}{TSS}$. Intuitively, the $R^2$ measures the proportion of the total variance of Y that can be explained by X. Note that TSS, RSS and SSR are all sums of squared numbers and, hence, are all non-negative. This implies $TSS \ge RSS$ and $TSS \ge SSR$. Using these facts, it can be seen that $0 \le R^2 \le 1$.

Further intuition about this measure of fit can be obtained by noting that small values of SSR indicate that the regression model is fitting well. A regression line which fits all the data points perfectly in the XY-plot will have no errors and hence SSR = 0 and $R^2 = 1$. Looking at the formula above, you can see that values of $R^2$ near 1 imply a good fit and that $R^2 = 1$ implies a perfect fit. In sum, high values of $R^2$ imply a good fit and low values a bad fit.
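A small sketch computing $R^2$ from its definition (the actual and fitted values below are hypothetical):

```python
import numpy as np

y = np.array([2.1, 3.9, 6.2, 7.8])
y_hat = np.array([2.0, 4.0, 6.0, 8.0])      # hypothetical fitted values

tss = ((y - y.mean()) ** 2).sum()           # total sum of squares
ssr = ((y - y_hat) ** 2).sum()              # sum of squared residuals
print(1 - ssr / tss)                        # R^2 = 1 - SSR / TSS, close to 1 here
```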


Hypothesis Testing. The other major activity of the empirical researcher is hypothesis testing. An example of a hypothesis that a researcher may want to test is $\beta = 0$. If the latter hypothesis is true, then the explanatory variable has no explanatory power. Hypothesis testing procedures allow us to carry out such tests.

Hypothesis Testing. Purpose: make inferences about a population parameter by analyzing differences between observed sample statistics and the results one expects to obtain if some underlying assumption is true. Null hypothesis $H_0: \mu = 12.5$ Kbytes; alternative hypothesis $H_1: \mu < 12.5$ Kbytes. The test statistic is $Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$. If the null hypothesis is rejected, then the alternative hypothesis is accepted.

Hypothesis Testing. A sample of 50 files from a file system is selected. The sample mean is 12.3 Kbytes. The standard deviation is known to be 0.5 Kbytes. $H_0: \mu = 12.5$ Kbytes, $H_1: \mu < 12.5$ Kbytes. Confidence: 0.95. Critical value = NORMINV(0.05, 0, 1) = -1.645, so the region of non-rejection is $Z \ge -1.645$. Here $Z = (12.3 - 12.5)/(0.5/\sqrt{50}) \approx -2.83$, which falls below the critical value, so $H_0$ is rejected.
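A short Python check of this calculation (scipy's norm.ppf plays the role of NORMINV; this sketch simply reproduces the arithmetic above):

```python
from math import sqrt
from scipy.stats import norm

n, xbar, sigma, mu0 = 50, 12.3, 0.5, 12.5

z = (xbar - mu0) / (sigma / sqrt(n))      # test statistic, about -2.83
z_crit = norm.ppf(0.05)                   # lower-tail critical value, about -1.645
print(z, z_crit, z < z_crit)              # z falls in the rejection region
```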

Steps in Hypothesis Testing. 1. State the null and alternative hypotheses. 2. Choose the level of significance $\alpha$. 3. Choose the sample size n. Larger samples allow us to detect even small differences between sample statistics and true population parameters; for a given $\alpha$, increasing n decreases $\beta$ (the probability of a Type II error). 4. Choose the appropriate statistical technique and test statistic to use (Z or t).

5. Determine the critical values that divide the regions of rejection and non-rejection. 6. Collect the data and compute the sample mean and the appropriate test statistic (e.g., Z). 7. If the test statistic falls in the non-rejection region, $H_0$ cannot be rejected; otherwise $H_0$ is rejected.

Z test versus t test. 1. The Z-test is a statistical hypothesis test that follows a normal distribution, while the t-test follows a Student's t-distribution. 2. A t-test is appropriate when you are handling small samples (n < 30), while a Z-test is appropriate when you are handling moderate to large samples (n > 30). 3. The t-test is more adaptable than the Z-test, since the Z-test will often require certain conditions to be reliable; additionally, the t-test has many methods that will suit any need. 4. t-tests are more commonly used than Z-tests. 5. Z-tests are preferred to t-tests when standard deviations are known.

One tail versus two tail. Previously we were only looking at one tail of the distribution at a time (either on the positive side above the mean or the negative side below the mean). With two-tail tests we look for unlikely events on both sides of the mean (above and below) at the same time. So we have learned four critical values:

α = .05: 1.645 (one-tail); ±1.96 (two-tail)
α = .01: 2.33 (one-tail); ±2.58 (two-tail)

Notice that you have two critical values for a two-tail test, both positive and negative. You will have only one critical value for a one-tail test (which could be negative).
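These critical values can be recovered from the standard normal distribution; a quick sketch (scipy is my choice of tool, not the slides'):

```python
from scipy.stats import norm

# One-tail critical values put all of alpha in one tail;
# two-tail critical values split alpha between the two tails.
for alpha in (0.05, 0.01):
    print(alpha, norm.ppf(1 - alpha), norm.ppf(1 - alpha / 2))
# 0.05 -> 1.645 (one-tail) and 1.960 (two-tail)
# 0.01 -> 2.326 (one-tail) and 2.576 (two-tail)
```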

Which factors affect the accuracy of the estimate $\hat{\beta}$? 1. Having more data points improves the accuracy of estimation. 2. Having smaller errors improves the accuracy of estimation; equivalently, if the SSR is small or the variance of the errors is small, the accuracy of the estimation will be improved. 3. Having a larger spread of values (i.e. a larger variance) of the explanatory variable X improves the accuracy of estimation.

Calculating a confidence interval for β. If the confidence interval is small, it indicates accuracy; conversely, a large confidence interval indicates great uncertainty over β's true value. The confidence interval for β is $[\hat{\beta} - t_b s_b, \ \hat{\beta} + t_b s_b]$, where $s_b$ is the standard deviation of $\hat{\beta}$. Large values of $s_b$ will imply large uncertainty.

Standard Deviation. The standard deviation of $\hat{\beta}$ is $s_b = \sqrt{\frac{SSR}{(N-2)\sum(X_i - \bar{X})^2}}$, which measures the variability or uncertainty in $\hat{\beta}$. 1. $s_b$, and hence the width of the confidence interval, varies directly with SSR (i.e. more variable errors/residuals imply less accurate estimation). 2. $s_b$, and hence the width of the confidence interval, varies inversely with N (i.e. more data points imply more accurate estimation). 3. $s_b$, and hence the width of the confidence interval, varies inversely with $\sum(X_i - \bar{X})^2$ (i.e. more variability in X implies more accurate estimation).

The third number in the formula for the confidence interval is $t_b$, a value taken from statistical tables for the Student-t distribution. 1. $t_b$ decreases with N (i.e. the more data points you have, the smaller the confidence interval will be). 2. $t_b$ increases with the level of confidence you choose.
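A minimal sketch of assembling the interval (the sample size n is a hypothetical value I supply for illustration; the estimate and standard error are illustrative too):

```python
from scipy.stats import t

beta_hat, s_b, n = 0.000842, 0.00011, 70   # estimate, standard error, hypothetical n

t_b = t.ppf(0.975, df=n - 2)               # t_b for a 95% interval, N - 2 degrees of freedom
lower, upper = beta_hat - t_b * s_b, beta_hat + t_b * s_b
print(lower, upper)                        # roughly [0.00062, 0.00106]
```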

Testing whether β = 0. We say that this is a test of $H_0: \beta = 0$ against $H_1: \beta \neq 0$. Note that if β = 0 then X does not appear in the regression model; that is, the explanatory variable fails to provide any explanatory power whatsoever for the dependent variable. Hypothesis tests come with various levels of significance. If you use the confidence interval approach to hypothesis testing, then the level of significance is 100% minus the confidence level. That is, if a 95% confidence interval does not include zero, then you may say "I reject the hypothesis that β = 0 at the 5% level of significance" (i.e. 100% - 95% = 5%). If you had used a 90% confidence interval (and found it did not contain zero), then you would say: "I reject the hypothesis that β = 0 at the 10% level of significance."

Test Statistics. The alternative way of carrying out hypothesis testing is to calculate a test statistic. In the case of testing whether β = 0, the test statistic is known as a t-statistic (or t-ratio or t-stat). It is calculated as $t = \frac{\hat{\beta}}{s_b}$. Large values of t (large in an absolute sense) indicate $\beta \neq 0$, while small values indicate β = 0. In a formal statistical sense, the test statistic is large or small relative to a critical value taken from statistical tables of the Student-t distribution.

P-Value. 1. If the P-value is less than 5% (usually written as 0.05 by the computer), then t is large and we conclude that $\beta \neq 0$. 2. If the P-value is greater than 5%, then t is small and we conclude that β = 0. (This assumes we choose the 5% level of significance.)

As a practical summary, note that regression techniques provide the following information about β: 1. $\hat{\beta}$, the OLS point estimate, or best guess, of what β is. 2. The 95% confidence interval, which gives an interval where we are 95% confident β will lie. 3. The standard deviation (or standard error) of $\hat{\beta}$, $s_b$, which is a measure of how accurate $\hat{\beta}$ is. $s_b$ is also a key component in the mathematical formula for the confidence interval and the test statistic for testing β = 0. 4. The test statistic, t, for testing β = 0. 5. The P-value for testing β = 0. These five components ($\hat{\beta}$, confidence interval, $s_b$, t and the P-value) are usually printed out in a row in computer packages.

How are regression results presented and interpreted? Table 5.2: The regression of deforestation on population density.

             Coefficient   St. Err.   t-stat   P-value    Lower 95%   Upper 95%
Intercept    0.599         0.112      5.341    1.15E-06   0.375       0.824
X variable   0.0008        0.00011    7.227    5.5E-10    0.00061     0.00107

$\hat{\beta} = 0.000842$, indicating that increasing population density by one person per hectare is associated with an increase in deforestation rates of 0.000842%. The columns labeled Lower 95% and Upper 95% give the lower and upper bounds of the 95% confidence interval. For this data set, and as discussed previously, the 95% confidence interval for β is [0.00061, 0.001075]. Thus, we are 95% confident that the marginal effect of population density on deforestation is between 0.00061% and 0.001075%.

The hypothesis test of β = 0 can be done in two equivalent ways. First, we can find a 95% confidence interval for β of [0.00061, 0.001075]. Since this interval does not contain 0, we can reject the hypothesis that β = 0 at the 5% level of significance. In other words, there is strong evidence for the hypothesis that $\beta \neq 0$ and that population density has significant power in explaining deforestation. Second, we can look at the P-value, which is $5.5 \times 10^{-10}$ and much less than 0.05. This means that we can reject the hypothesis that population density has no effect on deforestation at the 5% level of significance. In other words, we have strong evidence that population density does indeed affect deforestation rates.

Hypothesis testing involving $R^2$: the F-statistic. $R^2$ is a measure of how well the regression line fits the data or, equivalently, of the proportion of the variability in Y that can be explained by X. If $R^2 = 0$ then X does not have any explanatory power for Y. The test of the hypothesis $R^2 = 0$ can therefore be interpreted as a test of whether the regression explains anything at all. The test statistic is $F = \frac{(N-2)R^2}{1-R^2}$, and we use the P-value to decide what is large and what is small (i.e. whether $R^2$ is significantly different from zero or not).
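A one-line check of this formula (sample size and $R^2$ are hypothetical):

```python
# F = (N - 2) * R^2 / (1 - R^2) for the simple regression
n, r2 = 70, 0.44          # hypothetical sample size and R-squared
f_stat = (n - 2) * r2 / (1 - r2)
print(f_stat)             # about 53.4
```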

F-Test: usage of the F-test. We use the F-test to evaluate hypotheses that involve multiple parameters. Let's use a simple setup: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon_i$.

F-Test. For example, if we wanted to know how economic policy affects economic growth, we might include several policy instruments (balanced budgets, inflation, trade openness, etc.) and see if all of those policies are jointly significant. After all, our theories rarely tell us which variable is important, but rather point to a broad category of variables.

F-statistic. The test is performed according to the following strategy: 1. If Significance F is less than 5% (i.e. 0.05), we conclude $R^2 \neq 0$. 2. If Significance F is greater than 5% (i.e. 0.05), we conclude $R^2 = 0$.

Assumptions. OLS is unbiased under the classical assumptions: $E(\hat{\beta}) = \beta$. The variance of the OLS estimator under the classical assumptions is $\mathrm{var}(\hat{\beta}) = \frac{\sigma^2}{\sum X_i^2}$. The distribution of the OLS estimator under the classical assumptions is $\hat{\beta} \sim N\!\left(\beta, \frac{\sigma^2}{\sum X_i^2}\right)$.

The Gauss-Markov Theorem. If the classical assumptions hold, then OLS is the best linear unbiased estimator, where "best" means minimum variance and "linear" means linear in Y. Short form: "OLS is BLUE." If heteroskedasticity is present, you should use either the Generalized Least Squares (GLS) estimator or a heteroskedasticity-consistent estimator (HCE).

Heteroscedasticity. Heteroskedasticity occurs when the error variance differs across observations. Example: errors as mis-pricing of houses. If a house sells for much more (or less) than comparable houses, then it will have a big error. Suppose small houses are all very similar; then large pricing errors are unlikely. Suppose big houses can be very different from one another; then large pricing errors are possible. If this story is true, then we expect the errors for big houses to have a larger variance than those for small houses. Statistically: heteroskedasticity is present and is associated with the size of the house.

Assumption 2 is replaced by $\mathrm{var}(\varepsilon_i) = \sigma^2 w_i^2$ for i = 1, ..., N. Note: $w_i$ varies with i, so the error variance differs across observations. GLS proceeds by transforming the model and then using OLS on the transformed model; the transformation comes down to dividing all observations by $w_i$. Problem with GLS: in practice, it can be hard to figure out what $w_i$ is. To do GLS you need to answer questions like: is $w_i$ proportional to an explanatory variable? If yes, then which one? Thus HCEs are popular (available in Gretl). They use the OLS estimates but correct their variance $\mathrm{var}(\hat{\beta})$ so as to get correct confidence intervals, hypothesis tests, etc. Advantages: HCEs are easy to calculate and you do not need to know the form that the heteroskedasticity takes. Disadvantages: HCEs are not as efficient as the GLS estimator (i.e. they will have a larger variance).

If heteroskedasticity is NOT present, then OLS is fine (it is BLUE). But if it is present, you should use GLS or an HCE. Thus, it is important to know if heteroskedasticity is present. There are many tests, such as the White test and the Breusch-Pagan test. These can be done in Gretl.
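As an illustrative sketch of these tests and of an HCE correction, here in Python with statsmodels rather than Gretl (the simulated data are my own):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 1 + 2 * x + rng.normal(0, x)          # error variance grows with x: heteroskedastic

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Breusch-Pagan and White tests: small p-values reject homoskedasticity
print(het_breuschpagan(ols.resid, X)[1])  # LM test p-value
print(het_white(ols.resid, X)[1])         # LM test p-value

# Heteroskedasticity-consistent (robust) standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")
print(robust.bse)
```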

Consequences of Heteroscedasticity. The OLS estimator is still unbiased, but the standard errors are not correctly estimated, so significance tests are invalid. Predictions are inefficient, and the coefficient of determination is not valid. It is important to test for any heteroscedasticity and remedy it if present.

Detecting Heteroscedasticity (II). There are a number of formal tests. Koenker test: assumes a linear form of heteroscedasticity; regress the square of the estimated error term on the explanatory variables, and if these variables are jointly significant, then you reject the null of homoscedastic errors. White test: the most popular, available in EViews; the same as the Koenker test, but you also include the squares of the explanatory variables and their cross products (do not include cross products if you have many explanatory variables, as multicollinearity is then likely).

Corrective measures for Heteroscedasticity. Transform the model: take logs or shares. Or "White-wash" the errors: get White robust standard errors and covariance, which are robust to an unknown form of heteroscedasticity; this works quite well, especially if you have a large sample size. You may use Weighted LS if you know the form of the heteroscedasticity, which is unlikely in practice.

Autocorrelation. Autocorrelation is more likely in time series. It means that there is some kind of relationship between the i-th and (i-j)-th error terms. Say the error term is large in time t; then it is very likely that it will remain high next period, but the classical assumptions require no such relationship! Possible causes of autocorrelation: inertia and cyclicality, functional misspecification, omitted variables.

Detection of autocorrelation (I). Graphical method, a good starting point: plot the estimated error term against time; any systematic pattern suggests autocorrelation. Formal tests: the Durbin-Watson test for AR(1). The D-W test statistic is roughly equal to 2(1 - correlation) and is symmetrically distributed around 2. If it is far from 2, there is autocorrelation; there is a grey zone where you can't conclude anything about autocorrelation based on the D-W test. There is negative autocorrelation if the test statistic is sufficiently larger than 2, and the opposite for positive autocorrelation. Critical values are found in most textbooks; the values depend on the sample size and the number of explanatory variables.
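A small sketch of the D-W statistic on simulated AR(1) errors (statsmodels is my choice of tool; the 0.7 autocorrelation is invented):

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.7 * e[t - 1] + rng.normal()   # AR(1) errors, positive autocorrelation

dw = durbin_watson(e)    # roughly 2 * (1 - 0.7) = 0.6, well below 2
print(dw)
```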

Corrective Measures for Autocorrelation. First differencing, quasi differencing, or the Newey-West wash (an analogy of White-washing): it makes your standard errors and covariance robust to an unknown form of autocorrelation, and is available in EViews.

Before we do this, it is useful to make a distinction between errors and residuals. The error is defined as the distance between a particular data point and the true regression line. Mathematically, we can rearrange the regression model to write $\varepsilon_i = Y_i - \alpha - \beta X_i$. This is the error for the i-th observation. However, if we replace α and β by their estimates $\hat{\alpha}$ and $\hat{\beta}$, we get a straight line which is generally a little different from the true regression line. The deviations from this estimated regression line are called residuals. We will use the notation u when we refer to residuals. That is, the residuals are given by $u_i = Y_i - \hat{\alpha} - \hat{\beta} X_i$. If you find the distinction between errors and residuals confusing, you can probably ignore it and assume errors and residuals are the same thing. However, if you plan on further study of econometrics, this distinction becomes crucial.

The residuals are the vertical difference between a data point and the line. A good fitting line will have small residuals. The usual way of measuring the size of the residuals is by means of the sum of squared residuals (SSR), which is given by $SSR = \sum_{i=1}^{N} u_i^2$. We want to find the best fitting line which minimizes the sum of squared residuals. For this reason, estimates found in this way are called least squares estimates.

Nonlinearity in regression. Assume that the relationship between Y and X is of the form $Y_i = 6X_i^2$, such that the true relationship is quadratic. A very common transformation, of both the dependent and explanatory variables, is the logarithmic transformation. Why is it common to use ln(Y) as the dependent variable and ln(X) as the explanatory variable? First, the expressions will often allow us to interpret results quite easily. Second, data transformed in this way often do appear to satisfy the linearity assumption of the regression model.

Nonlinearity in regression. In the model $\ln(Y) = \alpha + \beta \ln(X) + \varepsilon$, β can be interpreted as an elasticity. Recall that, in the basic regression without logs, we said that Y tends to change by β units for a one-unit change in X. In the regression containing both logged dependent and explanatory variables, we can now say that Y tends to change by β percent for a one percent change in X. That is, instead of having to worry about units of measurement, regression results using logged variables are always interpreted as elasticities.
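A minimal simulation illustrating the elasticity interpretation (all numbers and the statsmodels approach are my own, not from the slides):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 100, 200)
y = 3 * x ** 0.5 * np.exp(rng.normal(0, 0.1, 200))   # true elasticity is 0.5

# Regress ln(Y) on ln(X); the slope estimates the elasticity
log_x, log_y = np.log(x), np.log(y)
res = sm.OLS(log_y, sm.add_constant(log_x)).fit()
print(res.params[1])   # about 0.5: a 1% rise in X raises Y by about 0.5%
```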
