VII. Serially Correlated Residuals/ Autocorrelation
1 VII. Serially Correlated Residuals / Autocorrelation

A. Graphical Detection of Serially Correlated (Autocorrelated) Residuals

In multiple regression, we typically assume

ε ~ N(0, Iσ²)

and

cov(ε_i, ε_j) = 0, i ≠ j

When the second assumption is not met, we have Serially Correlated Residuals or Autocorrelation. When autocorrelation is present:

- The estimated regression coefficients are still unbiased but may no longer have the minimum variance among all unbiased estimates (they are inefficient).
- The Mean Squared Error (s²) tends to underestimate σ² (often by a great deal). This leads directly to:
  - underestimation of s_{b_j}, which in turn results in confidence intervals that are too narrow and overstated significance for hypothesis tests that a regression parameter equals zero
  - overstatement of the test statistic for the F test (and so its significance) and for resulting confidence regions and tests of the hypotheses about combinations of parameters
2 Note that the existence of autocorrelation suggests that residuals are related in one of two manners:

- chronologically, or
- in some other logical order suggested by practical circumstances

This implies that residuals may be related across multiple observations. We refer to the correlation of residuals across s observations as ρ_s, and call this a lag-s serial correlation. If ρ_s is positive, residuals tend to have the same sign as their lag-s counterpart (for lag-1 this is commonly called attraction). If ρ_s is negative, residuals tend to have the opposite sign of their lag-s counterpart (for lag-1 this is commonly called repulsion).

Note that if autocorrelation is present, we would expect a plot of residuals in chronological order (or some other logical order suggested by practical circumstances) to have some distinct pattern (depending on the nature of the relationship between the residuals). Let's look at some examples of chronological (or ordered) residual plots and corresponding plots of residuals against their lag-s values.
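The lag-s correlation ρ_s can be estimated directly from an ordered residual series by correlating each residual with its lag-s counterpart. A minimal pure-Python sketch (not part of the original notes; the function name is ours):

```python
def lag_autocorrelation(residuals, s):
    """Estimate the lag-s serial correlation rho_s of an ordered residual
    series: the Pearson correlation between e_u and e_{u-s} over the
    overlapping observations."""
    n = len(residuals)
    if s <= 0 or s >= n:
        raise ValueError("lag s must satisfy 0 < s < n")
    x = residuals[s:]       # e_u
    y = residuals[:-s]      # e_{u-s}
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A perfectly alternating series shows lag-1 "repulsion" (rho_1 = -1)
# and lag-2 "attraction" (rho_2 = +1).
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

On the alternating series, `lag_autocorrelation(alternating, 1)` is −1 and `lag_autocorrelation(alternating, 2)` is +1, matching the attraction/repulsion language above.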
3 We have already used a chronological or ordered plot of residuals to aid in detection of serial correlation. Here we appear to have a first-order positive autocorrelation:

[Figure: Chronological Plot of Residuals vs. Time]

The pattern is even more evident when we plot the residual against its one-period lag:

[Figure: First Order Positive Serial Correlation — residual vs. lag-1 residual]
4 This chronological or ordered plot of residuals suggests that we have a first-order negative autocorrelation:

[Figure: Chronological Plot of Residuals vs. Time]

Again, the pattern is even more evident when we plot the residual against its one-period lag:

[Figure: First Order Negative Serial Correlation — residual vs. lag-1 residual]
5 What does this chronological (or ordered) plot of residuals suggest?

[Figure: Chronological Plot of Residuals vs. Time]

It is difficult to discern patterns beyond lag-1 by using the chronological (or ordered) plot of residuals. Let's look at the plot of residuals against their two-period lag:

[Figure: Second Order Negative Serial Correlation — residual vs. lag-2 residual]
6 What does this chronological (or ordered) plot of residuals suggest?

[Figure: Chronological Plot of Residuals vs. Time]

Again, let's look at the plot of residuals against their two-period lag:

[Figure: Second Order Positive Serial Correlation — residual vs. lag-2 residual]
7 Of course, certain orders appear with greater frequency in time series data. We often see:

- 1st-order autocorrelation
- 2nd-order autocorrelation
- 4th-order autocorrelation
- 7th-order autocorrelation
- 12th-order autocorrelation

As always, the analyst should have a theoretical reason to suspect a certain order of autocorrelation (if we just look through the data at every possible order, we are bound to find something that is actually spurious).

SAS procedures that could produce such plots:

PROC REG DATA=Salary;
  MODEL income=age yrseduc;
  OUTPUT OUT=regdata PREDICTED=yhat RESIDUAL=error;
DATA regdata;
  SET regdata;
  lag1res=lag1(error);
  lag2res=lag2(error);
PROC PLOT;
  PLOT error*num;
PROC PLOT;
  PLOT error*lag1res;
PROC PLOT;
  PLOT error*lag2res;
PROC CORR;
  VAR error;
  WITH lag1res lag2res;
RUN;
8 The SAS PROC PLOT output for the plot of residuals vs. order (time) looks like this:

[SAS PROC PLOT character plot: Residual vs. Order of Data]

A more aesthetically appealing graphical presentation can be produced after the residuals are imported into Excel:

[Figure: Chronological Plot of Residuals vs. Time]
9 The SAS PROC PLOT output for the plot of residuals vs. lag-1 residuals looks like this:

[SAS PROC PLOT character plot: Residual vs. lag1res]

Again, the plot of residuals vs. lag-1 residuals produced using Excel is more aesthetically appealing:

[Figure: First Order Serial Correlation — residual vs. lag-1 residual]
10 The SAS PROC PLOT output for the plot of residuals vs. lag-2 residuals looks like this:

[SAS PROC PLOT character plot: Residual vs. lag2res]

Again, for improved clarity, the plot of residuals vs. lag-2 residuals produced using Excel is provided:

[Figure: Second Order Serial Correlation — residual vs. lag-2 residual]
11 B. Testing for Autocorrelation: The Durbin-Watson Test

If we wish to fit a postulated linear model

Y_u = β_0 + Σ_{i=1}^{k} β_i X_{iu} + ε_u

by least squares to observations Y_u, X_1u, X_2u, …, X_ku, u = 1, …, n, we would assume ε_u ~ N(0, σ²) and

cov(ε_i, ε_j) = 0, i ≠ j

which is equivalent to saying ρ_s = 0. Given the potentially serious consequences associated with autocorrelation, testing is critical when theory suggests its potential existence. The Durbin-Watson test (Von Neumann, 1941; Durbin & Watson, 1951) can be used to test the hypotheses

H_0: ρ_s = 0
H_1: ρ_s = ρ(s)

where ρ(s) is any ρ_s such that 0 < |ρ_s| < 1. Note that the alternate hypothesis arises from the assumption that the errors are generated by

ε_u = ρ ε_{u−1} + z_u,  z_u ~ N(0, σ²)

with cov(z_u, z_{u′}) = 0 for u ≠ u′ and cov(ε_{u−1}, z_u) = 0. This leads to the conclusion that

ε_u ~ N(0, σ²/(1 − ρ²))
12 The Durbin-Watson test statistic,

d = Σ_{u=2}^{n} (e_u − e_{u−1})² / Σ_{u=1}^{n} e_u²

is used to detect first-order autocorrelation (a relationship between the residual and its one-period lag). It can be shown that:

- the distribution of the Durbin-Watson test statistic depends on the regressor data (it is not independent of X)
- the distribution of the Durbin-Watson test statistic is symmetric about 2.00 and ranges from 0 to 4 (although extreme values are only possible for very large samples)
- positive serial correlation results in a d near 0
- negative serial correlation results in a d near 4

Steps in the Two-Tailed Durbin-Watson Test

1. When d < 2 we actually test d, while if d > 2 we test 4 − d.
2. Compare the appropriate result (d or 4 − d) to values of d_L and d_U (from appropriate tables):
- If the appropriate result (d or 4 − d) < d_L, reject H_0 at the 2α level of significance, and conclude that serial correlation may exist (positive if we are using d, negative if we are using 4 − d).
- If the appropriate result (d or 4 − d) > d_U, do not reject H_0 at the 2α level of significance, and conclude that serial correlation probably doesn't exist.
- If d_L ≤ (d or 4 − d) ≤ d_U, the results are indeterminate at the 2α level of significance.
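The statistic d is simple to compute from an ordered residual series. A sketch in pure Python (not from the original notes; the function name is ours):

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic:
        d = sum_{u=2}^{n} (e_u - e_{u-1})^2 / sum_{u=1}^{n} e_u^2
    d near 2 suggests no first-order autocorrelation; d near 0 suggests
    positive serial correlation; d near 4 suggests negative."""
    num = sum((residuals[u] - residuals[u - 1]) ** 2
              for u in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den
```

Sign-alternating residuals (repulsion) push d toward 4, while long same-sign stretches (attraction) push d toward 0, matching the bullet points above. For example, `durbin_watson([1.0, -1.0] * 10)` gives 3.8, and `durbin_watson([1.0] * 4 + [-1.0] * 4)` gives 0.5.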
13 Steps in the Lower-Tailed Durbin-Watson Test

1. Test 4 − d.
2. Compare 4 − d to values of d_L and d_U (from appropriate tables):
- If 4 − d < d_L, reject H_0 at the α level of significance, and conclude that negative serial correlation may exist.
- If 4 − d > d_U, do not reject H_0 at the α level of significance, and conclude that negative serial correlation probably doesn't exist.
- If d_L ≤ 4 − d ≤ d_U, the results are indeterminate at the α level of significance.

Steps in the Upper-Tailed Durbin-Watson Test

1. Test d.
2. Compare d to values of d_L and d_U (from appropriate tables):
- If d < d_L, reject H_0 at the α level of significance, and conclude that positive serial correlation may exist.
- If d > d_U, do not reject H_0 at the α level of significance, and conclude that positive serial correlation probably doesn't exist.
- If d_L ≤ d ≤ d_U, the results are indeterminate at the α level of significance.
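The two-tailed decision rule can be sketched as a small function. The bounds d_L and d_U must still be looked up in published Durbin-Watson tables for the given n, number of regressors, and α; the table values in the usage example below are hypothetical placeholders, not real table entries.

```python
def dw_two_tailed(d, d_L, d_U):
    """Two-tailed Durbin-Watson decision at significance level 2*alpha.
    Tests d itself when d < 2, otherwise 4 - d, against table bounds
    d_L < d_U. Returns 'reject' (serial correlation may exist),
    'fail to reject', or 'indeterminate'."""
    stat = d if d < 2 else 4 - d
    if stat < d_L:
        return "reject"
    if stat > d_U:
        return "fail to reject"
    return "indeterminate"

# Hypothetical table bounds for illustration only:
d_L, d_U = 1.24, 1.56
```

With these placeholder bounds, d = 0.89 rejects (positive serial correlation), d = 3.20 also rejects because 4 − d = 0.80 < d_L (negative serial correlation), d = 2.05 fails to reject, and d = 1.40 is indeterminate.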
14 Note that:

- in each case (two-tailed, lower-tailed, and upper-tailed tests) some researchers combine the reject and indeterminate regions into a larger reject region
- the indeterminate region narrows rapidly as the sample size is increased
- some researchers argue that inclusion of a lag of the response variable as a regressor renders the Durbin-Watson test ineffective (they argue that such a model biases the DW test toward nonrejection). Rayner (1994) argues that the DW test is still superior to other approaches under such conditions.

C. Testing for Autocorrelation: The Runs Test

This is a quick nonparametric approximation of the Durbin-Watson test. The Runs test considers the patterns in the signs of the residuals in chronological (or other) order. For the Runs test we have

n_1 (or n_+) = # of positive residuals
n_2 (or n_−) = # of negative residuals
r = # of runs (the number of times the ordered sequence changes sign, plus 1)

For example, for one particular ordered sequence of signs (not reproduced here) we would have n_1 = 14, n_2 = 12, and r = 11.
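The definition of r (number of adjacent sign changes plus one) translates directly to code. A sketch, assuming the signs are given as a string of '+' and '-' characters (not from the original notes):

```python
def count_runs(signs):
    """r = number of runs = (number of adjacent sign changes) + 1."""
    if len(signs) == 0:
        return 0
    changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return changes + 1
```

For example, `count_runs("++---+")` counts the runs ++, ---, + and returns 3, while a fully alternating sequence like "+-+-+-" returns 6.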
15 For extremely small problems we would enumerate all the possible orders in which n_1 positives and n_2 negatives could be arranged, i.e.,

C(n_1 + n_2, n_1)  or equivalently  C(n_1 + n_2, n_2)

arrangements, and then find the relative frequency (or, in this case, probability) that r or fewer runs would occur.

For example, suppose there are five observations or residuals, two of which are positive. The only possible orderings are:

Arrangement    # of Runs
+ + − − −      2
+ − + − −      4
+ − − + −      4
+ − − − +      3
− + + − −      3
− + − + −      5
− + − − +      4
− − + + −      3
− − + − +      4
− − − + +      2

so

C(n_1 + n_2, n_1) = 5! / (2! (5 − 2)!) = 10

If our data actually have only two runs, the probability of this occurring randomly is 2/10 = 0.20.
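The small-sample enumeration above can be checked by brute force. A pure-Python sketch (function names are ours, not from the notes):

```python
from itertools import combinations

def count_runs(signs):
    # Number of runs = number of adjacent sign changes + 1.
    return 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def runs_at_most(n1, n2, r_max):
    """Enumerate every arrangement of n1 '+' and n2 '-' signs (all equally
    likely under H0) and return (# arrangements with <= r_max runs, total)."""
    n = n1 + n2
    hits = total = 0
    for plus_slots in combinations(range(n), n1):
        seq = "".join("+" if i in plus_slots else "-" for i in range(n))
        total += 1
        if count_runs(seq) <= r_max:
            hits += 1
    return hits, total

# Five residuals, two positive: C(5, 2) = 10 arrangements, of which
# only ++--- and ---++ have as few as 2 runs.
hits, total = runs_at_most(2, 3, 2)
```

This reproduces the worked numbers: 10 arrangements in total, 2 of them with no more than two runs, for a probability of 2/10 = 0.20.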
16 Think about this: what is the maximum number of runs in a data set of size n_1 + n_2 = n?

max(r) = 2·min(n_1, n_2) when n_1 = n_2, and 2·min(n_1, n_2) + 1 when n_1 ≠ n_2 — why?

What type of autocorrelation does a large number of runs suggest? First-order negative serial correlation: a large number of runs means the residuals are frequently alternating in sign, the pattern of repulsion seen earlier:

[Figure: First Order Negative Serial Correlation — residual vs. lag-1 residual]

Now think about this: what is the minimum number of runs in a data set of size n_1 + n_2 = n?

min(r) = 2 — why?

What type of autocorrelation does a small number of runs suggest? First-order positive serial correlation: a small number of runs means long strings of residuals with the same sign, the pattern of attraction seen earlier:

[Figure: First Order Positive Serial Correlation — residual vs. lag-1 residual]
17 Obviously, there are issues with using the Runs test:

- it ignores much information on the correlation by considering only the signs of residuals
- it lacks power for small samples
- it is difficult to do exactly for large samples

However, the second and third issues can be addressed:

- there are tables that provide critical values of r for the Runs test when n_1 and n_2 are small
- there is a normal approximation of the Runs test for larger n_1 and n_2. Let

μ = 2n_1n_2 / (n_1 + n_2) + 1

σ² = 2n_1n_2(2n_1n_2 − n_1 − n_2) / [(n_1 + n_2)²(n_1 + n_2 − 1)]

Now we have (approximately)

z = (r − μ + ½) / σ

for a lower-tailed test (a test for positive first-order serial correlation),

z = (r − μ − ½) / σ

for an upper-tailed test (a test for negative first-order serial correlation), and

z = (r − μ) / σ

for a two-tailed test (a test for any first-order serial correlation). The ±½ terms are continuity corrections.
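The normal approximation is easy to implement. A sketch in pure Python (the function name and `tail` parameter are ours):

```python
import math

def runs_test_z(n1, n2, r, tail="two"):
    """Normal approximation to the Runs test:
        mu    = 2*n1*n2/(n1+n2) + 1
        sigma = sqrt(2*n1*n2*(2*n1*n2 - n1 - n2) / ((n1+n2)**2 * (n1+n2-1)))
    Continuity correction: +1/2 for the lower tail (few runs, positive
    serial correlation), -1/2 for the upper tail (many runs, negative
    serial correlation), none for the two-tailed version."""
    n = n1 + n2
    mu = 2 * n1 * n2 / n + 1
    sigma = math.sqrt(2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
                      / (n ** 2 * (n - 1)))
    if tail == "lower":
        return (r - mu + 0.5) / sigma
    if tail == "upper":
        return (r - mu - 0.5) / sigma
    return (r - mu) / sigma
```

For instance, with n_1 = 10, n_2 = 12, and r = 5 (the example worked on the next page), the two-tailed z is about −3.05.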
18 Example: Suppose we have n = 22 ordered residuals (values not reproduced here) which yield a Durbin-Watson statistic of d = 0.89, and we hypothesize no first-order serial correlation. Our null and alternate hypotheses are:

H_0: ρ_s = 0
H_1: ρ_s = ρ(s)

We have n = 22, n_1 = 10 (≥ 10), n_2 = 12 (≥ 10), and r = 5, so

μ = 2n_1n_2 / (n_1 + n_2) + 1 = 2(10)(12)/22 + 1 = 240/22 + 1 ≈ 11.909

and

σ² = 2n_1n_2(2n_1n_2 − n_1 − n_2) / [(n_1 + n_2)²(n_1 + n_2 − 1)] = 240(240 − 22) / [22²(21)] = 52320/10164 ≈ 5.148

so σ ≈ 2.269.
19 So the calculated value of our test statistic for our two-tailed test is:

z = (r − μ) / σ = (5 − 11.909) / 2.269 ≈ −3.05

which has a two-tailed p-value of about 0.0023, so we reject H_0 at any reasonable level of significance and conclude that serial correlation does exist.

Notice that a different set of ordered residuals with the same signs in the same order (but which yields a Durbin-Watson statistic of d = 1.55) gives exactly the same result for the Runs test (even though the serial correlation is obviously much weaker)!
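The arithmetic in this example can be checked directly, using the complementary error function for the standard normal tail (a pure-Python check, not part of the original notes):

```python
import math

# Recompute the worked Runs-test example: n1 = 10, n2 = 12, r = 5.
n1, n2, r = 10, 12, 5
n = n1 + n2
mu = 2 * n1 * n2 / n + 1                            # about 11.909
sigma = math.sqrt(2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
                  / (n ** 2 * (n - 1)))             # about 2.269
z = (r - mu) / sigma                                # about -3.05
# Two-tailed p-value: p = 2 * P(Z < -|z|) = erfc(|z| / sqrt(2)).
p = math.erfc(abs(z) / math.sqrt(2))                # about 0.0023
```

The computed z and p agree with the values quoted above, confirming rejection of H_0 at any conventional level.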
20 SOME questions you should be able to answer:

1. What is autocorrelation (serial correlation)? What are the ramifications of autocorrelation in a regression model? How do you test for/assess the presence of autocorrelation? How do you correct for the presence of autocorrelation? How do you use SAS to test for autocorrelation?

2. What is the purpose of the Durbin-Watson test? Explain how the Durbin-Watson test statistic works (i.e., explain the interpretations of the Durbin-Watson statistic over its range and why the equation effectively performs its specific function). How do you use SAS to perform the Durbin-Watson test?

3. What is the Runs test? How is it used to test for/assess the presence of autocorrelation? What are the strength(s) and weakness(es) of this test for autocorrelation?
More informationEconomics 308: Econometrics Professor Moody
Economics 308: Econometrics Professor Moody References on reserve: Text Moody, Basic Econometrics with Stata (BES) Pindyck and Rubinfeld, Econometric Models and Economic Forecasts (PR) Wooldridge, Jeffrey
More informationRef.: Spring SOS3003 Applied data analysis for social science Lecture note
SOS3003 Applied data analysis for social science Lecture note 05-2010 Erling Berge Department of sociology and political science NTNU Spring 2010 Erling Berge 2010 1 Literature Regression criticism I Hamilton
More informationAny of 27 linear and nonlinear models may be fit. The output parallels that of the Simple Regression procedure.
STATGRAPHICS Rev. 9/13/213 Calibration Models Summary... 1 Data Input... 3 Analysis Summary... 5 Analysis Options... 7 Plot of Fitted Model... 9 Predicted Values... 1 Confidence Intervals... 11 Observed
More information1 Graphical method of detecting autocorrelation. 2 Run test to detect autocorrelation
1 Graphical method of detecting autocorrelation Residual plot : A graph of the estimated residuals ˆɛ i against time t is plotted. If successive residuals tend to cluster on one side of the zero line of
More informationHow To: Deal with Heteroscedasticity Using STATGRAPHICS Centurion
How To: Deal with Heteroscedasticity Using STATGRAPHICS Centurion by Dr. Neil W. Polhemus July 28, 2005 Introduction When fitting statistical models, it is usually assumed that the error variance is the
More information1 Introduction to Generalized Least Squares
ECONOMICS 7344, Spring 2017 Bent E. Sørensen April 12, 2017 1 Introduction to Generalized Least Squares Consider the model Y = Xβ + ɛ, where the N K matrix of regressors X is fixed, independent of the
More informationCHAPTER 4: Forecasting by Regression
CHAPTER 4: Forecasting by Regression Prof. Alan Wan 1 / 57 Table of contents 1. Revision of Linear Regression 3.1 First-order Autocorrelation and the Durbin-Watson Test 3.2 Correction for Autocorrelation
More informationLecture 11: Simple Linear Regression
Lecture 11: Simple Linear Regression Readings: Sections 3.1-3.3, 11.1-11.3 Apr 17, 2009 In linear regression, we examine the association between two quantitative variables. Number of beers that you drink
More informationPsychology 282 Lecture #4 Outline Inferences in SLR
Psychology 282 Lecture #4 Outline Inferences in SLR Assumptions To this point we have not had to make any distributional assumptions. Principle of least squares requires no assumptions. Can use correlations
More information9. AUTOCORRELATION. [1] Definition of Autocorrelation (AUTO) 1) Model: y t = x t β + ε t. We say that AUTO exists if cov(ε t,ε s ) 0, t s.
9. AUTOCORRELATION [1] Definition of Autocorrelation (AUTO) 1) Model: y t = x t β + ε t. We say that AUTO exists if cov(ε t,ε s ) 0, t s. ) Assumptions: All of SIC except SIC.3 (the random sample assumption).
More informationStats Review Chapter 14. Mary Stangler Center for Academic Success Revised 8/16
Stats Review Chapter 14 Revised 8/16 Note: This review is meant to highlight basic concepts from the course. It does not cover all concepts presented by your instructor. Refer back to your notes, unit
More information13. Time Series Analysis: Asymptotics Weakly Dependent and Random Walk Process. Strict Exogeneity
Outline: Further Issues in Using OLS with Time Series Data 13. Time Series Analysis: Asymptotics Weakly Dependent and Random Walk Process I. Stationary and Weakly Dependent Time Series III. Highly Persistent
More informationSTAT 3A03 Applied Regression With SAS Fall 2017
STAT 3A03 Applied Regression With SAS Fall 2017 Assignment 2 Solution Set Q. 1 I will add subscripts relating to the question part to the parameters and their estimates as well as the errors and residuals.
More informationECON 120C -Fall 2003 PROBLEM SET 2: Suggested Solutions. By substituting f=f0+f1*d and g=g0+... into Model 0 we obtain Model 1:
ECON 120C -Fall 2003 PROBLEM SET 2: Suggested Solutions PART I Session 1 By substituting f=f0+f1*d82+... and g=g0+... into Model 0 we obtain Model 1: ln(q t )=β 0 +β 1 *D82+β 2 *D86+β 3 *ED1+β 4 *ED2+β
More informationStationary and nonstationary variables
Stationary and nonstationary variables Stationary variable: 1. Finite and constant in time expected value: E (y t ) = µ < 2. Finite and constant in time variance: Var (y t ) = σ 2 < 3. Covariance dependent
More informationCovers Chapter 10-12, some of 16, some of 18 in Wooldridge. Regression Analysis with Time Series Data
Covers Chapter 10-12, some of 16, some of 18 in Wooldridge Regression Analysis with Time Series Data Obviously time series data different from cross section in terms of source of variation in x and y temporal
More informationOne-way ANOVA Model Assumptions
One-way ANOVA Model Assumptions STAT:5201 Week 4: Lecture 1 1 / 31 One-way ANOVA: Model Assumptions Consider the single factor model: Y ij = µ + α }{{} i ij iid with ɛ ij N(0, σ 2 ) mean structure random
More informationOkun's Law Testing Using Modern Statistical Data. Ekaterina Kabanova, Ilona V. Tregub
Okun's Law Testing Using Modern Statistical Data Ekaterina Kabanova, Ilona V. Tregub The Finance University under the Government of the Russian Federation International Finance Faculty, Moscow, Russia
More informationLinear Regression with 1 Regressor. Introduction to Econometrics Spring 2012 Ken Simons
Linear Regression with 1 Regressor Introduction to Econometrics Spring 2012 Ken Simons Linear Regression with 1 Regressor 1. The regression equation 2. Estimating the equation 3. Assumptions required for
More informationMotivation for multiple regression
Motivation for multiple regression 1. Simple regression puts all factors other than X in u, and treats them as unobserved. Effectively the simple regression does not account for other factors. 2. The slope
More informationSTA Module 10 Comparing Two Proportions
STA 2023 Module 10 Comparing Two Proportions Learning Objectives Upon completing this module, you should be able to: 1. Perform large-sample inferences (hypothesis test and confidence intervals) to compare
More informationThe Model Building Process Part I: Checking Model Assumptions Best Practice (Version 1.1)
The Model Building Process Part I: Checking Model Assumptions Best Practice (Version 1.1) Authored by: Sarah Burke, PhD Version 1: 31 July 2017 Version 1.1: 24 October 2017 The goal of the STAT T&E COE
More informationØkonomisk Kandidateksamen 2004 (I) Econometrics 2. Rettevejledning
Økonomisk Kandidateksamen 2004 (I) Econometrics 2 Rettevejledning This is a closed-book exam (uden hjælpemidler). Answer all questions! The group of questions 1 to 4 have equal weight. Within each group,
More informationLectures 5 & 6: Hypothesis Testing
Lectures 5 & 6: Hypothesis Testing in which you learn to apply the concept of statistical significance to OLS estimates, learn the concept of t values, how to use them in regression work and come across
More informationHandout 11: Measurement Error
Handout 11: Measurement Error In which you learn to recognise the consequences for OLS estimation whenever some of the variables you use are not measured as accurately as you might expect. A (potential)
More information