
Exercise 7

C_t/P_t = α + β R_t/P_t + u_t    (a)
C_t = α P_t + β R_t + v_t        (b)
C_t/R_t = α P_t/R_t + β + w_t    (c)

Assumptions about the disturbances u_t, v_t, w_t: classical assumptions are placed on the disturbance of one of the equations, e.g. on (b):

E(v_t | P, R) = 0
E(v_t v_s | P, R) = σ_v² for t = s, 0 for t ≠ s.

If we assume homoskedastic disturbances for v_t, we must assume heteroskedasticity for u_t and w_t. Note: in this model the other two disturbances are u_t = v_t/P_t and w_t = v_t/R_t, so their variances depend on P_t and R_t respectively. However, this does not necessarily violate the classical assumption that their conditional expectations are zero.

Estimation of the marginal propensity to consume β: OLS on (b) gives BLUE/MVLUE estimators, but we must use GLS on (a) and (c). In (a), assume pure heteroskedasticity:

E(u_t | P, R) = 0
var(u_t | P, R) = σ_v²/P_t²
cov(u_t, u_s | P, R) = 0 for t ≠ s.

Multiply (a) by P_t so that (a) is transformed into (b), and apply OLS to (b).

Money illusion: define the absence of money illusion as El_P C + El_R C = 1. Here

El_P C + El_R C = (∂C/∂P)(P_t/C_t) + (∂C/∂R)(R_t/C_t) = α P_t/C_t + β R_t/C_t = 1
⇔ C_t = α P_t + β R_t, which is equation (b).

By including a constant term β_0 in the regression we can run a t-test of whether the constant is equal to zero:

C_t = β_0 + α P_t + β R_t + v_t
H_0: β_0 = 0  (no money illusion)
H_1: β_0 ≠ 0  (reject H_0: money illusion is present).
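The transformation argument above can be checked numerically. Below is a minimal sketch (not part of the original solution): it simulates data satisfying (b) with a homoskedastic v_t, then estimates α and β by OLS on the P_t-multiplied version of (a), which is exactly (b). All numeric values are invented for illustration.

```python
import numpy as np

# Simulated data (all values invented for illustration)
rng = np.random.default_rng(0)
T = 500
P = rng.uniform(1.0, 5.0, T)            # price level P_t
R = rng.uniform(10.0, 50.0, T)          # income R_t
alpha, beta = 2.0, 0.8                  # true parameters
v = rng.normal(0.0, 1.0, T)             # homoskedastic disturbance of (b)
C = alpha * P + beta * R + v            # equation (b)

# Equation (a) is C/P = alpha + beta*(R/P) + u with u = v/P (heteroskedastic).
# GLS on (a) amounts to multiplying through by P_t, which recovers (b),
# so plain OLS on (b) is the efficient estimator.
X = np.column_stack([P, R])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, C, rcond=None)
print(alpha_hat, beta_hat)
```

With a reasonably large T, both estimates land close to the true (α, β), illustrating why OLS on (b) is BLUE while OLS directly on (a) would be inefficient.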

EXERCISE 9 (a) Describe one or more methods which you would find useful in investigating whether the disturbances u_t, v_t or w_t in models (a)–(c) in Exercise 7 exhibit heteroskedasticity. Visual inspection As a starting point it is often a good idea to perform a visual inspection by simply plotting the residuals from the regression (i.e., the difference between the dependent variable and its fitted value). Although this is not a formal test that enables us to, for example, reject the hypothesis of homoskedastic errors at the 5 % significance level, it is still a convenient way to find out whether the errors suffer (strongly) from heteroskedasticity or not. Figures 9a and 9b give examples of homoskedastic and heteroskedastic errors, respectively. [Figure 9a: homoskedastic residuals. Figure 9b: heteroskedastic residuals.] White Test Estimate the model by ordinary least squares (OLS) and obtain the residuals. Then run another regression where you regress the squared residuals on the independent variables from the original equation, their squared values and the cross products of the regressors. From Exercise 7 we know that one way to formulate the consumption function is as follows: (b): C_t = α P_t + β R_t + v_t. This means that the White test will be based on an estimation of û_t² = δ_0 + δ_1 R_t + δ_2 P_t + δ_3 P_t² + δ_4 R_t² + δ_5 P_t R_t + error_t.
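The White test just described can be sketched in a few lines. This is an illustration with simulated data; the data-generating process and all numbers are invented for the example, not taken from the exercise.

```python
import numpy as np

# Simulated data for model (b); all values invented for illustration
rng = np.random.default_rng(1)
T = 400
P = rng.uniform(1.0, 5.0, T)
R = rng.uniform(10.0, 50.0, T)
C = 2.0 * P + 0.8 * R + rng.normal(0.0, 1.0, T)   # homoskedastic errors

# Step 1: OLS on (b), save the squared residuals
X = np.column_stack([P, R])
b, *_ = np.linalg.lstsq(X, C, rcond=None)
u2 = (C - X @ b) ** 2

# Step 2: auxiliary regression of squared residuals on a constant,
# the regressors, their squares, and their cross product
Z = np.column_stack([np.ones(T), R, P, P**2, R**2, P * R])
d, *_ = np.linalg.lstsq(Z, u2, rcond=None)
r2 = 1.0 - np.sum((u2 - Z @ d) ** 2) / np.sum((u2 - u2.mean()) ** 2)

lm = T * r2   # LM statistic, asymptotically chi-squared with 5 df under H0
print(lm)
```

Since the simulated errors are homoskedastic, the LM statistic should typically be small relative to the χ²(5) critical values.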

An assumption of homoskedastic errors means that δ_1, ..., δ_5 in this auxiliary regression are all equal to zero. This hypothesis can easily be tested by an F-test. The F-statistic will be distributed as an F random variable with 5 and T − 6 degrees of freedom under the null hypothesis of homoskedastic disturbances (where T is the number of observations in the sample). Alternatively, one may compute the Lagrange multiplier (LM) statistic, which equals the product of T and R² (i.e., the coefficient of determination from the auxiliary regression). The LM statistic is distributed as χ² with degrees of freedom equal to the number of estimated parameters in the auxiliary regression minus one (i.e., 5 in our example). These tests are asymptotically equivalent. Autoregressive Conditional Heteroskedasticity A perhaps more sophisticated test, which includes the possibility of heteroskedasticity and autocorrelation (see below) being present at the same time, is the so-called autoregressive conditional heteroskedasticity (ARCH) test by Nobel Memorial Prize winner Robert F. Engle. For instance, the error variance may follow a simple autoregressive process of order one: Var[u_t | u_{t−1}] = α_0 + α_1 u_{t−1}². The hypothesis of homoskedastic errors can then be tested by running the following auxiliary regression (where the residuals, their squared values and the first lag of the squared residuals are obtained from the original equation): û_t² = α_0 + α_1 û_{t−1}² + v_t. Conventional t- and F-tests may then be applied to test the significance of the coefficient(s) on the lagged squared residual(s). (b) Describe one or more methods that you would find useful in order to investigate whether the disturbance of the model you have chosen in Exercise 7 exhibits autocorrelation. Visual inspection As with heteroskedasticity, a convenient way to start is by simply plotting the residuals against time. We know that disturbances are serially uncorrelated in the classical regression

model. Intuitively, this means that high error values (i.e., above average) at time t will not also (in expectation) be associated with high disturbance values at times t+1, t+2, .... Autocorrelation is therefore likely a problem that needs to be addressed whenever we spot residuals that display some kind of persistence across time. This approach is most helpful whenever the residuals are positively correlated (which fortunately is also most often the case), as it is a bit more complicated to actually spot negative autocorrelation. Durbin-Watson test The most well-known test for autocorrelation, provided by nearly all computer regression programs, is the Durbin-Watson (DW) test. DW tests the hypothesis of no serial correlation against the alternative hypothesis of first-order autocorrelation in the errors. The alternative hypothesis may thus be written as: u_t = ρ u_{t−1} + ε_t, |ρ| < 1, where ε_t is a classical error with zero mean and constant variance. Mathematically, the DW test statistic is defined as follows: DW = Σ_{t=2}^{T} (û_t − û_{t−1})² / Σ_{t=1}^{T} û_t². By doing some manipulations, one can derive the following approximate relationship between DW and ρ̂ (i.e., the estimate of ρ from u_t = ρ u_{t−1} + ε_t): DW ≈ 2(1 − ρ̂). As usual, we start by estimating our original equation (b) by OLS. Obtain the residuals, their squared values and their lags, and then form the DW statistic. Unfortunately, there are no tables giving the critical values of the DW statistic under the null. This is because the exact distribution of this statistic depends on the X matrix (i.e., on the particular observations on the regressors). However, the actual distribution can be shown to lie between two limiting distributions for which critical values are actually available, as shown in Figure 9c.
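The DW statistic is easy to compute directly from its definition. A small sketch with simulated residuals (all values invented) illustrates the approximation DW ≈ 2(1 − ρ̂):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum_{t=2..T} (e_t - e_{t-1})^2 / sum_{t=1..T} e_t^2."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Simulated residuals (illustrative): AR(1) with rho = 0.7 versus white noise
rng = np.random.default_rng(3)
T = 2000
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()

dw_ar1 = durbin_watson(e)                   # expect roughly 2*(1 - 0.7) = 0.6
dw_wn = durbin_watson(rng.normal(size=T))   # expect roughly 2.0
print(dw_ar1, dw_wn)
```

The positively autocorrelated series gives a DW well below 2, while the serially uncorrelated series gives a DW near 2, matching the decision regions sketched in Figure 9c.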

The further the DW statistic moves away from 2.0, the less likely it is that the errors are free of serial correlation. As we can see from the graph, there is one serious drawback with this test: it is considered inconclusive whenever DW falls between A and B or between C and D. There is also evidence that the test is biased towards acceptance of the hypothesis of no autocorrelation whenever a lagged dependent variable appears as one of the explanatory variables in the model. [Figure 9c: probability densities of the lower and upper limiting distributions of DW, with critical points 0 < A < B < 2.0 < C < D. The hypothesis of no autocorrelation is rejected for DW < A and for DW > D, not rejected for B < DW < C, and the test is inconclusive between A and B and between C and D.] AR(q) Serial Correlation test The DW test is a somewhat outdated autocorrelation test. A simpler and more flexible approach is as follows. Start by estimating the equation by OLS and save the residuals and their lagged values. Then run an auxiliary regression of the residuals on all the regressors in the original equation plus the lagged values of the residuals, i.e.: û_t on X_t, û_{t−1}, ..., û_{t−q}, for t = (q+1), ..., T, where X denotes the matrix of all independent variables for all time periods. As we can see, this test is more flexible than the DW test, as it allows us to include an arbitrary number of lags of the residual (i.e., our null is no longer restricted to first-order serial correlation). The null hypothesis of no autocorrelation can then be tested by an F-test for the joint significance of û_{t−1}, ..., û_{t−q}.

EXERCISE 10

Explain the differences between the Generalized Least Squares (GLS) and the Feasible Generalized Least Squares (FGLS) methods. Explain how you would proceed to apply the latter method in a specific situation. What can you say about the properties of the FGLS method?

In order to use Generalized Least Squares (GLS), the covariance matrix must be known, something that often is not the case. When the covariance matrix is not known but must be estimated, one can use Feasible Generalized Least Squares (FGLS) instead of GLS. When using FGLS, one must first perform OLS on the regression equations. The residuals are then used to form estimators of the elements of the covariance matrix, and GLS is performed with these estimators. The FGLS estimator is, like GLS, consistent. However, in small samples the estimators of the covariances can differ from the true covariances. The more one knows about the structure of the heteroskedasticity, the closer FGLS comes to the ideal properties of GLS, and thus the closer it is to being BLUE.
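As a concrete illustration of the two FGLS steps, here is a sketch with simulated data. The variance model var(u_t) = σ² P_t^γ and all numeric values are assumptions made for the example, not part of the exercise.

```python
import numpy as np

# Simulated heteroskedastic data; all values invented for illustration
rng = np.random.default_rng(4)
T = 800
P = rng.uniform(1.0, 5.0, T)
R = rng.uniform(10.0, 50.0, T)
u = P * rng.normal(0.0, 1.0, T)        # true: sd(u_t) = P_t, i.e. gamma = 2
y = 2.0 * P + 0.8 * R + u
X = np.column_stack([P, R])

# Step 1: OLS, then estimate the variance structure from the residuals.
# Assume var(u_t) = sigma^2 * P_t^gamma and estimate gamma as the slope
# of log(e_t^2) on log(P_t).
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b_ols
gamma_hat, _ = np.polyfit(np.log(P), np.log(e ** 2), 1)

# Step 2: GLS with the estimated weights 1 / P_t^(gamma_hat / 2)
w = P ** (-gamma_hat / 2.0)
b_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(gamma_hat, b_fgls)
```

The estimated γ should be near its true value of 2, and the weighted second-stage estimates recover (α, β) more efficiently than unweighted OLS would, which is the practical content of the FGLS procedure described above.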