Permanent Income Hypothesis (PIH)
Instructor: Dmytro Hryshko


The PIH

Utility is quadratic, $u(c_t) = -\frac{1}{2}(c_t - \bar{c})^2$; borrowing/saving is allowed only through the risk-free bond; $\beta(1+r) = 1$. The time-$t$ budget constraint is

  $\beta b_{t+1} + c_t = b_t + y_t$.

The lifetime budget constraint (imposing $\lim_{T \to +\infty} \beta^T b_{T+1} = 0$):

  (b.c.)  $\sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j c_{t+j} = b_t + \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j y_{t+j}$.

At time $t$, the PIH consumer solves

  $\max_{\{c_{t+j},\, b_{t+j+1}\}_{j=0}^{\infty}} E_t \sum_{j=0}^{\infty} \beta^j u(c_{t+j})$

subject to the sequence of (b.c.).

Stochastic Euler equation:

  $u'(c_t) = \beta(1+r) E_t u'(c_{t+1})$.

Since $\beta(1+r) = 1$ and utility is quadratic, this gives $E_t c_{t+1} = c_t$, or $E_t \Delta c_{t+1} = 0$. Consumption is said to be a martingale.

Excess sensitivity

A natural way to test the martingale hypothesis is to run the regression

  $\Delta c_{t+1} = \alpha + \beta X_t + \text{error}$,

and test whether $\beta = 0$. Here $X_t$ is a variable known at time $t$: for example, $X_t \in \{y_t, \Delta y_t\}$, lagged income or the lagged income change/growth. If $\beta$ is estimated to be statistically different from 0, consumption is said to be excessively sensitive to lagged information (income). Such tests are called excess sensitivity tests of consumption.

Euler equations

The Euler equations for periods $t$ and $t+1$ are

  $E_t c_{t+1} = c_t$,  $E_{t+1} c_{t+2} = c_{t+1}$.

Apply the time-$t$ conditional expectation operator to both sides of the last equation to obtain

  $E_t E_{t+1} c_{t+2} = E_t c_{t+1} \;\Rightarrow\; E_t c_{t+2} = c_t$.

Following similar steps, we can show that $E_t c_{t+k} = c_t$ for all $k \ge 1$.

The lifetime budget constraint can be written as

  $E_t \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j c_{t+j} = b_t + E_t \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j y_{t+j}$,

or, since $E_t c_{t+j} = c_t$,

  $c_t \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j = b_t + E_t \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j y_{t+j}$.

Thus,

  $c_t = \frac{r}{1+r}\left[ b_t + E_t \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j y_{t+j} \right]$,

where the term in brackets, annuitized by $\frac{r}{1+r}$, is the consumer's permanent income.

At time $t-1$,

  $c_{t-1} = \frac{r}{1+r} b_{t-1} + \frac{r}{1+r} E_{t-1}\left[ y_{t-1} + \frac{1}{1+r} y_t + \frac{1}{(1+r)^2} y_{t+1} + \ldots \right]$
  $\phantom{c_{t-1}} = \frac{r}{1+r} b_{t-1} + \frac{r}{1+r} y_{t-1} + \frac{r}{(1+r)^2} E_{t-1} \sum_{j=0}^{\infty} \frac{1}{(1+r)^j} y_{t+j}$.

Multiplying through by $(1+r)$:

  $c_{t-1}(1+r) = r b_{t-1} + r y_{t-1} + \underbrace{\frac{r}{1+r}}_{=1-\beta} E_{t-1} \sum_{j=0}^{\infty} \frac{1}{(1+r)^j} y_{t+j}$,

so that

  $c_{t-1} = \underbrace{\left( r b_{t-1} + r y_{t-1} - r c_{t-1} \right)}_{= \frac{r}{1+r} b_t = (1-\beta) b_t} + (1-\beta) E_{t-1} \sum_{j=0}^{\infty} \frac{1}{(1+r)^j} y_{t+j}$.

Subtracting the result from $c_t$ on the previous slide, we obtain

  $\Delta c_t = (1-\beta)\left[ E_t \sum_{j=0}^{\infty} \beta^j y_{t+j} - E_{t-1} \sum_{j=0}^{\infty} \beta^j y_{t+j} \right]$.

  $\Delta c_t = \frac{r}{1+r} \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j (E_t - E_{t-1}) y_{t+j}$,

the right-hand side being the innovation in permanent income, where $(E_t - E_{t-1}) x_{t+k} \equiv E_t x_{t+k} - E_{t-1} x_{t+k}$.

To understand more about the magnitude of consumption changes, we need to have some idea about the properties of the (stochastic) process of income, $\{y_t\}$.

Time series

A time series is a collection of observations $y_t$, each recorded at time $t$. We will speak of a time series as a collection of realizations of random variables $Y_t$. The time at which an observation is recorded belongs to some set $T_0$. If $T_0$ is a discrete set, the time series is called discrete (daily, monthly, and annual time series are examples of discrete time series).

Sample path

In the data, we observe a sample path of $Y_t$: e.g., $y_1, y_2, \ldots, y_T$. We want to model the observed time series as a realization of a stochastic process $Y_t$, $t = 1, 2, \ldots, T$, keeping in mind that the process could have started before $t = 1$ and could run after $t = T$ (for example, $Y_t$ may be defined for $t = 0, \pm 1, \pm 2, \ldots$).

Auto-covariance function

We want to construct a mathematical/statistical model that describes the data we observe. For a single time series, the dependence between $Y_t$ and $Y_t, Y_{t \pm 1}, Y_{t \pm 2}$, etc., is described by the auto-covariance function. The auto-covariance function $\gamma(\cdot,\cdot)$ of a stochastic process $\{Y_t, t \in T\}$ is defined by

  $\gamma(i,j) = E\left[(Y_i - EY_i)(Y_j - EY_j)\right]$, $i, j \in T$.

$\gamma(i,i)$ is the variance of $Y$ at time $i$; $\gamma(i,i+1)$ is the auto-covariance between the $Y$'s recorded at times $i$ and $i+1$; etc. In general, these quantities can be time-dependent.

Weak stationarity

A stochastic process $\{Y_t, t = 0, \pm 1, \pm 2, \ldots\}$ is weakly stationary if

  $E Y_t^2 < \infty$ for all $t$;
  $E Y_t = \mu$ for all $t$;
  $\gamma(i,j) = \gamma(i+t, j+t)$, so that $\gamma(i,j) = \gamma(i-j) = \gamma(j-i)$, for all $i, j, t = 0, \pm 1, \pm 2, \ldots$.

If, e.g., $i - j = 1$, then for a weakly stationary process $\gamma(1)$ can be calculated as

  $\gamma(1) = E\left[(Y_t - EY_t)(Y_{t-1} - EY_{t-1})\right] = E\left[(Y_t - EY_t)(Y_{t+1} - EY_{t+1})\right]$, for all $t$.

Similarly, $\gamma(2)$ is

  $\gamma(2) = E\left[(Y_t - EY_t)(Y_{t-2} - EY_{t-2})\right] = E\left[(Y_t - EY_t)(Y_{t+2} - EY_{t+2})\right]$, etc.

If you have a model for Y t, the mean, variances and auto-covariances can be estimated by simulating the model S times for t = 0, ±1, ±2,... and taking the average across S simulations. For example, EY t = 1 S EY 2 t = 1 S S yt s, s=1 S (yt s ) 2, where y s t is the value assumed by Y at time t in simulation s. In real data, we do not have the luxury of observing the process repeatedly but we can infer the mean, variances, and auto-covariances of the process by calculating sample analogs of population moments. s=1 14 / 36

Random walk

Most observed time series are not stationary. An example is a random walk, which can be described by the process

  $Y_t = Y_{t-1} + u_t$, where $u_t \sim iid(0, \sigma^2)$, $t = 1, \ldots, T$, and $Y_0 = 0$.

Note that $Y_t = \sum_{j=1}^{t} u_j$ and $E Y_t^2 = t \sigma^2$. A random walk is not covariance-stationary since it violates one of the conditions of weak stationarity (here, a time-invariant variance). $\Delta Y_t (= Y_t - Y_{t-1})$, however, is covariance-stationary.
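A short simulation makes the non-stationarity visible: the ensemble variance grows like $t \sigma^2$ while the first differences behave like white noise (a sketch with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
S, T, sigma = 20000, 100, 1.0
u = rng.normal(0.0, sigma, size=(S, T))

# Y_t = u_1 + ... + u_t, with Y_0 = 0, simulated S times
y = np.cumsum(u, axis=1)

# ensemble estimate of E Y_t^2: should be close to t * sigma^2
var_t = (y**2).mean(axis=0)

# first differences Delta Y_t = u_t are iid, hence covariance-stationary
dy = np.diff(y, axis=1)
```

Here `var_t[9]` estimates $E Y_{10}^2 \approx 10$ and `var_t[99]` estimates $E Y_{100}^2 \approx 100$, while the variance of `dy` stays near $\sigma^2 = 1$ at every horizon.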

For y 1, y 2,, y T, a sample path of a stationary process Y t, we can estimate the sample auto-covariance function. The sample auto-covariance function is defined by: ˆγ(k) = 1 T T k (y j+k y)(y j y), 0 k < T, j=1 where ˆγ(k) = ˆγ( k), T < k 0, and y = 1 T T y t. t=1 16 / 36

ARMA processes

Stationary ARMA processes specify $Y_t$ as a function of current and past realizations of white noise. A stochastic process $U_t$ is called white noise ($WN$) with mean zero and variance $\sigma^2$ if

  $\gamma(0) = \sigma^2$,  $\gamma(k) = 0$ for $k \ne 0$.

An ARMA(p,q) process is described, for each $t = 0, \pm 1, \pm 2, \ldots$, by the following equation:

  $Y_t - \phi_1 Y_{t-1} - \phi_2 Y_{t-2} - \ldots - \phi_p Y_{t-p} = U_t + \theta_1 U_{t-1} + \theta_2 U_{t-2} + \ldots + \theta_q U_{t-q}$,

where $U_t \sim WN(0, \sigma^2)$.

In more compact notation, a mean-zero ARMA(p,q) process is defined by

  $\phi(L) Y_t = \theta(L) U_t$,

where

  $\phi(L) = \phi_0 L^0 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p$,
  $\theta(L) = \theta_0 L^0 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q$,

with $\theta_0 \equiv 1$, $\phi_0 \equiv 1$, and $L$ the lag operator, so that $L^k x_t = x_{t-k}$, $k = 0, \pm 1, \pm 2, \ldots$.

An MA(q) process is obtained by setting $\phi(L) \equiv 1$:

  $Y_t = \theta(L) U_t$, where $\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q$.

Similarly, an AR(p) process is obtained by setting $\theta(L) \equiv 1$:

  $\phi(L) Y_t = U_t$.

An ARMA(p,q) process is said to be stationary if the roots of the AR polynomial,

  $1 - \phi_1 z - \phi_2 z^2 - \ldots - \phi_p z^p = 0$,

are greater than 1 in modulus, i.e., lie outside the unit circle. E.g., for an AR(1) process, $(1 - \phi L) Y_t = U_t$, stationarity requires that the root of $1 - \phi z = 0$ be greater than 1 in absolute value. This happens since $z = \phi^{-1}$, so $|z| > 1$ if $|\phi| < 1$. We already know that if $\phi$ equals 1, the process is not covariance-stationary; the same applies to all AR(1) processes with $|\phi| > 1$.
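The root condition is easy to check numerically with a polynomial root finder; a sketch (the function name is mine):

```python
import numpy as np

def ar_is_stationary(phis):
    """True if all roots of 1 - phi_1 z - ... - phi_p z^p = 0
    lie outside the unit circle (|z| > 1)."""
    # np.roots expects coefficients ordered from the highest power of z down
    coeffs = [-phi for phi in reversed(phis)] + [1.0]
    return bool(np.all(np.abs(np.roots(coeffs)) > 1.0))
```

For an AR(1), this reduces to $|\phi| < 1$: `ar_is_stationary([0.5])` is `True`, while `ar_is_stationary([1.0])` (a random walk) is `False`.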

We can express an AR(1) process as

  $Y_t = \frac{U_t}{1 - \phi L} = (1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + \ldots) U_t = U_t + \phi U_{t-1} + \phi^2 U_{t-2} + \phi^3 U_{t-3} + \ldots$.

Thus, an AR(1) process can be represented by an MA process of infinite order with particular restrictions on the moving-average coefficients.

An ARMA(p,q) process is said to be invertible if the roots of the MA polynomial,

  $1 + \theta_1 z + \theta_2 z^2 + \ldots + \theta_q z^q = 0$,

are greater than 1 in modulus, i.e., lie outside the unit circle. For an MA(1) process, this means that $|\theta| < 1$.

Note also that an MA(1) process can be expressed as

  $Y_t = (1 + \theta L) U_t = (1 - (-\theta) L) U_t$,

or

  $\frac{Y_t}{1 - (-\theta) L} = U_t$,  i.e.,  $(1 + (-\theta) L + (-\theta)^2 L^2 + (-\theta)^3 L^3 + \ldots) Y_t = U_t$,

so that

  $Y_t = \theta Y_{t-1} - \theta^2 Y_{t-2} + \theta^3 Y_{t-3} - \ldots + U_t = -\sum_{j=1}^{\infty} (-\theta)^j Y_{t-j} + U_t$.

That is, an invertible MA(1) process can be represented by an AR process of infinite order.
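The infinite AR representation can be verified numerically by truncating the sum: applying the weights $(-\theta)^j$ to the observed series recovers the white-noise shocks. A sketch with illustrative values ($\theta = 0.5$, 40 lags kept):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, T, J = 0.5, 5000, 40   # J = truncation point of the AR(infinity) sum
u = rng.normal(size=T)

# MA(1): Y_t = U_t + theta * U_{t-1}
y = u.copy()
y[1:] += theta * u[:-1]

# truncated inversion: U_t ~= sum_{j=0}^{J-1} (-theta)^j * Y_{t-j}
u_rec = np.zeros(T)
for j in range(J):
    u_rec[J:] += (-theta) ** j * y[J - j:T - j]
```

Since $|\theta| < 1$, the truncation error is of order $\theta^J$, negligible here; the recovered `u_rec` matches the true shocks to machine precision.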

Auto-covariance function for an AR(1) process

If $(1 - \phi L) Y_t = U_t$, then $\psi_j = \phi^j$, $j = 0, 1, \ldots$, and

  $\gamma(0) = \sigma^2 (1 + \phi^2 + \phi^4 + \phi^6 + \ldots) = \frac{\sigma^2}{1 - \phi^2}$,
  $\gamma(1) = \sigma^2 \phi (1 + \phi^2 + \phi^4 + \phi^6 + \ldots) = \frac{\phi}{1 - \phi^2} \sigma^2$,
  $\gamma(k) = \phi\, \gamma(k-1)$, $k \ge 1$.
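These closed forms can be cross-checked against the MA($\infty$) representation with weights $\psi_j = \phi^j$, since $\gamma(k) = \sigma^2 \sum_j \psi_{j+k} \psi_j$. A sketch with illustrative values ($\phi = 0.7$, $\sigma^2 = 1$; the function name is mine):

```python
import numpy as np

def ar1_autocov(phi, sigma2, k):
    """gamma(k) = phi^|k| * sigma2 / (1 - phi^2) for a stationary AR(1)."""
    return phi ** abs(k) * sigma2 / (1.0 - phi**2)

# cross-check via gamma(k) = sigma2 * sum_j psi_{j+k} * psi_j, psi_j = phi^j
phi, sigma2 = 0.7, 1.0
psi = phi ** np.arange(400)              # truncated MA(infinity) weights
gamma0_ma = sigma2 * (psi * psi).sum()
gamma1_ma = sigma2 * (psi[1:] * psi[:-1]).sum()
```

The truncated sums agree with the closed forms to numerical precision, and the recursion $\gamma(k) = \phi \gamma(k-1)$ holds by construction.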

Auto-covariance function for an MA(1) process

If $Y_t = (1 + \theta L) U_t$, then $\psi_0 = 1$, $\psi_1 = \theta$, $\psi_j = 0$ for $j > 1$, and

  $\gamma(0) = \sigma^2 (1 + \theta^2)$,
  $\gamma(1) = \theta \sigma^2$,
  $\gamma(k) = 0$, $k > 1$.

If the process is an MA(q), the auto-covariance function is zero for $k > q$.

Back to the PIH

  $\Delta c_t = \frac{r}{1+r} \sum_{j=0}^{\infty} \left(\frac{1}{1+r}\right)^j (E_t - E_{t-1}) y_{t+j}$,

the right-hand side being the innovation in permanent income. We need to know $(E_t - E_{t-1}) y_t$, $(E_t - E_{t-1}) y_{t+1}$, $(E_t - E_{t-1}) y_{t+2}$, ....

Assume that the income process is a covariance-stationary MA($\infty$) process:

  $Y_t = \theta(L) U_t = U_t + \theta_1 U_{t-1} + \theta_2 U_{t-2} + \theta_3 U_{t-3} + \ldots$.

Then

  $(E_t - E_{t-1}) y_t = u_t$,
  $(E_t - E_{t-1}) y_{t+1} = \theta_1 u_t$,
  $(E_t - E_{t-1}) y_{t+2} = \theta_2 u_t$,
  $(E_t - E_{t-1}) y_{t+3} = \theta_3 u_t$, etc.

For this income process,

  $\Delta c_t = \frac{r}{1+r} \left[ u_t + \frac{\theta_1}{1+r} u_t + \frac{\theta_2}{(1+r)^2} u_t + \frac{\theta_3}{(1+r)^3} u_t + \ldots \right] = \frac{r}{1+r} \left[ 1 + \frac{\theta_1}{1+r} + \frac{\theta_2}{(1+r)^2} + \frac{\theta_3}{(1+r)^3} + \ldots \right] u_t$.

Note that $1 + \frac{\theta_1}{1+r} + \frac{\theta_2}{(1+r)^2} + \frac{\theta_3}{(1+r)^3} + \ldots = \theta(L)\big|_{L = \frac{1}{1+r}}$. Thus,

  $\Delta c_t = \underbrace{\left[ \frac{r}{1+r}\, \theta\!\left(\frac{1}{1+r}\right) \right]}_{\text{MPC out of the shock}} u_t$.
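The MPC is just the MA polynomial evaluated at $L = 1/(1+r)$, scaled by $r/(1+r)$. A sketch (the function name is mine) that also illustrates two polar cases:

```python
def mpc_out_of_shock(thetas, r):
    """PIH MPC out of the income shock when y_t = theta(L) u_t:
    (r / (1+r)) * theta(1/(1+r)), with theta(L) = 1 + theta_1 L + ..."""
    beta = 1.0 / (1.0 + r)
    theta_at_beta = 1.0 + sum(th * beta ** (j + 1) for j, th in enumerate(thetas))
    return (r / (1.0 + r)) * theta_at_beta
```

With white-noise income (all $\theta_j = 0$) the MPC is the small annuity factor $r/(1+r)$; with (nearly) random-walk income ($\theta_j = 1$ for all $j$, truncated at a long horizon), $\theta(1/(1+r)) \to (1+r)/r$ and the MPC approaches 1.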

If $Y_t$ is some general ARMA(p,q) process, $\phi(L) Y_t = \theta(L) U_t$, so that $Y_t = \frac{\theta(L)}{\phi(L)} U_t$, it can be shown that

  $\Delta c_t = \frac{r}{1+r} \frac{\theta\!\left(\frac{1}{1+r}\right)}{\phi\!\left(\frac{1}{1+r}\right)} u_t$.

Note that the polynomial $\phi(L)$ may have a unit root.

Excess smoothness

Aggregate income in macro data is well fit by the following model:

  $\Delta y_t = \mu + \alpha \Delta y_{t-1} + u_t$
  $(1-L) y_t = \mu + \alpha (1-L) L y_t + u_t$
  $(1-L)(1 - \alpha L) y_t = \mu + u_t$
  $y_t = \frac{\mu}{(1-L)(1 - \alpha L)} + \frac{u_t}{(1-L)(1 - \alpha L)}$.

Utilizing our formula for consumption changes in accordance with the PIH,

  $\Delta c_t = \frac{r}{1+r} \cdot \frac{1}{\left(1 - \frac{1}{1+r}\right)\left(1 - \frac{\alpha}{1+r}\right)}\, u_t = \frac{1+r}{1+r-\alpha}\, u_t$.
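The implied MPC, $(1+r)/(1+r-\alpha)$, exceeds 1 whenever $\alpha > 0$; a tiny sketch (the function name and the value $\alpha = 0.3$ are mine, for illustration only):

```python
def mpc_aggregate(alpha, r):
    """PIH MPC when aggregate income follows (1-L)(1-alpha L) y_t = mu + u_t:
    (1 + r) / (1 + r - alpha)."""
    return (1.0 + r) / (1.0 + r - alpha)
```

With $r = 0.02$, $\alpha = 0$ gives an MPC of exactly 1 (the random-walk income case), while any positive $\alpha$ pushes it above 1.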

Excess smoothness, contd.

Note that $\alpha$ is estimated to be positive in the aggregate data. Thus, the MPC out of the shock to aggregate income should be greater than 1. The implication for the variances is that

  $var^{PIH}(\Delta c_t) = \left(\frac{1+r}{1+r-\alpha}\right)^2 \sigma_u^2 > \sigma_u^2$.

The variance of the innovation to the consumption change should be larger than the variance of the innovation to income. In the data, the reverse is true. This result is known as the excess smoothness of consumption.

PIH and structural income processes

Let's assume that the income process consists of two components, the permanent component and the transitory component:

  $y_t = \tau_t + w_t$
  $\tau_t = \mu + \tau_{t-1} + u^P_t$
  $w_t = \theta(L) u^T_t$,

where $\tau_t$ is the permanent (in macro: "trend") component and $u^P_t$ is the permanent shock; $w_t$ is the mean-reverting, stationary (in macro: "cycle") component and $u^T_t$ is the transitory shock. $u^P_t \sim iid(0, \sigma^2_{u^P})$, $u^T_t \sim iid(0, \sigma^2_{u^T})$, and $u^T_t$, $u^P_t$ are uncorrelated at all leads and lags.

We assume that a consumer is able to differentiate between the permanent shocks (e.g., due to such events as promotion/demotion, permanent disability, etc.) and transitory shocks (e.g., those emanating from temporary sickness, short spells of unemployment, bonuses, overtime, etc.) to his income. We want to predict the consumer s reaction to these distinct shocks assuming the PIH is true. 34 / 36

We can express income in terms of current and past shocks as

  $(1-L) y_t = \Delta \tau_t + \Delta w_t = \mu + u^P_t + (1-L)\theta(L) u^T_t$,

or

  $y_t = \tilde\mu + (1-L)^{-1} u^P_t + \theta(L) u^T_t$,

where $\tilde\mu = (1-L)^{-1} \mu$. The PIH implies:

  $\Delta c_t = \frac{r}{1+r} \left[ \left(1 - \frac{1}{1+r}\right)^{-1} u^P_t + \theta\!\left(\frac{1}{1+r}\right) u^T_t \right]$,

where $\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \ldots$, and $\theta\!\left(\frac{1}{1+r}\right) = 1 + \frac{\theta_1}{1+r} + \frac{\theta_2}{(1+r)^2} + \ldots$.

In micro data, $\Delta y_t \sim$ MA(2) (e.g., Abowd and Card 1989). Thus, $w_t \sim$ MA(1) and $\theta(L) = 1 + \theta L$. It follows that

  $\Delta c_t = u^P_t + \frac{r}{1+r}\left(1 + \frac{\theta}{1+r}\right) u^T_t = u^P_t + \frac{r}{1+r}\left(\frac{1+r+\theta}{1+r}\right) u^T_t$.

Meghir and Pistaferri (2004): $\hat\theta \in [0.17, 0.5101]$. Assume $r = 0.02$. For $\theta = 0.17$, $\frac{r}{1+r}\left(\frac{1+r+\theta}{1+r}\right) \approx 0.023$; for $\theta = 0.5101$, $\frac{r}{1+r}\left(\frac{1+r+\theta}{1+r}\right) \approx 0.029$, a very tiny fraction. Thus, the MPC out of the permanent shock is 1 and the MPC out of the transitory shock is at most about 0.03 (3 cents per $1).
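The two numbers on this slide follow directly from the formula; a sketch reproducing them (the function name is mine):

```python
def mpc_transitory(theta, r):
    """MPC out of the transitory shock when w_t = (1 + theta L) u^T_t:
    (r / (1+r)) * (1 + r + theta) / (1 + r)."""
    return (r / (1.0 + r)) * (1.0 + r + theta) / (1.0 + r)

r = 0.02
low = mpc_transitory(0.17, r)     # ~0.023
high = mpc_transitory(0.5101, r)  # ~0.029
```

Both values round to the slide's figures, confirming that under the PIH the consumer barely reacts to transitory shocks while responding one-for-one to permanent ones.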