Modelling Electricity Demand in New Zealand

Market performance enquiry
14 April 2014
Market Performance

Version control

Version    Date amended     Comments
1.0        15 April 2014    1st draft

Investigation stages

An in-depth investigation will typically be the final step of a sequence of escalating investigation stages. The investigations are targeted at gathering sufficient information to decide whether a Code amendment or market facilitation measure should be considered.

Market Performance Enquiry (Stage I): At the first stage, routine monitoring results in the identification of circumstances that require follow-up. This stage may entail the design of low-cost ad hoc analysis, using existing data and resources, to better characterise and understand what has been observed. The Authority would not usually announce it is carrying out this work. This stage may result in no further action being taken if the enquiry is unlikely to have any implications for the competitive, reliable and efficient operation of the electricity industry. In this case, the Authority publishes its enquiry only if the matter is likely to be of interest to industry participants.

Market Performance Review (Stage II): A second stage of investigation occurs if there is insufficient information available to understand the issue and it could be significant for the competitive, reliable or efficient operation of the electricity industry. Relatively informal requests for information are made to relevant service providers and industry participants. There is typically a period of iterative information-gathering and analysis. The Authority would usually publish the results of these reviews but would not announce it is undertaking this work unless a high level of stakeholder or media interest was evident.

Market Performance Formal Investigation (Stage III): The Authority may exercise statutory information-gathering powers under section 46 of the Act to acquire the information it needs to fully investigate an issue. The Authority would generally announce early in the process that it is undertaking the investigation and indicate when it expects to complete the work. Draft reports will go to the Board of the Authority for publication approval.

The outcome of any of the three stages of investigation can be a recommendation for a Code amendment, provision of information to a Code amendment process already underway, a brief report provided to industry as a market facilitation measure, or no further action. From the point of view of participants, repeated information requests are generally associated with Stage II: trying to understand the issue to such an extent that a decision can be made about materiality.

Contents

1 Introduction
2 Literature Survey
3 General Model
4 Methodology
    Unit Root
    VAR
    Cointegration
    Johansen Tests
    Pesaran and Shin ARDL bounds testing approach
    Weak Exogeneity
    VECM
    Deterministic Elements
    Lag Length
5 Overview of Data
6 The results of empirical analysis
    Integration
    The VAR System
    The Cointegration Analysis
    The Error Correction model
References

1 Introduction

1.1 This paper presents modelling of electricity demand in New Zealand. Electricity demand in New Zealand has flattened in recent years. A general-to-specific approach is adopted and electricity demand is modelled as a function of real income, the electricity price, population, the price of natural gas, weather, unemployment, and the budget share of electricity. The long-run equilibrium relationship is examined using cointegration analysis, and an error correction model is built to capture short-run dynamics and for forecasting.

1.2 The variables concerned are well modelled as stochastic trends, i.e., integrated of order 1, or I(1). Running a regression on nonstationary variables can result in spurious regression. Simple first differencing of the data removes the nonstationarity problem but also removes information on long-run relationships in the process, resulting in a loss of information about the long-run equilibrium relationships among the variables. The cointegration technique solves this filtering problem. If all or a subset of the variables are I(1), there may exist a linear combination of the variables which is stationary, I(0). Such stationary linear combinations indicate common stochastic trends, i.e. cointegration. The linear combination then expresses a long-run equilibrium relationship between the variables concerned and therefore, according to the Granger representation theorem (Engle and Granger, 1987), can be characterised as being generated through an error correction mechanism. Before the 1980s many economists ran linear regressions on detrended non-stationary time series data. Granger showed this to be a dangerous approach, since standard detrending techniques can result in data that are still non-stationary and could produce spurious correlation.

1.3 A range of methodologies for cointegration analysis is considered: the paper starts with the static Engle-Granger procedure, then the Johansen maximum likelihood cointegration tests, and the Pesaran et al. methodology allowing for a heterogeneous order of integration. The recent flattening of electricity demand could be a result of structural changes following the financial crisis of 2008, population growth, etc. In addition, access to natural gas might be another contributing factor. The paper first develops a long-run model using cointegration techniques and then builds an error correction model of electricity demand for short-run analysis. The long-run relationship yields an own-price elasticity that is negative and inelastic, a positive income elasticity, and a cross-price elasticity with natural gas that is positive but not linearly homogeneous. The error correction model incorporates the results of the cointegration analysis.

1.4 The next section presents a literature survey of electricity demand studies. Section three outlines the general model used for the analysis and section four describes the empirical methodology. The fifth section is an overview of the data used for model estimation and section six presents the results of the empirical analysis.

2 Literature Survey

2.1 In light of the non-stationarity of many economic variables, Engle and Granger (1987) pioneered the use of cointegration and error correction methodology, which was applied to forecasting electricity demand in Engle et al. (1989). The usual statistical inference is invalidated once a regression is run on a set of non-stationary variables. When the variables are cointegrated, a linear combination of the variables is stationary, producing statistically unbiased models.

2.2 The analysis of cointegration in a multivariate framework based on a vector autoregressive representation is studied by Johansen (1988) and further explored in Hendry and Juselius (2000, 2001). The Johansen maximum likelihood method provides two different likelihood ratio tests to determine the number of cointegrating equations: one based on the trace statistic and the other on the maximum eigenvalue. Cointegration and error correction models explicitly distinguish between long-run and short-run effects, and they have become standard tools in studying and forecasting electricity demand.

2.3 Bentzen and Engsted (1993) use a linear double logarithmic functional form with income, price and heating degree days as independent variables. In modelling electricity consumption per capita in Canada, Lariviere and Lafrance (1999) suggest that economic activity, demographic characteristics, meteorological factors and urban density are important explanatory variables in determining electricity consumption. Xiaohua and Zhenmin compare the shares of commercial energy consumption across different regions of rural China; the share increases with economic development, associated with access to different fuel sources and the construction of rural power supply networks. In Medlock and Soligo (2001), the effect of economic development on the sector and intensity of energy end-use is examined; the share of residential energy demand is found to rise below a certain level of income and to fall above that level. Polemis (2007) employs a multivariate cointegration technique and estimates an electricity demand function for Greece; the estimated long-run price and income elasticities are -0.85 and 0.85, while in the short run they are 0.61 and -0.35, respectively. Joutz and Silk (1997) analyse annual US residential electricity demand using cointegration and error correction models, reporting short- and long-run elasticities and forecasting demand. Numerous price and income elasticities have been reported, and Espey and Espey (2004) review these studies: price elasticities range from -0.076 to -2.01 in the short run and from -0.07 to -2.5 in the long run, and they find short-run income elasticities of 0.04 to 3.48 and long-run income elasticities from 0.02 to 5.74.

3 General Model

3.1 The following general model for electricity demand is formulated:

E = f(Income, the price of electricity, Population, the price of natural gas, Weather, Unemployment, Budget Share)

where E represents the demand for electricity. Total income is included in the model to measure the consumer's buying power. It determines how much the

consumer can afford to buy electricity. When income is low, the consumer must use only the electricity they can afford. Also, those with higher incomes tend to have larger houses than those with lower incomes, using more electricity for heating and cooling needs. Energy demand is highly influenced by weather and climate. With higher income, economic activity and purchases of electrical equipment increase, raising demand for electricity; more consumers purchase air conditioners, fans and electrical heating devices. Natural gas is a substitute for electricity, and the budget share is the relative share of income spent on the consumption of electricity.

3.2 Where electricity is a normal good, the own-price elasticity for electricity is expected to be negative and inelastic. The income elasticity is expected to be positive and more inelastic in the short run than in the long run. Higher disposable income increases consumption through greater economic activity and purchases of electricity-using appliances. The cross-price elasticity is expected to be positive since natural gas is a substitute for electricity: if the price of natural gas increases, consumers will substitute away from natural gas and increase their demand for electricity. As the share of income spent on electricity increases, it is likely that the consumption of electricity will decline in the short run.

3.3 General-to-specific modelling (Hendry (1986)) is employed to reveal the local data generating process (DGP). The goal is to discover which alternative theoretical views are tenable and to test them statistically. It characterises the properties of the sample data in simple parametric relationships that remain reasonably constant over time and that are interpretable in an economic sense. The approach sets up a general hypothesis about the relevant explanatory variables and the dynamic process (i.e. the lag structure of the model); the model is then narrowed down by testing for simplifications or restrictions on the general model. The following five steps are involved. First, the individual series are examined to better understand time series properties such as trends, patterns, and seasonal effects, and the order of integration is examined using diagnostic tests such as unit root and seasonal unit root tests. Second, the vector autoregressive (VAR) system is constructed and the optimal lag length is chosen; stability is tested and residual diagnostics are run. Third, cointegration analysis is run and, if there are any cointegrating relationships, they are identified. Fourth, weak exogeneity is tested, the appropriate cointegrating relationship is identified and the long-run equilibrium relationship is evaluated. Finally, incorporating the cointegrating relationships, the error correction model is specified and a reduced form model created through further tests of stability and residual diagnostics; short-run economic hypothesis testing is carried out.

4 Methodology

4.1 The first step in this analysis is to determine whether the variables are stationary in levels or integrated of order d, I(d), i.e., to discover how many times the variables need to be differenced to be stationary. Modelling with non-stationary variables can result in spurious relationships, whereas a combination of non-stationary variables can result in cointegration. Running a regression on non-stationary variables can result in nonsense correlation (Yule, 1926), with extremely high

correlation found between variables for which there is no causal explanation. Yule found that the coefficient of correlation between two variables is almost normally distributed when the variables are stationary, but becomes nearly uniformly distributed when the variables contain a unit root. Statistical inference tools such as Student's t, the F test and the R squared are no longer valid.

Unit Root

Stationary linear combinations tell us to which level a random-walk-like variable should converge given certain levels of the other variables in the cointegration space. Checking the order of integration is useful in discerning whether the levels or the first differences should be used in a cointegration and vector error correction analysis. The two tests used in this paper, the augmented Dickey-Fuller test and the Phillips-Perron test, have a unit root as the null hypothesis.

Augmented Dickey-Fuller Test

The Dickey-Fuller test estimates:

[1a] y_t = μ_a + ρ_a y_{t-1} + u_t
[1b] Δy_t = μ_a + γ_a y_{t-1} + u_t

where γ_a = ρ_a - 1 and u_t ~ iid(0, σ²). The one-sided null hypothesis H_0: γ_a = 0, the non-stationarity of y_t, is tested against the alternative H_1: γ_a < 0. Under the null, the t-value of γ_a follows a Dickey-Fuller distribution instead of a standard t-distribution. We will over-reject the null if standard critical t-values are used, since the critical values of the DF distribution are smaller (more negative). If the data contain a deterministic trend in levels, a deterministic trend has to be included in [1b]. Since the critical values of the DF distribution become smaller with each additional deterministic element, adding unjustifiable deterministic parameters results in under-rejection of the null. In this paper, the appropriate deterministic structure of the DF test equations is determined by visual inspection and economic judgement. For example, a deterministic trend is included for the log of real GDP in the DF test equation since economic growth is widely accepted as a normal phenomenon.

4.2 The following is the augmented Dickey-Fuller (ADF) test equation, which minimises the number of lagged first differences subject to freedom from residual autocorrelation:

[2] Δy_t = μ + δt + γ y_{t-1} + Σ_{j=1}^{l-1} ρ_j Δy_{t-j} + u_t

where l is the lag length of the model.

4.3 The choice of the appropriate lag length is important. Too few lags may result in over-rejecting the null when it is true, affecting the size of the test, while too many lags may reduce the power of the test.

4.4 The results from unit root tests need to be treated with caution when the alternative is a very persistent stationary process. Campbell and Perron (1991) show that in finite samples any trend-stationary process can be approximated arbitrarily well by a unit root process, and vice versa. There is a trade-off between size and power in unit root tests (Blough, 1992), and there is a high probability of falsely not rejecting the null when the true DGP is a nearly stationary process. As Harris (1995) argues, the unit root tests are used to assess whether the finite sample data used exhibit stationary or non-stationary attributes. The important question is to outline the appropriate inferential procedure rather than to classify time series into the unit root category or not (Cochrane, 1991). Hendry and Juselius (2000) state that even though a variable is stationary, but with a unit root close to unity […] it is often a good idea to act as if there are unit roots to obtain robust statistical inference.
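As an illustration of how an ADF regression of the form [2] can be run in practice, the following is a minimal sketch using Python's statsmodels. The file and column names are hypothetical placeholders, and the choice of deterministic terms and lag selection rule are assumptions for the example rather than the settings used in this paper.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical quarterly series of log electricity demand (replace with the real data).
e = pd.read_csv("electricity_demand.csv", index_col=0, parse_dates=True)["log_demand"]

# ADF regression with constant and trend, lag length chosen by minimising the AIC,
# mirroring equation [2]; regression="c" would drop the deterministic trend term.
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(e, regression="ct", autolag="AIC")
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}, lags used: {usedlag}")
print("Critical values:", crit)

# Repeat on the first difference to check whether the series is I(1) rather than I(2).
stat_d, pvalue_d, *_ = adfuller(e.diff().dropna(), regression="c", autolag="AIC")
print(f"ADF on first difference: {stat_d:.3f}, p-value: {pvalue_d:.3f}")
```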

4.5 The presence of seasonal unit roots is tested using HEGY-type tests. HEGY tests whether (1 - B^S) may be preferred to one of its components and is conducted by estimating the following regression:

Δ_4 Y_t = α + βt + Σ_{j=2}^{4} b_j Q_{jt} + Σ_{i=1}^{4} π_i W_{i,t-1} + Σ_{l=1}^{k} γ_l Δ_4 Y_{t-l} + ε_t

where Q_{jt} is a seasonal dummy and the W_{it} are given below.

W_{1t} = (1 + B)(1 + B²) Y_t
W_{2t} = (1 - B)(1 + B²) Y_t
W_{3t} = (1 - B)(1 + B) Y_t
W_{4t} = B(1 - B)(1 + B) Y_t = W_{3,t-1}

4.6 Tests are conducted for the null hypotheses π_1 = 0, π_2 = 0, and the joint null π_3 = π_4 = 0. For quarterly data, the null hypothesis of (1 - B^4) unit roots corresponds to a unit root at the zero frequency, a root at the semi-annual frequency and a pair of complex roots at the annual frequency. It is a joint test for long-run (or zero frequency) unit roots and seasonal unit roots. With HEGY, one can test for unit roots at some frequencies without assuming that there are unit roots at some or all of the other frequencies. When the seasonal components drift substantially over time, stationarity can be achieved by using the seasonal differencing operator, Δ_4 = (1 - B^4) in the quarterly case. The HEGY seasonal unit root test checks the necessity and validity of applying the seasonal differencing operator.

VAR

4.7 First, all the variables are modelled in a VAR system:

[3] x_t = Π_0 + Π_1 x_{t-1} + … + Π_k x_{t-k} + ε_t

where x is a vector of non-stationary I(1) variables and the Π_i are coefficient matrices at different lags. Π_0 is a vector of constant terms and could include other deterministic components such as shift or trend terms. The disturbances are assumed to be white noise. The error correction expression is:

[4] Δx_t = Π_0 + Γ_1 Δx_{t-1} + … + Γ_{k-1} Δx_{t-k+1} + Π x_{t-k} + ε_t

where Γ_i = -(I - Π_1 - … - Π_i), i = 1, …, k-1, and Π = -(I - Π_1 - … - Π_k).

4.8 The matrix Π is of reduced rank when the variables are cointegrated and can be partitioned as Π = αβ′. α is the matrix of speed-of-adjustment coefficients and β is the matrix of cointegrating vectors, i.e., long-run relationships. A series of regressions and reduced rank regressions are run to test for cointegration and to derive the maximum likelihood estimate of β.
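Referring back to the HEGY regression in paragraph 4.5, the sketch below constructs the transformed regressors W_1 to W_4 by hand and runs the test regression by OLS. The series name and the single augmentation lag are illustrative assumptions, and the resulting π statistics must be compared with HEGY critical values rather than standard t and F tables.

```python
import pandas as pd
import statsmodels.api as sm

def hegy_regression(y: pd.Series, k: int = 1):
    """HEGY test regression for a quarterly series y (sketch; compare the pi
    statistics with HEGY critical values, not standard t/F tables)."""
    df = pd.DataFrame({"d4y": y - y.shift(4)})            # Δ4 Y_t
    w1 = y + y.shift(1) + y.shift(2) + y.shift(3)         # (1+B)(1+B^2) Y_t
    w2 = y - y.shift(1) + y.shift(2) - y.shift(3)         # (1-B)(1+B^2) Y_t
    w3 = y - y.shift(2)                                   # (1-B)(1+B) Y_t
    df["W1"], df["W2"], df["W3"] = w1.shift(1), w2.shift(1), w3.shift(1)
    df["W4"] = w3.shift(2)                                # W4,t-1 = W3,t-2
    for l in range(1, k + 1):                             # augmentation lags of Δ4 Y
        df[f"d4y_lag{l}"] = df["d4y"].shift(l)
    quarters = pd.get_dummies(y.index.quarter, prefix="Q", drop_first=True)
    quarters.index = y.index
    df = df.join(quarters.astype(float))                  # seasonal dummies Q_2..Q_4
    df["trend"] = range(len(df))
    df = df.dropna()
    X = sm.add_constant(df.drop(columns="d4y"))
    return sm.OLS(df["d4y"], X).fit()

# Usage with a hypothetical quarterly series of log electricity demand:
# res = hegy_regression(log_demand, k=1)
# print(res.tvalues[["W1", "W2"]])          # t-tests of pi1 = 0 and pi2 = 0
# print(res.f_test("W3 = 0, W4 = 0"))       # joint test of pi3 = pi4 = 0
```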

Cointegration

4.9 Stationary linear combinations of I(1) variables are called cointegration relationships and are interpreted as long-run economic equilibrium relationships. They show how the variables move together in long-run equilibrium and correspond to hypotheses derived from theoretical considerations. The system is nonstationary but there are r cointegrating relationships among the variables, with rank(Π) = r < k. That is, r rows of Π are linearly independent and r linearly independent combinations of the {y_{it}} sequences are stationary. The cointegration relationship is determined by Π = αβ′. The loading matrix α is a (k × r) matrix of weights and measures the average speed of convergence towards the long-run equilibrium. β is a (k × r) matrix of parameters whose columns are the cointegrating vectors.

Johansen Tests

4.10 The Johansen approach does not impose the assumption of a unique cointegrating vector a priori. It efficiently estimates the short-run dynamics simultaneously with the long-run relationship, and restrictions on the cointegration space can be imposed and tested. The Johansen (1988) method estimates cointegrating relationships between non-stationary variables using a maximum likelihood procedure and tests for the number of distinct cointegrating vectors in a multivariate setting. There is an identification problem with cointegration: the Johansen approach only provides information on how many vectors span the cointegration space, but requires restrictions motivated by economic theory to ascertain unique vectors. The VAR system is a reduced form and the identified long-run relations (i.e. the cointegrating vectors) are in structural form. If the identification is based on economic theory, the cointegrating vectors assume the meaning of long-run equilibrium relationships.

4.11 After forming the VAR and then the VECM, as shown in the following section on the VECM, equation [20b] is separated into two regressions: one regressing Δz_t on the lagged differences (and deterministic terms), with residuals R_{0t}, and a second regressing z_{t-k} on the same regressors, with residuals R_{kt}. With no cointegrating relationship, the second regression adds no explanatory power to the VAR in first differences. Otherwise, with some of the variables cointegrating, the following regression contains a non-zero coefficient matrix:

[5] R_{0t} = Π R_{kt} + error = αβ′ R_{kt} + error

The product moment matrices of the residuals are:

[6] S_{ij} = T^{-1} Σ_{t=1}^{T} R_{it} R_{jt}′, i, j = 0, k

For a given β, the estimate of α is obtained from the ordinary least squares regression:

[7] α(β) = S_{0k} β (β′ S_{kk} β)^{-1}

Then β is found by solving the following eigenvalue problem:

[8] |λ S_{kk} - S_{k0} S_{00}^{-1} S_{0k}| = 0
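The reduced-rank steps in [5] to [8] can be sketched directly in Python for the simplest case of a single lag, so that the auxiliary regressions contain only a constant. The data names are placeholders; in practice a library routine such as statsmodels' coint_johansen, which also supplies critical values, would normally be used instead of this hand-rolled version.

```python
import numpy as np

def johansen_eigenvalues(z: np.ndarray):
    """Eigenvalues of the Johansen problem [8] for a VAR(1) in levels.

    z is a (T, p) array of I(1) series. With one lag there are no lagged
    differences, so R0 and Rk reduce to the demeaned Delta z_t and z_{t-1}.
    """
    dz = np.diff(z, axis=0)                    # Δz_t
    zlag = z[:-1, :]                           # z_{t-1}
    R0 = dz - dz.mean(axis=0)                  # residuals from regressing on a constant
    Rk = zlag - zlag.mean(axis=0)
    T = R0.shape[0]
    S00 = R0.T @ R0 / T                        # product moment matrices, eq [6]
    Skk = Rk.T @ Rk / T
    S0k = R0.T @ Rk / T
    Sk0 = S0k.T
    # Eq [8]: eigenvalues of Skk^{-1} Sk0 S00^{-1} S0k
    eigvals = np.linalg.eigvals(np.linalg.solve(Skk, Sk0 @ np.linalg.solve(S00, S0k)))
    lam = np.sort(eigvals.real)[::-1]
    trace = [-T * np.sum(np.log(1 - lam[r:])) for r in range(len(lam))]   # trace statistics
    return lam, trace

# Usage with a hypothetical (T, 4) array of log demand, GDP, electricity price and gas price:
# lam, trace = johansen_eigenvalues(np.column_stack([e, y, p, g]))
```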

4.12 The eigenvalues are put in decreasing order and the first r rows of the matrix of normalised eigenvectors corresponding to these eigenvalues are the maximum likelihood estimates of β. The value of the maximised likelihood function subject to rank r is:

[9] L_max^{-2/T} = |S_{00}| ∏_{i=1}^{r} (1 - λ_i)

and Johansen's test procedure is based on this.

4.13 The null and alternative hypotheses of the trace test are, respectively, H_0: λ_{r+1} = λ_{r+2} = … = λ_p = 0 and H_1: λ_{r+1} > 0. The trace statistic is:

[10] trs(r) = -T Σ_{i=r+1}^{p} ln(1 - λ_i)

and a sequence of tests starts with the case r = 0. With rejection of the null hypothesis, the alternative hypothesis becomes a pre-condition for the next test of all but the largest eigenvalue equalling zero. Upon another rejection of the null, the test goes on for all but the two largest eigenvalues, and so on. The process stops when the null cannot be rejected for the first time, and the number of eigenvalues that are not equal to zero is the number of cointegrating vectors in the system. The asymptotic distributions and the critical values are in Johansen (1988, 1991, 1994). Another rank test is the λ_max test, with the λ_max statistic defined as:

[11] λ_max(r) = -T ln(1 - λ_{r+1})

It tests whether λ_{r+1}, the next smaller neighbour of λ_r, equals zero given that λ_r does not. As before, it is first checked whether the biggest eigenvalue is significant, with r = 0. The process continues until the null is not rejected for the first time; the critical values are in Johansen (1988, 1991, 1994).

Pesaran and Shin ARDL bounds testing approach

The autoregressive distributed lag (ARDL) bounds testing approach to cointegration developed by Pesaran and Shin (1999) has received attention in recent years as a method to address the low power problem of unit root tests. This approach does not require the order of integration of the variables to be known prior to running the test, and it tests whether a level relationship between a dependent variable and the regressors exists where there is uncertainty as to whether the regressors are trend stationary or first difference stationary. Hence the pre-testing for unit roots can be omitted, and the significance of a long-run relationship is tested using critical value bounds, which are determined by two extreme cases: a lower bound where all variables are I(0) and an upper bound where all variables are I(1). Hence all possible combinations of orders of integration for the individual variables are covered. As the first step of the bounds testing approach, the unrestricted error correction model is estimated. Then an F-test of the joint hypothesis that the long-run multipliers of the lagged level variables are all equal to zero (no long-run relationship) is conducted against the alternative hypothesis that at least one long-run multiplier is non-zero.

4.14 An equation like the following is considered:

[12] ΔX_t = α + Σ_{i=1}^{k} ζ_i ΔX_{t-i} + Σ_{j=1}^{l} φ_j ΔY_{t-j} + β Y_{t-1} + γ X_{t-1} + ε_t

For this case:

[13] Δe_t = a_0 + a_1 t + Σ_{i=1}^{j} b_i Δe_{t-i} + Σ_{i=1}^{j} d_i Δy_{t-i} + Σ_{i=1}^{j} f_i Δp_{t-i} + τ_e e_{t-1} + τ_y y_{t-1} + τ_p p_{t-1} + ε_t

4.15 We test the null hypothesis τ_e = τ_y = τ_p = 0, the non-existence of a long-run relationship. The calculated F-statistic is compared with critical value bounds which depend on whether the variables are I(0) or I(1). If the null is rejected and there is a long-run relationship between e, y, and p, then y and p may be regarded as the forcing variables.

4.16 The Pesaran and Shin bounds test for cointegration is employed within an ARDL specification and has the advantage that it can be used with small sample sizes. The bounds test posits the null hypothesis of no cointegration through a joint significance test of the lagged level variables and computes Wald or F-statistics. Under the null hypothesis, the asymptotic distribution of the computed F-statistic is non-standard. Two sets of critical F-values, a lower bound and an upper bound, are given in Pesaran and Shin (1999) for large samples. Narayan (2004) reports critical F-values for sample sizes ranging from 30 to 80. If the calculated F-statistic lies above the upper bound, the null hypothesis of no cointegration is rejected, while the null hypothesis cannot be rejected if it lies below the lower bound. If the F-statistic lies between the bounds, the inference is inconclusive.

Weak Exogeneity

4.17 A variable is weakly exogenous to the system if disequilibrium changes (when the cointegrating variables move away from their long-run level, i.e., the error correction term moves away from zero) do not affect the variable, that is, the variable does not respond when β′z_{t-1} ≠ 0. This means that there is no feedback from the disequilibrium back onto the level of this variable. A weakly exogenous variable can be considered as given without losing information for inference, according to Engle, Hendry, and Richard (1983).

4.18 Since the variables are found to be weakly exogenous, a partial system analysis involving only the demand equation is sufficient to estimate the long-run parameters of the demand curve (i.e., the marginal distributions of the variables do not contain any additional relevant information). Where weak exogeneity is found for some of the variables, a partial VAR system is estimated in which electricity demand is modelled conditional on the variables y, p, and g, and the cointegrating vectors are estimated with r = 1. Weak exogeneity is tested by checking restrictions on the loading coefficients. If electricity demand exceeds its long-run counterpart, then there would be negative loading coefficients for RGDP and natural gas and positive loading coefficients for the electricity price, because the stabilising reaction would be increasing in RGDP and natural gas and decreasing in the electricity price.

4.19 Two necessary conditions for weak exogeneity are: (i) the economically interesting long-run coefficients contained in β are determined only by the conditional model, not by the marginal model; and (ii) the parameters in the conditional and in the marginal model must not be subject to the same restrictions, which is fulfilled by Gaussian errors.
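A minimal sketch of the bounds-test F statistic described in paragraphs 4.14 to 4.16: an unrestricted error correction equation of the general shape of [13] is estimated by OLS and the lagged levels are tested jointly. The file name, variable names and lag counts are illustrative assumptions, and the F statistic must be compared with the Pesaran and Shin (or Narayan small-sample) bounds rather than standard F critical values.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical CSV with quarterly log series e (demand), y (GDP), p (electricity price).
df = pd.read_csv("demand_data.csv", index_col=0, parse_dates=True)[["e", "y", "p"]]

d = df.diff().add_prefix("d_")                           # first differences
data = pd.concat([df.shift(1).add_suffix("_lag1"), d], axis=1)
for k in range(1, 3):                                    # two lags of the differences, illustrative
    data = data.join(d.shift(k).add_suffix(f"_l{k}"))
data["trend"] = range(len(data))
data = data.dropna()

# Unrestricted ECM: Δe on its own lagged differences, lagged Δy, Δp and the lagged levels.
uecm = smf.ols(
    "d_e ~ trend + d_e_l1 + d_e_l2 + d_y_l1 + d_y_l2 + d_p_l1 + d_p_l2"
    " + e_lag1 + y_lag1 + p_lag1",
    data=data,
).fit()

# Bounds test: joint hypothesis tau_e = tau_y = tau_p = 0 on the lagged levels.
print(uecm.f_test("e_lag1 = 0, y_lag1 = 0, p_lag1 = 0"))
```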

4.20 Johansen (1992b) decomposes the VECM [20c] into a conditional and a marginal model. We decompose the vector z_t of I(1) variables into the vector of exogenous variables, x_t, and the vector of endogenous variables, y_t, so that z_t = [y_t′ x_t′]′. The conditional model with lag length l = 2 is the following VECM:

[14] Δy_t = ω Δx_t + (Γ_{y1} - ωΓ_{x1}) Δz_{t-1} + (α_y - ωα_x) β′z_{t-1} + u_{yt} - ω u_{xt}

4.21 The marginal model uses the VECM [20c] in the next section and gives:

[15] Δx_t = Γ_{x1} Δz_{t-1} + α_x β′z_{t-1} + u_{xt}

The coefficient ω combines the cross-covariance between the conditional and the marginal model errors with the inverse of the error covariance matrix of the marginal model:

[16] ω = Ω_{yx} Ω_{xx}^{-1}

4.22 Estimating the above system is equivalent to estimating [20c], and testing for weak exogeneity consists of checking whether all of the loading coefficients in the marginal model are zero (α_{x,j} = 0 for j = 1, …, r).

4.23 The null hypothesis of the test for weak exogeneity is:

[17] H_0: α_x = 0

The likelihood-ratio type test statistic is:

[18] T Σ_{j=1}^{r} ln{ (1 - λ*_j) / (1 - λ_j) }

where λ*_j are the r non-zero eigenvalues under the null and λ_j are those of the unrestricted regression; the statistic is distributed as χ²(r · p_x).

4.24 Row restrictions are placed in order to test [17] by specifying a matrix A of linear restrictions which reduces α to the matrix α_0 of rows that are non-zero under the null. The null hypothesis is then:

[19a] H_0: α = Aα_0

Alternatively,

[19b] H_0: B′α = 0

where B is orthogonal to A.

4.25 The number of equations in the VECM is reduced by the number of variables found to be weakly exogenous, and the adjustment processes become less complex. Bivariate cointegration methods as proposed by Engle and Granger (1987) are only valid if the explanatory variables are weakly exogenous for all parameters of interest.

VECM

4.26 A vector error correction model based on the procedure developed by Johansen (1988, 1991) is used to model the demand for electricity. First, a multivariate vector autoregressive (VAR) model of the levels of the I(1) variables at lag length l is formed:

[20a] z_t = A_1 z_{t-1} + A_2 z_{t-2} + … + A_l z_{t-l} + μ + δt + u_t

with u_t being the (n × 1) vector of independently normally distributed errors, μ the (n × 1) vector of constant terms, δ the (n × 1) coefficient vector of a linear deterministic time trend, and A_i the (n × n) matrices of coefficients. This model is transformed to the VECM:

[20b] Δz_t = Γ_1 Δz_{t-1} + … + Γ_{l-1} Δz_{t-l+1} + Π z_{t-l} + μ + δt + u_t

where Γ_i = -(I - A_1 - … - A_i), i = 1, 2, …, (l-1), and Π = -(I - A_1 - … - A_l), which is further transformed into the VECM:

[20c] Δz_t = Σ_{i=1}^{l-1} Γ_i Δz_{t-i} + Π z_{t-1} + μ + δt + u_t

where now Γ_i = -(A_{i+1} + … + A_l), i = 1, 2, …, (l-1), and Π = -(I - A_1 - … - A_l) as before. z is the (T × n) matrix of I(1) variables (T being the number of observations, n the number of I(1) variables) and the Γ_i are the (n × n) matrices of short-run coefficients on the first differences of the variables.

4.27 For the time being, it is assumed that all variables are endogenous. In order to formulate economic hypotheses, restrictions are placed on Π. The short-run part of the model is of reduced form, and the (r × 1) vector β′z_{t-1} represents the r cointegration relationships, each of which equals zero in equilibrium. z consists of e, y, p, and g in this analysis. The difference between the predicted and the actual values of e_t represents the disequilibrium error, or the error correction term, EC_t:

[21] EC_t = e_t - β_0 - β_1 y_t - β_2 p_t - β_3 g_t

4.28 The term EC_t will be I(0) and is included in a VECM equation with the variables e, y, p, and g in first differences. The equation residuals are tested for good behaviour in terms of normality, serial correlation, and heteroscedasticity.

4.29 The Johansen procedure is employed in this analysis because there are limitations to the single-equation error-correction model (SEECM) approach. First, where there is more than one cointegrating vector, estimating a SEECM with x_t as the left-hand variable would only capture the first row of the Π matrix, Π_1. For example, where z is of dimension T × 4 and there are two cointegration relationships (r = 2), neither of these can be distinguished from the estimation, since Π_1 is a linear combination of both vectors. Π_1 z_{t-1} is:

Π_1 z_{t-1} = [(α_11 β_11 + α_12 β_12), (α_11 β_21 + α_12 β_22), (α_11 β_31 + α_12 β_32), (α_11 β_41 + α_12 β_42)] [e_{t-1}, y_{t-1}, p_{t-1}, g_{t-1}]′
            = α_11 (β_11 e_{t-1} + β_21 y_{t-1} + β_31 p_{t-1} + β_41 g_{t-1}) + α_12 (β_12 e_{t-1} + β_22 y_{t-1} + β_32 p_{t-1} + β_42 g_{t-1})

4.30 As can be seen in the above equation, the two cointegrating vectors cannot be separated out. Secondly, some of the right-hand side variables may not be weakly exogenous. In running a SEECM, there is a loss of information in the determination of variables that are not weakly exogenous, resulting in consistent but inefficient estimates of the long-run coefficients, because the elements of the cointegrating vector β have a higher variance than in the VECM. Despite consistency, there is a bias in small samples. The Johansen procedure directly determines the number of cointegrating vectors. Identifying restrictions correspond to the economically meaningful structure imposed, and the weak exogeneity of variables can be formally checked.
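As an illustration of how a VECM of the form [20c] might be estimated in Python, the sketch below uses statsmodels' VECM class. The data file, the single cointegrating relation, five lagged differences and centred quarterly seasonal dummies mirror settings used later in this paper but are shown here only as an assumed example, not as the Authority's estimation code.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Hypothetical quarterly DataFrame of the I(1) log series e, y, p, g.
z = pd.read_csv("nz_demand_system.csv", index_col=0, parse_dates=True)[["e", "y", "p", "g"]]

# VECM with 5 lagged differences (a VAR(6) in levels), one cointegrating vector,
# a constant inside the cointegration relation and centred quarterly seasonal dummies.
model = VECM(z, k_ar_diff=5, coint_rank=1, deterministic="ci", seasons=4)
res = model.fit()

print(res.beta)    # cointegrating vector(s): the long-run relationship
print(res.alpha)   # loading (speed-of-adjustment) coefficients
print(res.gamma)   # short-run coefficients on the lagged differences
```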

Deterministic Elements

4.31 Five different specifications are considered. The first sets μ = 0 and δ = 0. Model 2 sets μ ≠ 0 and δ = 0, with the intercept appearing only in the cointegration space and μ not accounting for a linear trend in the data. Decomposing μ into:

[22] μ = αμ_1 + α_⊥ μ_2

and then imposing μ_2 = 0, where α_⊥ is the matrix orthogonal to α defining the space of the common stochastic trends and μ_2 is the intercept of the once-differenced data; restrictions on the deterministic terms correspond to restrictions on parts of the Π matrix. Model 3 leaves μ unrestricted (μ_1 ≠ 0, μ_2 ≠ 0), allowing for a linear trend in the data, with δ = 0. In Model 4, δ ≠ 0 and μ is unrestricted, but the deterministic trend is limited to the cointegration space: δ_2 = 0 after a decomposition of δ analogous to [22]. In Model 5, all of μ_1, μ_2, δ_1, δ_2 are allowed to differ from zero, which accounts for a quadratic trend in the data. Model 5 is ruled out, and Models 1 and 2 are too restrictive.

Lag Length

4.32 The optimal lag length is determined using information criteria, subject to achieving Gaussian residuals. Additional lags improve the fit of the model, while a parsimonious parameterisation leaves more degrees of freedom; the information criteria resolve this trade-off. The goodness of fit is measured by the variance-covariance matrix Ω and over-parameterisation is penalised; over-parameterisation is more costly the smaller the number of observations. The Akaike information criterion (AIC) is defined as (Akaike 1973):

[23] AIC = ln|Ω| + 2κ/T

The Schwarz information criterion (SC) is defined as (Schwarz 1978):

[24] SC = ln|Ω| + κ ln(T)/T

4.33 The dynamic specification can affect the size and power of the cointegration tests. The lag length is determined through the minimisation of the information criteria. The Akaike information criterion tends to over-reject the null by producing under-parameterised test equations and failing to eliminate significant MA components. Schwert's rule, on the other hand, includes insignificant lags, leading to a loss of efficiency, although it keeps the size of the test close to its nominal value. It is advisable to avoid overloading the test equation with unnecessary parameters, especially in small samples, since the trade-off between power and size is then especially strong and unnecessary loss of power should be avoided. This decision mechanism cannot be used when the residuals are autocorrelated, heteroscedastic, or not normally distributed; in such cases, adding one or more additional lags might solve the problem.
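A sketch of how the information-criterion comparison in [23] and [24] can be automated with statsmodels' VAR lag-order selection. The data file, the maximum lag of six and the omission of seasonal dummies are assumptions made for brevity.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly DataFrame of the log series e, y, p, g.
z = pd.read_csv("nz_demand_system.csv", index_col=0, parse_dates=True)[["e", "y", "p", "g"]]

order = VAR(z).select_order(maxlags=6)   # tabulates AIC, BIC (SC), FPE and HQIC for each lag
print(order.summary())
print(order.selected_orders)             # lag chosen by each criterion, e.g. {"aic": 6, "bic": 2, ...}

res = VAR(z).fit(order.selected_orders["aic"])   # fit the VAR at the AIC-chosen lag length
```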

5 Overview of Data

5.1 e is the demand for electricity, y is GDP, p is the price of electricity, g is the natural gas price, w is weather, b is budget share, u is unemployment, and n is population.

5.2 Electricity demand is total energy supplied in GWh. It represents wholesale demand and has flattened since 2008, as shown in Figure 1 below.

Figure 1. Log of electricity demand in New Zealand

Figure 2. Log of real GDP

Figure 3. Log of electricity price

5.3 The sample is from 2000 Q1 through to 2013 Q3. Electricity demand shows a strong seasonal pattern with one spike per year, in winter. The general increase in the demand for electricity halted around 2008 and has been replaced by flatter demand since then.

5.4 The retail electricity price is in cents per kilowatt-hour and the data is a QSDEP series from the Ministry of Business, Innovation, and Employment. The data series released in May 2014 goes back to the first quarter of 2004 and includes discounts offered by retailers. This dataset is spliced with the earlier release of the QSDEP, which goes back to 2000. The price series has shown steady growth over the sample period, with average growth of 0.6% per quarter. Electricity consumption grew by about 9% over the 13-year period while real GDP grew by about 15%.

5.5 The GDP series is obtained from Statistics New Zealand. Real GDP grows on average at about 0.3% per quarter.

5.6 The resident population series from Statistics New Zealand grew steadily at about 0.3% per quarter, from 3.8 million to 4.5 million over the sample period.

5.7 The natural gas series is in dollars per gigajoule and is from the Ministry of Business, Innovation, and Employment. The natural gas price has shown a steady increase over the years, with the increase flattening at around 2010 while the electricity price continued to rise.

5.8 Temperature data are the nationwide average temperature reported on NIWA's website. It shows no trend and strong seasonality.

5.9 The budget share of electricity in income reflects the opportunities for fuel use, energy-using appliances, heating and cooling needs, and changing demographics and income. The increasing trend in budget share steadied at around 2010.
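A brief sketch of the kind of data preparation described above (splicing the two QSDEP price releases, taking logs and computing average quarterly growth). All file and column names are hypothetical placeholders, and the ratio splice shown is one common approach; the paper does not specify the exact splicing method used.

```python
import numpy as np
import pandas as pd

# Hypothetical CSVs: the May 2014 QSDEP release (from 2004Q1) and the earlier release (from 2000Q1).
new = pd.read_csv("qsdep_2014_release.csv", index_col=0, parse_dates=True)["price_c_per_kwh"]
old = pd.read_csv("qsdep_earlier_release.csv", index_col=0, parse_dates=True)["price_c_per_kwh"]

# Ratio splice: scale the earlier series so the two releases match at the first quarter of the
# new release, then use the new release wherever it is available.
factor = new.iloc[0] / old.loc[new.index[0]]
price = pd.concat([old[old.index < new.index[0]] * factor, new])

log_price = np.log(price)                      # series enter the model in logs
avg_growth = price.pct_change().mean()         # average quarterly growth rate
print(f"Average quarterly growth: {avg_growth:.1%}")
```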

6 The results of empirical analysis

Integration

6.1 In this section, the order of integration of each of the variables is determined; if they are integrated of order 1, the usual cointegration methods can be employed to characterise the long-run equilibrium relationship between the variables. Table 1 and Table 2 summarise the results of the unit root tests, reporting augmented Dickey-Fuller (1981) statistics and Phillips-Perron statistics with p-values in parentheses. Table 1 tests the null hypothesis that the series contains a unit root (is non-stationary); Table 2 tests the null hypothesis that the first difference of the series contains a unit root. The variables are tested in levels, then first differenced and tested again.

Table 1: ADF and PP Test Statistics for Unit Root I(1) (levels)

             e            y            p            g
ADF      -2.272725    -1.316726    -0.739516    -1.953603
         (0.1845)     (0.6155)     (0.8272)     (0.3060)
PP       -5.876716    -1.307485    -0.234915    -1.953603
         (0.0000)     (0.6198)     (0.9271)     (0.3060)

Table 2: ADF and PP Test Statistics for Unit Root I(2) (first differences)

             e            y            p            g
ADF      -14.30713    -6.847754    -7.425698    -6.743492
         (0.0000)     (0.0000)     (0.0000)     (0.0000)
PP       -20.80316    -6.846126    -7.425036    -6.756074
         (0.0001)     (0.0000)     (0.0000)     (0.0000)

6.2 The null hypothesis of a unit root is not rejected for any of the variables in levels, except when the PP test is applied to electricity demand. The null of a unit root is rejected for the first differences of all the series.

6.3 There are conflicting results between the ADF and PP tests for electricity demand. As noted above, the results of unit root tests need to be treated with caution when the alternative is a very persistent stationary process. Hence, we proceed as if all variables contain a unit root, i.e., they are stationary in first differences.

6.4 Due to the seasonal patterns observed in the electricity demand series, seasonal unit root tests are run. A HEGY and Franses (1991) type test is run to test for seasonal unit roots and the results are summarised in Table 3.


Table 3: HEGY test for Seasonal Unit Root

Null Hypothesis            e
π1 = 0                -2.785974
π2 = 0                -2.748078
π3 = π4 = 0            3.930263

6.5 This test determines whether the seasonal pattern is constant and can be characterised as deterministic, or whether it should be characterised as stochastic. Deterministic seasonality can be modelled by including seasonal dummy variables, while stochastic seasonal patterns necessitate seasonal differencing.

6.6 In the HEGY-type test, one-sided t-tests are first performed on π1 and π2. The coefficients π3 and π4 are then tested jointly under the null hypothesis that π3 = π4 = 0. Seasonal unit roots are present when π3 = π4 = 0. When the null hypothesis π1 = 0 cannot be rejected, this indicates a unit root at the zero frequency, analogous to the ADF and PP tests above.

6.7 The null of a seasonal unit root is rejected for all the pairs and multiples of coefficients. The joint test of π3 = π4 = 0 rejects the null of a seasonal unit root. Overall, there are no signs of seasonal unit roots.

6.8 Modelling with seasonal dummy variables is therefore appropriate, and I model the seasonality in the data by adding seasonal dummy variables to net out the average change in a variable resulting from seasonal fluctuations. The dummy variables pick out and control for seasonal variation in the data. Johansen (1995, p.85) suggests using centred (orthogonalised) seasonal dummy variables, which only shift the mean without contributing to the trend. Another point to note from the HEGY results is that the null of π1 = 0 is rejected for each of the series, indicating no signs of a unit root; however, we take the result from the ADF tests and still conclude that the series are non-stationary, integrated of order 1.

The VAR System

6.9 A four-variable system with a sample period ranging from 2000Q1 to 2013Q3 is set up. The variables in the VAR are the demand for electricity, the price of electricity, real GDP, and the price of natural gas. The model includes an intercept and centred seasonal dummies. The lag length is chosen based on the information criteria AIC and SC. The results of the sequential reduction are shown in Table 4.

Table 4: Information Criteria for the Sequential Reduction

Lag length        AIC           SC
2             -30.78392     -28.40470
3             -31.11591     -28.11399
4             -31.13005     -27.49367

5             -31.51040     -27.22747
6             -31.89148     -26.94958

6.10 Reducing the VAR to a reasonable length will increase the power of the Johansen procedure. A lag length of 6 is chosen.

6.11 The residual diagnostics show no evidence of serial correlation in the residuals. Recursive analysis was performed on the system and it was found to be relatively stable. The results are presented in Table 5 and in Figures 4 and 5 below.

Table 5: Residual Diagnostics: Unrestricted VAR (p-values in brackets)

LM Serial Correlation test (Lag 1)    LM-stat        16.75976    [0.4013]
LM Serial Correlation test (Lag 2)    LM-stat        13.25128    [0.6543]
LM Serial Correlation test (Lag 3)    LM-stat         8.36807    [0.9160]
LM Serial Correlation test (Lag 4)    LM-stat        23.85830    [0.0926]
LM Serial Correlation test (Lag 5)    LM-stat        23.58755    [0.0989]
LM Serial Correlation test (Lag 6)    LM-stat        14.96807    [0.5270]
LM Serial Correlation test (Lag 7)    LM-stat         8.343398   [0.9380]
Normality Test                        Jarque-Bera     3.542265   [0.8959]
Heteroscedasticity Test               F-stat          1.069765   [0.4546]
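A sketch of how residual diagnostics of the kind reported in Table 5 could be run on the fitted VAR with statsmodels. The data file is a placeholder; statsmodels provides a portmanteau whiteness test and a Jarque-Bera normality test, which correspond in spirit to the LM serial correlation and normality rows of Table 5, although the exact statistics above come from a different implementation, and the recursive Chow tests in Figures 4 and 5 would need to be coded separately.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly DataFrame of the log series e, y, p, g.
z = pd.read_csv("nz_demand_system.csv", index_col=0, parse_dates=True)[["e", "y", "p", "g"]]
res = VAR(z).fit(6)                                  # VAR(6), the chosen lag length

print(res.test_whiteness(nlags=12).summary())        # residual autocorrelation (portmanteau)
print(res.test_normality().summary())                # multivariate Jarque-Bera test
```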

Figure 4. One-step-ahead Chow test (one-step probability and recursive residuals, 2009 Q3 to 2013 Q3)

Figure 5. N-step-ahead Chow test (n-step probability and recursive residuals, 2009 Q3 to 2013 Q3)

The Cointegration Analysis

6.12 Two or more non-stationary variables are cointegrated and share common stochastic trends if they are each integrated of the same order and their linear combination is of a lower order. When the variables are cointegrated, the economic system converges to a long-run equilibrium and the variables cannot

diverge indefinitely from the equilibrium state. They will eventually be re-attracted toward the long-run equilibrium since the variables are linked together over time. Stationary linear combinations tell us to which level an otherwise random-walk-like variable should converge given certain levels of the other variables in the cointegration space. With the VAR specified in the previous section, we identify possible cointegrating vectors using the Johansen test. The Johansen (1988) procedure computes the eigenvalues of the matrix Π and identifies the rank of Π. Using a lag length of six, the results of the test are summarised in Table 6.

Table 6: Cointegration Analysis of the Electricity Demand

Statistic          r = 0        r ≤ 1        r ≤ 2        r ≤ 3
Eigenvalue        0.830914     0.691425     0.288509     0.039181
λ trace         160.0079      74.69533     18.25733      1.918516
p-value           0.0000       0.0000       0.0187       0.1660
Eigenvalue        0.830914     0.691425     0.288509     0.039181
λ max            85.31261     56.43800     16.33881      1.918516
p-value           0.0000       0.0000       0.0232       0.1660

                          y            p            g
Loading Coefficient   -0.000507    -0.001224    -0.000640

6.13 The null hypotheses are that there are at most r = 0, 1, 2, or 3 cointegrating vectors. If the rank is full, i.e., rank(Π) = p, all the variables are stationary. With rank(Π) = 0, there are no cointegrating vectors among the variables. Finally, when rank(Π) = r < p, there are r possible cointegrating vectors. Π can be specified as Π = αβ′, where α represents the matrix of speed-of-adjustment or feedback coefficients and β is the matrix of cointegrating vectors or long-run relationships. Both the λ max and λ trace tests reject the nulls of r = 0, r ≤ 1 and r ≤ 2. The first null that cannot be rejected at the 5% level is r ≤ 3. It must be noted, though, that the Johansen procedure tends to over-reject the null in small samples. Also, the trace test is more robust than the λ max test with respect to both skewness and excess kurtosis in the residuals. Both the λ max and λ trace test statistics indicate that there are three cointegrating vectors.

6.14 The Pesaran and Shin bounds test statistic is greater than the upper bound value, indicating that there is a long-run relationship between the variables. The F-statistic for the Wald test is 12.52500, exceeding the upper bound of the set of critical values. The t-statistic for e(-1) is -3.195018, for which the bounds test is inconclusive as to the existence of cointegration.

6.15 Table 7 shows individual variable significance and Table 8 shows the results of the test for weak exogeneity.

Table 7: Statistics for Testing the Significance of a Given Variable

Variable       e           y           p           g
χ2          18.03980    18.22284    17.78892    12.33583
p-value     (0.0000)    (0.0000)    (0.0000)    (0.0010)

Table 8: Weak Exogeneity Test

Variable       e           y           p           g
χ2          86.97        0.26        1.49        3.09
p-value     < 0.0001     0.6107      0.2225      0.0789

6.16 The test of individual variable significance sets each beta to zero in turn so that the significance of each variable can be established. The variables e, p, y, and g all appear significant.

6.17 The weak exogeneity test has the null hypothesis that each of the series does not respond to disturbances or shocks in the cointegration space. Under the null hypothesis, the series is unresponsive to deviations from the long-run relationships. In identifying weak exogeneity, we test whether a given α, indicating feedback from the cointegrating vector, is zero. If a variable is found to be weakly exogenous, we can simplify the model and make inference on the conditional model without loss of information.

6.18 There is strong evidence of weak exogeneity for all variables except electricity consumption. A joint test of weak exogeneity (the α's associated with the variables p, y, and g set to zero) gives a likelihood ratio statistic of 4.02. We can therefore reduce the problem to a single-equation ECM and interpret the cointegrating relation as electricity demand. This means that for electricity demand, the long-run relationships in the data are important.

6.19 The simplified model of the cointegrating vector is:

e = 11.2925 + 1.2302y - 0.4061p + 0.6967g

6.20 All coefficients have the expected signs. The positive sign of the y coefficient indicates a positive income elasticity of electricity demand. The price has a negative elasticity, as expected, and the cross-price elasticity of natural gas is positive, since natural gas and electricity are substitutes. The sizes of the elasticity estimates are reasonable and consistent with previous research. The income elasticity is greater than 1, which is concerning, but it falls within the range of income elasticities documented by Espey and Espey (2004) from their review of 36 peer-reviewed studies published between 1971 and 2000; they report short-run income elasticities ranging from 0.04 to 3.48 and long-run elasticities of 0.02 to 5.74. Looking at the loading coefficients, they are all negative, indicating that when electricity demand is above the long-run equilibrium level, the negative loading coefficient stabilises the system and brings it back to equilibrium. The loading coefficients for all the variables concerned are quite small, showing signs of weak exogeneity in the variables; inference based on a single equation will not incur much loss of information. The long-run elasticities are: for

the price of electricity -0.406, for income 1.230, and for the price of natural gas 0.697.

The Error Correction model

6.21 The short-run relationship among the variables is analysed using an error correction model. It restricts the long-run behaviour of the endogenous variables to converge to their cointegrating relationship while allowing a wide range of short-run dynamics. The deviation from long-run equilibrium is corrected gradually through a series of partial short-run adjustments.

6.22 The error correction representation has the advantage of modelling both levels and first differences: the dynamics of both the short run (changes) and the long run (levels) are modelled simultaneously. In the error correction model equation, all the variables that appear are stationary if the variables are cointegrated. The variation of y_t depends on the variation of the exogenous variable x_t and also on the disequilibrium present in the system at time t-1, as captured in the error correction term. Short-run electricity demand is captured by the deviation from the long-run mean (the equilibrium relation in the error correction term), and the speed of adjustment gives the responsiveness back to the long-run equilibrium. The speed of adjustment parameter α must lie between 0 (no adjustment) and 1 (perfect adjustment in one period).

6.23 We take the original VAR and first difference each variable:

[26] Δe_t = α + Σ_{i=1}^{6} β_i Δe_{t-i} + Σ_{i=0}^{6} δ_i Δx_{t-i} + γ ECM_{t-1} + Σ_{i=0}^{2} μ_i s_{t-i} + Σ_{i=0}^{1} φ_i w_{t-i} + η b_{t-1} + ε_t

where x_{t-i} represents the vector of variables y, p, and g; ECM_{t-1} is the equilibrium correction term from the cointegrating vector; w_{t-i} is the weather variable; b_{t-1} represents the budget share spent on electricity consumption, lagged one period; and s_{t-i} are centred seasonal dummies. Table 9 summarises the error correction model results. This is an ECM with six lags.

Table 9: Error Correction Model

             Coefficient    Std. Error     t-value
D(e(-2))      -0.324488      0.23107      -1.40432
D(y(-1))       1.318014      0.39023       3.37757
D(p(-1))      -0.254530      0.40267      -0.63210
Coint eq      -0.003614      0.00047      -7.68706
n              2.718179      1.38090       1.96841
w             -0.014721      0.06512      -0.22607
b              0.312111      0.06174       5.05540
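As a final illustration, a single-equation error correction regression of the general shape of [26] can be estimated by OLS once the error correction term has been constructed from the long-run relationship. The file and variable names, the lag choices and the use of the cointegrating vector reported in paragraph 6.19 are assumptions for the sketch, not a reproduction of the Authority's estimation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly DataFrame with log series e, y, p, g plus weather w and budget share b.
df = pd.read_csv("nz_demand_full.csv", index_col=0, parse_dates=True)

# Error correction term built from the long-run relation in paragraph 6.19.
df["ecm"] = df["e"] - (11.2925 + 1.2302 * df["y"] - 0.4061 * df["p"] + 0.6967 * df["g"])

d = df[["e", "y", "p", "g"]].diff().add_prefix("d_")
data = df.join(d)
data["d_e_l2"] = data["d_e"].shift(2)          # illustrative lags, echoing the rows of Table 9
data["d_y_l1"] = data["d_y"].shift(1)
data["d_p_l1"] = data["d_p"].shift(1)
data["ecm_l1"] = data["ecm"].shift(1)
data["b_l1"] = data["b"].shift(1)
data["quarter"] = data.index.quarter
data = data.dropna()

# C(quarter) adds ordinary (not centred) seasonal dummies, a simplification of the paper's setup.
ecm_fit = smf.ols(
    "d_e ~ d_e_l2 + d_y_l1 + d_p_l1 + ecm_l1 + w + b_l1 + C(quarter)",
    data=data,
).fit()
print(ecm_fit.summary())
```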