Econometrics Spring School 2016: Econometric Modelling. Lecture 6: Model selection theory and evidence; Introduction to Monte Carlo Simulation


Econometrics Spring School 2016: Econometric Modelling
Jurgen A. Doornik, David F. Hendry, and Felix Pretis
George Washington University, March 2016
Lecture 6: Model selection theory and evidence. Introduction to Monte Carlo Simulation.

Introduction
Economies are high-dimensional, interdependent, heterogeneous and evolving: a comprehensive specification of all events is impossible.
The data generation process (DGP) is the joint density of all variables in the economy. It is impossible to theorize about accurately or to model precisely: it is too high-dimensional and far too non-stationary.
We therefore need to reduce it to a manageable size in the local DGP (LDGP): the DGP in the space of the variables under analysis.
Models reflect the LDGP rather than copy it: they are designed to satisfy selection criteria.
Knowing the LDGP, we could generate look-alike data that deviate from the actual data only by unpredictable noise. The LDGP is therefore the target for model selection.

DGP to LDGP
The DGP is the economic mechanism that operates in the real world. The theory of reduction takes it to the LDGP: the DGP for the locally relevant variables.

GUM to selected model
Explicit model design mimics the theory of reduction in a practical setting.
On one side, the DGP (the economic mechanism that operates in the real world) is reduced, via the theory of reduction, to the LDGP (the DGP for the locally relevant variables).
On the other, a general unrestricted model (GUM) is specified and the automatic Gets algorithm delivers the final terminal (specific) model.
Congruency links the two: the GUM nests the LDGP.

Monte Carlo Simulations in OxMetrics
Simulations in OxMetrics can be done with PcNaive (an OxMetrics module) or by programming in Ox. The focus here is on PcNaive.

PcNaive
Simple to use, intuitive and menu-based; quick Monte Carlo simulations; a useful teaching tool.
Structured on 3 levels:
- AR(1) experiment (basic)
- Static experiment (intermediate)
- Advanced experiment

Using PcNaive
1. AR(1) experiment as an introduction
2. Assessing the performance of automatic model selection

AR(1) experiment
Model - Monte Carlo - AR(1) Experiment.
Setup: an AR(1) DGP with a chosen coefficient; the number of Monte Carlo replications; live graphics; generated data and histograms.
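As a rough illustration of what the PcNaive AR(1) experiment produces, the Python sketch below repeats an AR(1) DGP many times and collects the OLS estimate of the autoregressive coefficient; the coefficient, sample size and number of replications are arbitrary illustrative choices, not PcNaive defaults:

    import numpy as np

    rng = np.random.default_rng(0)
    rho, T, M = 0.8, 100, 1000   # illustrative AR(1) coefficient, sample size, replications

    estimates = np.empty(M)
    for m in range(M):
        eps = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + eps[t]
        # OLS of y_t on y_{t-1} without intercept: rho_hat = sum(y_t*y_{t-1}) / sum(y_{t-1}^2)
        estimates[m] = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

    # Monte Carlo distribution of rho_hat: the small-sample downward bias is clearly visible
    print("mean estimate:", round(estimates.mean(), 3), "bias:", round(estimates.mean() - rho, 3))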

1) AR(1) Experiment
AR(1) experiment overview:
(Screenshot: the PcNaive AR(1) experiment setup in OxMetrics.)

Empirical example
Start with a very simple example: DataSet1.in7.
The dataset consists of 20 explanatory variables, Z = (z_1, ..., z_20)' ~ IN_20[0, I], and 1 dependent variable:
y_t = β_1 z_1,t + β_2 z_2,t + β_3 z_3,t + β_4 z_4,t + β_5 z_5,t + ε_t, with ε_t ~ IN[0, 1],
where T = 100, β_1 = 0.2, β_2 = 0.3, β_3 = 0.4, β_4 = 0.5 and β_5 = 0.6, which gives E(t_β1) = ψ_1 = 2, ψ_2 = 3, ψ_3 = 4, ψ_4 = 5, ψ_5 = 6.
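A minimal Python sketch of this DGP, for anyone who wants to replicate it outside OxMetrics (the seed and variable names are arbitrary choices, not part of the exercise):

    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 100, 20
    beta = np.array([0.2, 0.3, 0.4, 0.5, 0.6] + [0.0] * 15)  # only z_1..z_5 are relevant

    Z = rng.standard_normal((T, N))          # Z ~ IN_20[0, I]
    y = Z @ beta + rng.standard_normal(T)    # epsilon_t ~ IN[0, 1]

    # With unit-variance regressors and errors, E(t_i) is roughly beta_i * sqrt(T)
    print(beta[:5] * np.sqrt(T))             # -> [2. 3. 4. 5. 6.]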

First Monte Carlo
Instead of everyone using the same dataset, let's generate our own.
Open the Modelling option, select Monte Carlo, then Advanced experiment. Specify the DGP as above. We won't analyse the model, so specify any model.
Important: in the Advanced Monte Carlo settings, everyone uses a different number of replications, M_i + 2T. The last draw is stored, so this guarantees that you all have different datasets.
Save the results.

Estimate model
The general unrestricted model is:
y_t = β_0 + Σ(i=1 to N) β_i z_i,t + u_t.
Estimated on our generated data (standard errors in parentheses):
y_t = 0.2 (0.116) + 0.189 (0.117) z_1,t + 0.411 (0.115) z_2,t + 0.672 (0.119) z_3,t + 0.311 (0.109) z_4,t + 0.683 (0.111) z_5,t + 0.273 (0.12) z_6,t + 0.341 (0.118) z_7,t + 0.081 (0.116) z_8,t + 0.070 (0.125) z_9,t + 0.334 (0.12) z_10,t + 0.020 (0.114) z_11,t + 0.032 (0.107) z_12,t + 0.109 (0.112) z_13,t + 0.113 (0.116) z_14,t + 0.034 (0.106) z_15,t + 0.022 (0.117) z_16,t + 0.187 (0.113) z_17,t + 0.165 (0.103) z_18,t + 0.132 (0.101) z_19,t - 0.208 (0.127) z_20,t
σ = 1.018; R² = 0.592; L = -131.84.

Simplifying the model
How do we select which variables matter and which do not? In this example the variables are orthogonal, so 1-cut selection is sufficient.
Consider a perfectly orthogonal regression model:
y_t = Σ(i=1 to N) β_i z_i,t + ε_t,   (1)
where E[z_i,t z_j,t] = λ_i,i for i = j and 0 for i ≠ j, ε_t ~ IN[0, σ²_ε] and T >> N.
Order the N sample t²-statistics testing H_0: β_j = 0:
t²_(N) ≥ t²_(N-1) ≥ ... ≥ t²_(1).
The cut-off m between included and excluded variables is:
t²_(m) ≥ c²_α > t²_(m-1).
Larger values are retained; all others are eliminated. Only one decision is needed, regardless of the size of N.
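A sketch of 1-cut selection in Python, under the same DGP as the dataset above (the seed and the use of scipy for the critical value are incidental choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    T, N = 100, 20
    beta = np.array([0.2, 0.3, 0.4, 0.5, 0.6] + [0.0] * 15)
    Z = rng.standard_normal((T, N))
    y = Z @ beta + rng.standard_normal(T)

    # GUM: intercept plus all 20 candidate regressors
    X = np.column_stack([np.ones(T), Z])
    k = X.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ (X.T @ y)
    resid = y - X @ b
    se = np.sqrt((resid @ resid / (T - k)) * np.diag(XtX_inv))
    tstat = b / se

    # 1-cut rule: a single decision keeps every z whose |t| is at least the critical value
    alpha = 0.05
    c_alpha = stats.t.ppf(1 - alpha / 2, df=T - k)
    retained = [i for i in range(1, k) if abs(tstat[i]) >= c_alpha]
    print("c_alpha =", round(c_alpha, 2), "-> retained z indices:", retained)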

1-cut selection
At α = 0.05, c_α = 1.98. In our example we would retain z_2, z_3, z_4, z_5, z_6, z_7, z_10. Errors: we failed to retain z_1 (a relevant variable, which does enter the LDGP), and we mistakenly retained z_6, z_7, z_10 (irrelevant variables, which do not enter the LDGP).
At α = 0.01, c_α = 2.62. In our example we would retain z_2, z_3, z_4, z_5, z_7, z_10.
How did we do in our Monte Carlo experiment? First analyse the retention of irrelevant variables, then consider retaining relevant variables.

Selection theory
Probabilities of null rejections in t-testing for N irrelevant regressors at significance level α (critical value c_α):

event                                       probability          retain
P(|t_i| < c_α, i = 1, ..., N)               (1 - α)^N            0
P(|t_i| ≥ c_α, |t_j| < c_α, j ≠ i)          Nα (1 - α)^(N-1)     1
...                                         ...                  ...
P(|t_i| < c_α, |t_j| ≥ c_α, j ≠ i)          N α^(N-1) (1 - α)    N - 1
P(|t_i| ≥ c_α, i = 1, ..., N)               α^N                  N

The average number of null variables retained is:
k̄ = Σ(i=0 to N) i [N! / (i! (N - i)!)] α^i (1 - α)^(N-i) = Nα.   (2)
For N = 40 with α = 0.01 this yields k̄ = 0.4. Few spurious variables are ever retained, yet there are 2^N possible models, namely about 10^12.
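Equation (2) is just the mean of a Binomial(N, α) count; a quick Python check for the N = 40, α = 0.01 case on the slide:

    from math import comb

    N, alpha = 40, 0.01
    # P(retain exactly i of the N irrelevant variables) under independent t-tests
    pmf = [comb(N, i) * alpha**i * (1 - alpha)**(N - i) for i in range(N + 1)]

    print(sum(i * p for i, p in enumerate(pmf)))   # mean retained = N * alpha = 0.4
    print(sum(pmf[:3]))                            # P(at most 2 spurious retentions) ~ 0.99
    print(2.0 ** N)                                # versus 2^40 ~ 1.1e12 possible models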

Selection theory
Returning to our example: there are 16 irrelevant variables, the intercept and z_6,t, ..., z_20,t.
At α = 0.05 we should retain 0.8 of a variable on average; the observed retention of irrelevant variables is slightly higher, at 3.
At α = 0.01 we should retain 0.16 of a variable on average; most of the time all irrelevant variables would be eliminated.
Explanations:
- Sampling variation: this is one draw from the LDGP.
- 1-cut is only valid for perfectly orthogonal regressors. Although the regressors are orthogonal in the population, they are correlated in the sample (check in PcGive - Other models - Descriptive statistics using PcGive - Means, standard deviations and correlations: the largest correlations are about ±0.26). A path search is needed to ensure that the ordering does not matter (we return to this shortly).

How easy is it to keep variables?
Consider the power of a t-test to retain relevant variables. Denote the t-test by t(n, ψ), where n is the degrees of freedom and ψ is the non-centrality parameter, which is 0 under the null H_0: β_i = 0.
To calculate the power to reject the null when E[t] = ψ > 0:
P(t ≥ c_α | E[t] = ψ) ≈ P(t - ψ ≥ c_α - ψ | H_0).

Keeping relevant variables
Approximate power when the coefficient's null hypothesis is tested only once (the last column, P(|t| ≥ c_α)^4, is the probability of simultaneously retaining four such relevant variables):

t-test powers
ψ    α      P(|t| ≥ c_α)    P(|t| ≥ c_α)^4
1    0.05   0.16            0.001
2    0.05   0.50            0.063
2    0.01   0.26            0.005
3    0.05   0.85            0.512
3    0.01   0.64            0.168
4    0.05   0.98            0.902
4    0.01   0.91            0.686
6    0.01   1.00            0.997

Variables with a low signal-to-noise ratio will rarely be retained by t-tests when their null is tested. The key problem is therefore retaining relevant variables.
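These powers can be reproduced approximately with the N[ψ, 1] approximation to the t-statistic; a short Python sketch, taking the critical values 1.98 and 2.62 from the earlier slide (small last-digit differences from the table come from the normal approximation):

    from scipy.stats import norm

    def power(psi, c_alpha):
        # P(|t| >= c_alpha) when t is approximately N(psi, 1)
        return norm.sf(c_alpha - psi) + norm.cdf(-c_alpha - psi)

    for psi in (1, 2, 3, 4, 6):
        p05, p01 = power(psi, 1.98), power(psi, 2.62)
        print(psi, round(p05, 2), round(p05 ** 4, 3), round(p01, 2), round(p01 ** 4, 3))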

Selection theory
Returning to our example: 5 relevant variables with non-centralities of 2, 3, 4, 5 and 6.
At α = 0.05 the probability of retaining z_1 is 0.51; z_2 is 0.85; z_3 is 0.98; z_4 is 1.00; and z_5 is 1.00. The probability of retaining all 5 variables is 0.42; of retaining exactly 4 it is 0.49; for 3 it is 0.08; for 2 it is 0.01; and for 0 or 1 it is essentially 0.
At α = 0.01 the probability of retaining z_1 is 0.23; z_2 is 0.65; z_3 is 0.92; z_4 is 0.99; and z_5 is 1.00. The probability of retaining all 5 variables is 0.14.
The example keeps the variables with ψ = 3, 4, 5, 6 but does not retain ψ = 2.
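Because the regressors are orthogonal, these joint probabilities are simple products of the individual retention probabilities; a minimal Python check using the α = 0.05 figures above:

    from math import prod

    p = [0.51, 0.85, 0.98, 1.00, 1.00]   # retention probabilities of z_1..z_5 at alpha = 0.05

    print(round(prod(p), 2))             # retain all 5: about 0.42

    # retain exactly 4: sum over which single relevant variable is lost
    exactly4 = sum((1 - p[i]) * prod(p[j] for j in range(5) if j != i) for i in range(5))
    print(round(exactly4, 2))            # about 0.49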

t-statistic distributions for DGP
(Figure: Monte Carlo densities of the t-statistics for the five DGP variables, panels t-za to t-ze.)

t-statistic distributions for GUM
(Figure: Monte Carlo densities of the t-statistics for all candidate variables in the GUM, panels t-za to t-zt.)

Unconditional t-statistic distributions for selected model
(Figure: unconditional Monte Carlo densities of the t-statistics after selection, panels t-za to t-zt.)

Two costs of selection
There are two costs of selection: costs of inference and costs of search.
The first is inevitable whenever tests have non-zero size and non-unit power, even when commencing from the data generation process (DGP). Costs of search are additional to those of commencing from an initial model that is the DGP.
Let p_dgp(α, i) be the probability of retaining the i-th variable at significance level α when starting from the DGP; 1 - p_dgp(α, i) is the cost of inference (the probability of discarding a relevant variable). There are M relevant variables, of which m ≤ M are retained.
Let p_gum(α, i) be the probability of retaining the i-th variable when starting from the GUM. There are K irrelevant variables, of which k ≤ K are retained.
Search costs are Σ(i=1 to M) [p_dgp(α, i) - p_gum(α, i)] + Σ(j=1 to K) p_gum(α, j).
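Retention probabilities of this kind are easy to estimate by simulation; a rough Python sketch using 1-cut selection starting from the DGP and from the GUM (no intercept here, and the replication count and seed are arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    T, N, alpha, reps = 100, 20, 0.05, 500
    beta = np.array([0.2, 0.3, 0.4, 0.5, 0.6] + [0.0] * 15)

    def retained(X, y, alpha):
        # 1-cut retention indicators for the columns of X
        T, k = X.shape
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ (X.T @ y)
        s2 = ((y - X @ b) ** 2).sum() / (T - k)
        t = b / np.sqrt(s2 * np.diag(XtX_inv))
        return np.abs(t) >= stats.t.ppf(1 - alpha / 2, T - k)

    p_dgp = np.zeros(5)    # retention frequencies when starting from the DGP (z_1..z_5)
    p_gum = np.zeros(N)    # retention frequencies when starting from the GUM (all 20 z's)
    for _ in range(reps):
        Z = rng.standard_normal((T, N))
        y = Z @ beta + rng.standard_normal(T)
        p_dgp += retained(Z[:, :5], y, alpha)
        p_gum += retained(Z, y, alpha)
    p_dgp /= reps
    p_gum /= reps

    # Search cost: retention lost on relevant variables plus spurious retention of irrelevant ones
    search_cost = (p_dgp - p_gum[:5]).sum() + p_gum[5:].sum()
    print(p_dgp.round(2), p_gum[:5].round(2), round(search_cost, 2))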

Costs of search and inference
Return to the example and estimate the DGP (standard errors in parentheses):
y_t = 0.244 (0.114) z_1,t + 0.384 (0.107) z_2,t + 0.556 (0.118) z_3,t + 0.323 (0.11) z_4,t + 0.563 (0.108) z_5,t   (3)
σ = 1.086; L = -147.6.
Using the 1-cut rule at α = 0.05, all relevant variables would be retained. Using the 1-cut rule at α = 0.01, z_1,t would not be retained: a cost of inference.
Compare this with the 1-cut rule commencing from the GUM. The search costs are:
- relevant variables retained from the DGP but not from the GUM: z_1 at 5%, none at 1%;
- irrelevant variables retained from the GUM: z_6, z_7, z_10 at 5%; z_7, z_10 at 1%.
Simulations using Autometrics show that search costs can be small relative to the costs of inference.

Unconditional t-statistic distributions for selected DGP
(Figure: unconditional Monte Carlo densities of the t-statistics when selecting from the DGP, panels t-za to t-ze.)

Selection versus fitting the DGP
(Figure: density of t-za when selecting from the DGP, in red, against not selecting from the DGP, in green.)

Selection from DGP or GUM
(Figure: density of t-za when selecting from the GUM, in green, against selecting from the DGP, in blue.)

Model search
How do you find the source of the Nile? Every path is explored, North, South, East and West, until success. Gets does this for model selection: search all reduction paths in the general model.
Path search gives the impression of repeated testing, and is sometimes confused with selecting from 2^N possible models. Our example has just 21 variables, so 2^21 = 2,097,152 possible models; in more realistic examples where N > 100, searching all possible models is an impossible task.
But we are selecting variables, not models, and there are only N variables. Selection still matters, as only significant outcomes are retained, and sampling variation entails retaining irrelevant variables, or missing relevant ones, by chance near the selection margin.

Repeated testing
Does repeated testing distort selection?
1. Severe illness: more tests increase the probability of a correct diagnosis.
2. Mis-specification tests: if r independent tests τ_j are conducted under the null at a small significance level η (critical value c_η), then P(|τ_j| < c_η, j = 1, ..., r) = (1 - η)^r ≈ 1 - rη. More tests increase the probability of a false rejection, which suggests a significance level η of 1% or tighter.
3. Repeated diagnostic tests: the probabilities are unaltered.
Conclusion: there is no generic answer.
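A quick numerical illustration of the (1 - η)^r ≈ 1 - rη approximation in Python (the r and η values are arbitrary):

    # probability of at least one false rejection among r independent tests at level eta
    for r, eta in [(4, 0.05), (4, 0.01), (10, 0.01)]:
        exact = 1 - (1 - eta) ** r
        print(r, eta, round(exact, 4), "vs r*eta =", r * eta)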

Apply Autometrics to the example
Specify the GUM and select (e.g. at α = 0.05), with standard errors in parentheses:
y_t = 0.269 (0.114) + 0.391 (0.104) z_2,t + 0.585 (0.116) z_3,t + 0.322 (0.108) z_4,t + 0.573 (0.106) z_5,t + 0.311 (0.106) z_7,t + 0.275 (0.11) z_10,t   (4)
σ = 1.051; R² = 0.488; L = -143.24.
F_ar = 2.411; F_arch = 0.130; F_hetero = 0.974; F_heterox = 0.970; F_reset = 0.858; χ²_norm = 0.551.
Compare across all results: diagnostic checking and encompassing.

Bias corrections
Selection matters: only significant variables are retained. We can correct the final estimates for selection.
A convenient approximation is that
t_β̂ = β̂ / σ_β̂ ≈ N[β / σ_β̂, 1] = N[ψ, 1],
when the non-centrality of the t-test is ψ = β / σ_β̂.
Using the Gaussian approximation:
φ(w) = (1/√(2π)) exp(-w²/2) and Φ(w) = (1/√(2π)) ∫ from -∞ to w of exp(-x²/2) dx.

Truncation correction
For the doubly-truncated distribution, the expected truncated t-value is:
E[t_β̂ | |t_β̂| > c_α; ψ] = ψ*,   (5)
so the observed t-value is an unbiased estimator of ψ*: we observe ψ* when the true non-centrality is ψ.
Sample selection induces:
ψ* = ψ + [φ(c_α - ψ) - φ(-c_α - ψ)] / [1 - Φ(c_α - ψ) + Φ(-c_α - ψ)] = ψ + r(ψ, c_α).   (6)
As the mapping ψ → ψ* is known, we can correct by inversion, ψ = ψ* - r(ψ, c_α), albeit iteratively as r depends on ψ.
The same applies to correcting β̂ once ψ is known: for β ≠ 0,
E[β̂ | |β̂/σ_β̂| ≥ c_α] = β (1 + r(ψ, c_α)/ψ) = β (ψ*/ψ).   (7)
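Equations (5) and (6) can be checked by simulation under the N[ψ, 1] approximation; a short Python sketch with ψ and c_α chosen purely for illustration:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    psi, c = 2.0, 1.98
    t = rng.normal(psi, 1.0, size=1_000_000)
    kept = t[np.abs(t) > c]          # doubly-truncated sample: only |t| > c_alpha survives selection

    r = (norm.pdf(c - psi) - norm.pdf(-c - psi)) / (1 - norm.cdf(c - psi) + norm.cdf(-c - psi))
    print(kept.mean(), psi + r)      # both close to psi* = psi + r(psi, c_alpha), about 2.78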

Estimating the bias correction
Estimate ψ* by t_β̂, then iteratively solve for ψ from (6):
ψ = ψ* - r(ψ, c_α).   (8)
So replace r(ψ, c_α) in (8) by r(t_β̂, c_α), and ψ* by t_β̂:
ψ̄ = t_β̂ - r(t_β̂, c_α), then ψ̃ = t_β̂ - r(ψ̄, c_α),   (9)
leading, from inverting (7), to the bias-corrected parameter estimate:
β̃ = β̂ (ψ̃ / t_β̂).   (10)
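The two-step correction of equations (8)-(10) is only a few lines of code; the Python sketch below parallels what the BiasCorrectionCode.ox program used at the end of this lecture does, but it is an independent reimplementation, not a translation of that file:

    from scipy.stats import norm

    def r(psi, c):
        # r(psi, c_alpha) from (6): the mean shift induced by double truncation at +/- c
        num = norm.pdf(c - psi) - norm.pdf(-c - psi)
        den = 1 - norm.cdf(c - psi) + norm.cdf(-c - psi)
        return num / den

    def bias_correct(beta_hat, t_hat, c_alpha):
        # two-step corrected coefficient, equations (9)-(10)
        psi_bar = t_hat - r(t_hat, c_alpha)      # one-step corrected non-centrality
        psi_tilde = t_hat - r(psi_bar, c_alpha)  # two-step corrected non-centrality
        return beta_hat * psi_tilde / t_hat

    # a retained coefficient whose t-value is just above the 5% critical value shrinks markedly
    print(bias_correct(0.275, 2.5, 1.98))        # roughly 0.19, down from 0.275

Coefficients with t-values close to c_α are shrunk substantially, while those with large t-values are left almost unchanged, which is the pattern visible in the corrected estimates reported below.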

Implementing bias correction
The bias correction works closely, though not exactly, for relevant variables: it over-corrects for some t-values (Hendry and Krolzig, 2005).
It has no impact on the bias of the parameters of irrelevant variables, since their β_i = 0, so they are unbiased with or without selection.
There is some increase in the MSEs of relevant variables: the correction exacerbates the downward bias in the unconditional estimates of relevant coefficients and increases their MSEs slightly. But there is a remarkable decrease in the MSEs of irrelevant variables.
This is the first free lunch of the new approach, and obvious why in retrospect: most of the correction occurs for t-values near c_α, which is where retained irrelevant variables sit.
Bias corrections should be applied to orthogonalized variables: two highly correlated regressors may each have barely significant coefficients, so both would receive a large adjustment, and hence so would their joint effect; if orthogonalized, only one would be adjusted.

2-step corrected (red) conditional distributions
(Figure: conditional distributions of the estimates for t = 2, 3, 4, 6, 8, and for irrelevant variables and the constant with t = 0; the 2-step corrected densities are shown in red.)

Bias correction on 1-cut MSEs
Against such costs, bias correction considerably reduces the MSEs of the coefficients of retained irrelevant variables, benefiting both the unconditional and the conditional distributions.
Despite selecting from a very large set of potential variables:
- nearly unbiased estimates of the coefficients and of the equation standard error can be obtained;
- there is little loss of efficiency from testing irrelevant variables, and some loss from not retaining relevant variables at large values of c_α;
- there is a huge gain from not commencing from an under-specified model.
The normal distribution has thin tails, so the power loss from tighter significance levels is rarely substantial, but it could be for fat-tailed error processes at tighter α.

Bias-corrected coefficients in the example
Bias-correction code is not automated in OxMetrics, but a simple Ox program can be applied. Open BiasCorrectionCode.ox, paste in the coefficient estimates and t-statistics, choose the significance level, and run to obtain the 2-step corrected estimates:
y_t = 0.162 + 0.380 z_2,t + 0.585 z_3,t + 0.278 z_4,t + 0.573 z_5,t + 0.264 z_7,t + 0.188 z_10,t   (11)
σ = 1.051; R² = 0.488; L = -143.24.

References
Hendry, D. F. and H.-M. Krolzig (2005). The properties of automatic Gets modelling. Economic Journal 115, C32-C61.