Near-equivalence in Forecasting Accuracy of Linear Dimension Reduction Methods in Large Macro-Panels
1 Near-equivalence in Forecasting Accuracy of Linear Dimension Reduction Methods in Large Macro-Panels
Efstathia Bura (joint with Alessandro Barbarino)
Applied Statistics, Institute of Statistics and Mathematical Methods in Economics, Vienna University of Technology; Federal Reserve Board
October 6, 2017
2 Outline
1. Overview of Empirical Results: Out-of-Sample Forecasting Exercise; Result 1: Empirical Forecast Accuracy Near-Equivalence; Result 2: Dispersion of Information in the Panel; Result 3: Targeting and Parsimony
2. Common Forecasting Framework: Forecasting with Many Predictors; Common Forecasting Framework; Rationalizing Empirical Results
3. Beyond Linear Signals: Sufficient Dimension Reduction; SDR Forecasting Framework
4. Conclusions
3 Overview of Empirical Results
4 The Empirical Forecasting Exercise
- McCracken and Ng (2015) created the FRED-MD macro-panel: 123 monthly macro variables.
- From January 1960 to 2015, a balanced panel of 110 predictors with approx. 660 observations each.
- Focus on: CPI inflation, non-farm payrolls, unemployment rate, labor force participation, average hourly earnings in goods-producing industries, industrial production, real personal consumption expenditures and real personal income.
- Use popular statistical learning methods: OLS, PCR, RIDGE, PLS and SIR (Sliced Inverse Regression).
- Out-of-sample forecasts using a recursive window at horizons h = 1, 3, 6, 12.
- Compare results in a classic horse-race against an AR(4) benchmark.
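The recursive out-of-sample exercise can be sketched as follows. This is a minimal illustration on synthetic data, with a plain OLS competitor standing in for the dimension reduction methods; the variable names and sample sizes are placeholders, not the paper's actual setup.

```python
# Recursive out-of-sample forecasting: at each origin t, refit on data up to
# t, forecast y_{t+h}, and report MSFE relative to an AR(4) benchmark.
# All data here are synthetic; FRED-MD would supply y and X in practice.
import numpy as np

rng = np.random.default_rng(0)
T, p, h = 200, 20, 3
X = rng.standard_normal((T, p))
y = 0.5 * X[:, 0] + rng.standard_normal(T)

def ols_forecast(y, X, h, t):
    """Fit the direct model y_{s+h} = a + b'x_s on s <= t-h-1, forecast from x_t."""
    Z = np.column_stack([np.ones(t - h), X[: t - h]])
    coef, *_ = np.linalg.lstsq(Z, y[h:t], rcond=None)
    return coef[0] + X[t] @ coef[1:]

def ar_forecast(y, h, t, lags=4):
    """Direct AR(4) forecast of y_{t+h} from (y_t, ..., y_{t-3})."""
    rows = np.array([y[s - lags + 1 : s + 1][::-1] for s in range(lags - 1, t - h)])
    Z = np.column_stack([np.ones(len(rows)), rows])
    coef, *_ = np.linalg.lstsq(Z, y[lags - 1 + h : t], rcond=None)
    return coef[0] + y[t - lags + 1 : t + 1][::-1] @ coef[1:]

errs_model, errs_ar = [], []
for t in range(120, T - h):              # recursive (expanding) window
    errs_model.append(y[t + h] - ols_forecast(y, X, h, t))
    errs_ar.append(y[t + h] - ar_forecast(y, h, t))

rel_msfe = np.mean(np.square(errs_model)) / np.mean(np.square(errs_ar))
print(rel_msfe)                          # < 1 would mean beating the AR(4)
```

Relative MSFE below one is exactly the entry reported in the tables that follow.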
5 DFM Forecasting
- Dynamic Factor Models (DFMs) are the mainstream methodology in econometrics for both measuring comovement and forecasting time series.
- The modeling comprises two disconnected steps: first, factors are extracted from the panel x_t; second, the target y_{t+h} is regressed on the extracted factors.
- The factors that replace x_t are ordered according to how much variance in X they explain, i.e., by relevance to X and not to Y.
6 Best Estimators Before the Great Recession
Table: Recursive Out-of-Sample Window from 1992 to 2007, Best Estimator by Target and Horizon (MSFE Relative to AR4)

Target | h=1 | h=3 | h=6 | h=12
INDPRO | RIDGE b | RIDGE b | R70SIRb c,d | R8SIRb c,d
PAYEMS | R30SIRb c,d | R30SIRb c,d | R30SIRb c,d | R30SIRb c,d
UNRATE | PC | PLS | RIDGE b | RIDGE b
CLF16O | RIDGE b | PC | PC | PC
CPIAUC | PC.BS h | SIRb d | PC.ON f | R8SIRb c,d
CES060 | PLS | PLS-1 | R8SIRb c,d | AR4
DPCER | AR4 | AR4 | AR4 | AR4
RPI | R8SIRb c,d | R8SIRb c,d | R8SIRb c,d | RFSIRb c,d
7 Best Estimators in the Recovery
Table: Recursive Out-of-Sample Window from 2010 to 2016, Best Estimator by Target and Horizon (MSFE Relative to AR4)

Target | h=1 | h=3 | h=6 | h=12
INDPRO | RIDGE b | RIDGE b | RIDGE b | R30SIRb c,d
PAYEMS | RIDGE b | RIDGE b | RIDGE b | RIDGE b
UNRATE | PCF.BS i | PLS | SIRa d | PC.BS h
CLF16O | PC | PCF.BS i | RIDGE b | PLS
CPIAUC | PC.BS h | SIRb d | R70SIRb c,d | PC
CES060 | SIRa d | SIRb d | R8SIRb c,d | R8SIRa c,d
DPCER | PC | PC | PC | PC
RPI | R8SIRa c,d | RFSIRa c,d | PLS | PLS
8 Best Estimators in the Full Sample
Table: Recursive Out-of-Sample Window from 1992 to 2016, Best Estimator by Target and Horizon (MSFE Relative to AR4)

Target | h=1 | h=3 | h=6 | h=12
INDPRO | PC | PC | SIRb d | R70SIRb c,d
PAYEMS | R8SIRb c,d | R8SIRb c,d | R8SIRb c,d | R30SIRb c,d
UNRATE | PC | PC | PC | PC
CLF16O | PC | PC | PC | PC
CPIAUC | PLS | RFSIRb c,d | SIRb d | R8SIRb c,d
CES060 | AR4 | SIRb d | R8SIRb c,d | PC
DPCER | PC.ON f | PC | RFSIRb c,d | R8SIRb c,d
RPI | PLS | PLS | PLS | PLS-1
9 Model Definitions
- b: the number after RIDGE Comp. is the value of the regularization parameter.
- c: R#SIR is regularized SIR; # refers to the number of leading PCs used for regularization.
- d: in type-a SIR the target is y_{t+h}; in type-b SIR the target is y_t.
- e: RFSIR is regularized SIR on the PCs most frequently selected by out-of-sample best subset selection.
- f: PC.ON is PCR in which the number of components is chosen using the Onatski criterion.
- g: PC.ICP1 is PCR in which the number of components is chosen using the Bai-Ng criterion.
- h: PC.BS is PCR in which the number of components is chosen by best subset selection.
- i: PCF.BS is PCR in which the number of components is chosen by best subset selection applied to the PCs most frequently chosen out-of-sample.
10 Near-Equivalence in Forecast Accuracy
- Great variability of the best estimator depending on the (target, sample, horizon) triplet.
- Sometimes beating the AR(4) remains a challenging task.
- SIR-type estimators do fairly well; PLS is the estimator least represented in the tables.
- The competitive edge of SIR is its parsimony: it attains practically the same forecasting accuracy using only one or two linear combinations of the predictors.
- The No Clear Winner result is ubiquitous in the macro-forecasting literature. Can we explain these results?
11 Selection of PCs Using Different Criteria
Figure: Average number of PC components selected by best subset selection (BSS) versus the ICp criteria.
12 MSFE as a Function of the Number of Components
13 Common Forecasting Framework
14 Forecasting with Many Predictors
- Forecasting a target variable y with a large set of predictors $x \in \mathbb{R}^p$.
- The starting forecasting model includes all x predictors:
  $y_{t+h} = \alpha_0 + \alpha_1' x_t + \alpha_2' w_t + \varepsilon_{t+h}$
  where
  - $w_t$ may contain additional regressors such as lags of $y_t$,
  - $E(\varepsilon_{t+h}) = 0$,
  - $\alpha_1' x_t$ and $\alpha_2' w_t$ are uncorrelated with $\varepsilon_{t+h}$.
- Naturally conducive to OLS estimation of $\alpha_1$ and $\alpha_2$, with the feasible forecast
  $\hat y_{t+h} = \hat\alpha_0 + \hat\alpha_1' x_t + \hat\alpha_2' w_t$
15 Dimension Reduction Methods
- OLS estimation can be problematic when p is large relative to T, or when the variables in $x_t$ are nearly collinear, as in macro forecasting, hence dimension reduction.
- Dimension reduction methods in regression fall into two categories:
  - Variable Selection: a subset of the original predictors is selected for modeling the response (e.g. stepwise regression, all-subset selection, LASSO), so some predictors are discarded.
  - Feature Extraction: linear combinations of the regressors replace the original regressors, so the reduction of p happens prior to model fitting (e.g. PCR), and all predictors are retained.
17 Forecasting with Extracted Linear Features
Assuming $y_{t+h} = \alpha' x_t + \varepsilon_{t+h}$, forecasting with extracted features is implemented in two steps:
(i) extract the reduced features $f_t = \beta' x_t$;
(ii) fit the reduced model $y_{t+h} = \gamma' f_t + \varepsilon_{t+h} = (\beta\gamma)' x_t + \varepsilon_{t+h}$.
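The two-step procedure can be sketched with principal components as the extractor. This is a minimal illustration on synthetic data; the dimensions and variable names are placeholders.

```python
# Two-step feature extraction, PCR flavour: (i) extract f_t = B'x_t from the
# leading principal components, (ii) regress y_{t+h} on f_t. The implied
# coefficient on x_t is b = B g, as in the reduced model above.
import numpy as np

rng = np.random.default_rng(2)
T, p, r, h = 300, 30, 3, 1
X = rng.standard_normal((T, p))
y = X[:, :2].sum(axis=1) + rng.standard_normal(T)

Xc = X - X.mean(0)
# (i) B = leading r eigenvectors of the sample covariance of x
eigval, eigvec = np.linalg.eigh(Xc.T @ Xc / T)
B = eigvec[:, ::-1][:, :r]
F = Xc @ B                                   # extracted features f_t

# (ii) reduced model y_{t+h} = g0 + g'f_t + e_{t+h}, fitted by OLS
Z = np.column_stack([np.ones(T - h), F[:-h]])
g, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)

b = B @ g[1:]                                # implied coefficient on x_t
y_hat = g[0] + F[-1] @ g[1:]                 # one-step-ahead forecast
print(b.shape, float(y_hat))
```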
18 Justifying the Reduced Model
Proposition. Suppose x is a random p-vector with finite first two moments. Assume $y = \alpha' x + \varepsilon$, with $E(\varepsilon) = 0$ and $\mathrm{Cov}(\varepsilon, x) = 0$, where $\alpha \in \mathbb{R}^p$ is unknown, and $f = \beta' x$, where the $p \times r$ matrix $\beta$ is such that
(i) for each $\alpha$, $E(\alpha' x \mid \beta' x = f)$ is linear in $f \in \mathbb{R}^r$ (LC);
(ii) for each $\alpha$, $\mathrm{Var}(\alpha' x \mid \beta' x = f)$ is constant in $f \in \mathbb{R}^r$ (CVC).
Then y can be decomposed into the sum of a linear function of f and a remainder (error) term:
$y = \mu_y + \gamma'(f - E(f)) + \epsilon$
where $\gamma = (\beta' \Sigma_x \beta)^{-1} \beta' \Sigma_x \alpha \in \mathbb{R}^r$, $\mu_y = E(y)$, $E(\epsilon \mid f) = 0$ and $\mathrm{var}(\epsilon \mid f)$ is constant.
19 Discussion of the Proposition
- It assumes the DGP for the target is $y_{t+h} = \alpha' x_t + \varepsilon_{t+h}$.
- It places assumptions on the marginal distribution of the observable predictors, not an underlying latent factor structure plus assumptions on unobservable quantities.
- It concludes that the reduced forecasting model $y_{t+h} = \gamma' f_t + \varepsilon_{t+h}$ is equivalent to the DGP, with $f_t$ the linear extracted features.
- We can remove (CVC) at the cost of losing homoskedasticity of the regression error.
20 Do (LC) and (CVC) Hold in Practice?
Conditions (LC) and (CVC) are difficult to verify in practice since $\beta$ is unknown. However:
- Conditions (LC) and (CVC) are satisfied for any $\beta$ if x is multivariate normal. Joint normality of FRED-MD is rejected.
- Condition (LC) is satisfied for all $\beta$ if x has an elliptically contoured distribution (Eaton (1986)). FRED-MD passes ellipticity tests.
Therefore, with (LC) satisfied, we have the first piece of the puzzle of why different feature extraction methods have similar forecast accuracy.
21 Feature Extraction
Feature extraction is carried out by sequentially solving
$\max_{\{\beta_i:\ \beta_i'\beta_i = 1,\ \beta_i'\Sigma_x\beta_j = 0,\ j = 1,\ldots,i-1\}} \mathrm{Corr}^2(h(y), x'\beta_i)\, H[\mathrm{Var}(x'\beta_i)]$
where
- OLS: $H(\cdot) = 1$, $h(\cdot)$ = identity
- PCR: $H(\cdot)$ = identity, $h(\cdot) = 1$
- PLS: $H(\cdot)$ = identity, $h(\cdot)$ = identity
- RIDGE: $H(\cdot) = \frac{\mathrm{Var}(x'\beta_i)}{\mathrm{Var}(x'\beta_i) + \kappa}$, $h(\cdot)$ = identity
- SIR: $H(\cdot)$ = identity, $h(y) = E[\beta_i' x \mid y]$
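The unified criterion can be checked numerically for one special case. A minimal sketch on synthetic data: with H the identity and h the identity (the PLS case), the first direction maximizes $\mathrm{Corr}^2(y, x'\beta)\,\mathrm{Var}(x'\beta) \propto \mathrm{Cov}^2(y, x'\beta)$ over unit vectors, whose maximizer is the normalized covariance vector.

```python
# Check that the first PLS direction s_xy/||s_xy|| attains the criterion's
# maximum: no random unit vector achieves a larger Cov^2(y, x'b).
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 6
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -0.5, 0, 0, 0, 0]) + rng.standard_normal(n)

s_xy = (X - X.mean(0)).T @ (y - y.mean()) / n      # sample Cov(x, y)
b_pls = s_xy / np.linalg.norm(s_xy)                # first PLS direction

def objective(b):
    """Cov^2(y, x'b) for a unit vector b, proportional to Corr^2 * Var."""
    return np.cov(y, X @ b)[0, 1] ** 2

best_random = max(objective(v / np.linalg.norm(v))
                  for v in rng.standard_normal((500, p)))
print(objective(b_pls) >= best_random)
```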
26 Prediction
- Estimate the reduced predictors: $\hat f_t = \hat\beta' x_t$
- Fit the reduced model: $y_{t+h} = \gamma'(\hat\beta' x_t) + \varepsilon_{t+h}$
- Prediction coefficient: $\hat b = \hat\beta \hat\gamma$
- Prediction: $\hat y_{t+h|t} = \hat b' x_t = \hat\gamma' \hat\beta' x_t$
- Common pattern for $\hat b$ across feature extraction methods: $\hat b$ = scaling factor × signal.
27 Summary of Estimators for $y = \alpha' x + \epsilon$

Estimator | Metaparameter | Scaling Factor | Signal
OLS | 1 | $\Sigma_x^{-1}$ | $\sigma_{xy}$
RIDGE | $\kappa$ | $(\Sigma_x + \kappa I)^{-1}$ | $\sigma_{xy}$
PCR | $m$ | $\Sigma_x^{-}(m)$ | $\sigma_{xy}$
PLS | $s$ | $W_{PLS}(s)\,(W_{PLS}'(s)\,\Sigma_x\,W_{PLS}(s))^{-1}\,W_{PLS}'(s)$ | $\sigma_{xy}$
SIR | $d$ | $W_{SIR}(d)\,(W_{SIR}'(d)\,\Sigma_x\,W_{SIR}(d))^{-1}\,W_{SIR}'(d)$ | $\sigma_{xy}$
RSIR | $m, d$ | $W_{RSIR}(d)\,(W_{RSIR}'(d)\,\Sigma_x(m)\,W_{RSIR}(d))^{-1}\,W_{RSIR}'(d)$ | $\sigma_{xy}$
28 These estimators have:
- the same signal ($\sigma_{xy}$), hence near-equivalent forecast accuracy;
- a different scaling factor, hence a different dimension.
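The "same signal, different scaling" pattern can be made concrete. A minimal sketch on synthetic data: build the OLS, ridge and PCR prediction coefficients from one common sample covariance vector $s_{xy}$, applying only a different scaling matrix to each.

```python
# All three coefficients below multiply the same signal vector s_xy by a
# different scaling factor, matching the summary table.
import numpy as np

rng = np.random.default_rng(3)
n, p, m, kappa = 2000, 8, 3, 5.0
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
y = X[:, 0] + rng.standard_normal(n)

Xc = X - X.mean(0)
Sx = Xc.T @ Xc / n
s_xy = Xc.T @ (y - y.mean()) / n                          # the common signal

b_ols = np.linalg.solve(Sx, s_xy)                         # Sx^{-1} s_xy
b_ridge = np.linalg.solve(Sx + kappa * np.eye(p), s_xy)   # (Sx + kI)^{-1} s_xy
w, V = np.linalg.eigh(Sx)
w, V = w[::-1], V[:, ::-1]
b_pcr = V[:, :m] @ np.diag(1 / w[:m]) @ V[:, :m].T @ s_xy # rank-m pseudo-inverse

print(np.allclose(b_ols, b_pcr))   # same signal, different scaling
```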
29 Focus on Targeted Estimators
For the targeted estimators PLS and SIR, the scaling factor uses y. How they differ:
- PLS: $W_{PLS}(s) = \left(\sigma_{xy},\ \Sigma_x \sigma_{xy},\ \ldots,\ \Sigma_x^{s-1} \sigma_{xy}\right)$
- SIR: $W_{SIR}(d) = \left(E(x \mid y),\ \Sigma_x E(x \mid y),\ \ldots,\ \Sigma_x^{d-1} E(x \mid y)\right)$
Therefore the reduction is based:
- for PLS, on a moment of the joint distribution $F(y, x)$;
- for SIR, on a moment of the conditional distribution $F(x \mid y)$.
33 Why is SIR More Effective?
$\mathrm{var}(X) = \mathrm{var}[E(X \mid Y)] + E[\mathrm{var}(X \mid Y)]$
For simplicity, assume Y is categorical, so that $X \mid Y$ is the restriction of X to the class defined by Y:
- Signal: $\mathrm{var}[E(X \mid Y)]$ is the between-group variation in X.
- Noise: $E[\mathrm{var}(X \mid Y)]$ is the within-group variation.
- PCR mixes up noise and signal when extracting PCs.
- PLS orders eigen-components according to their importance to $\mathrm{cov}(X, Y)$, i.e. it captures the linear dependence of X and Y.
- SIR orders eigen-components according to their importance to Y, both linear and non-linear.
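The between/within decomposition can be illustrated by slicing a continuous Y into groups, which is exactly how SIR discretizes the response. A minimal sketch on synthetic scalar data; the slice count is an arbitrary choice.

```python
# Slicing illustration of var(X) = var[E(X|Y)] + E[var(X|Y)]: discretize Y
# into deciles and split the variance of X into between-slice ("signal") and
# within-slice ("noise") parts. The identity holds exactly for sample moments.
import numpy as np

rng = np.random.default_rng(4)
n = 30000
y = rng.standard_normal(n)
x = 2 * y + rng.standard_normal(n)           # E(x|y) = 2y carries the signal

edges = np.quantile(y, np.linspace(0, 1, 11))
idx = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, 9)

means = np.array([x[idx == s].mean() for s in range(10)])
props = np.array([np.mean(idx == s) for s in range(10)])
between = np.sum(props * (means - x.mean()) ** 2)                # var[E(x|Y)]
within = np.sum(props * [x[idx == s].var() for s in range(10)])  # E[var(x|Y)]

print(abs(between + within - x.var()) < 1e-6)
```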
39 Beyond Linear Signals: Sufficient Dimension Reduction
40 Sufficient Reductions
Suppose $R(x): \mathbb{R}^p \to \mathbb{R}^d$ with $d \ll p$. R(x) is a sufficient reduction when no information about y is lost when x is replaced by R(x), i.e.
$F(y \mid x) = F(y \mid R(x))$
How to use this in forecasting: if, for example, the reduction is linear, $R(x) = \beta' x$, the forecasting model is
$y_{t+h} = g(\beta' x_t, \varepsilon_{t+h})$
43 Identifying Sufficient Linear Reductions
- Linear SDR allows identification and estimation of $\mathcal{C}(\beta)$ with $F(Y \mid X) = F(Y \mid \beta^T X)$.
- General idea: find a kernel matrix M so that $\mathrm{span}(M) \subseteq \mathrm{span}(\beta)$.
- SDR methods: different proposals for M.
44 For Example: In SIR
If $E(x \mid \beta' x)$ is linear in $\beta' x$ (LC) for any $\beta$ such that $F(y \mid x) = F(y \mid \beta' x)$, then
$\mathcal{C}\!\left(\Sigma_x^{-1}[E(x \mid y) - E(x)]\right) = \mathcal{C}\!\left(\Sigma_x^{-1}\mathrm{Var}[E(x \mid y)]\right) \subseteq \mathcal{C}(\beta)$
In SIR, $M = \Sigma_x^{-1} \mathrm{Var}(E(X \mid Y))$.
45 Forecasting with Sliced Inverse Regression
SIR reduces the predictors as follows:
(i) compute a non-parametric (slicing) estimate of $\mathrm{Var}(E(x \mid y))$;
(ii) set $\hat\beta_d(SIR)$ to be the first d eigenvectors of $\hat\Sigma_x^{-1}\widehat{\mathrm{Var}}(E(x \mid y))$.
Then run the non-parametric regression $y_{t+h} = g(\hat\beta_d(SIR)' x_t, \varepsilon_{t+h})$ and predict.
- We did not find any forecasting improvement with a non-linear $g(\cdot)$.
- RSIR is formed by substituting x with the PCs of $\hat\Sigma_x$.
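The two SIR steps above can be sketched as follows. This is a minimal slicing implementation on synthetic data, not the paper's code; the slice count and the cubic link are arbitrary choices for illustration.

```python
# Minimal SIR sketch: standardize x, slice on y, average the standardized
# predictors within slices, take leading eigenvectors of the between-slice
# covariance (an estimate of Var(E(z|y))), and map back by Sx^{-1/2}.
import numpy as np

def sir_directions(X, y, d=1, n_slices=10):
    n, p = X.shape
    Xc = X - X.mean(0)
    Sx = Xc.T @ Xc / n
    w, V = np.linalg.eigh(Sx)
    Sx_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = Xc @ Sx_inv_sqrt                      # standardized predictors

    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Z[chunk].mean(0)                  # slice mean of z
        M += len(chunk) / n * np.outer(m, m)  # weighted Var(E(z|y)) estimate

    eigval, eigvec = np.linalg.eigh(M)
    B = Sx_inv_sqrt @ eigvec[:, ::-1][:, :d]  # leading d directions, x scale
    return B / np.linalg.norm(B, axis=0)

rng = np.random.default_rng(5)
X = rng.standard_normal((5000, 6))
beta = np.array([1.0, 1.0, 0, 0, 0, 0]) / np.sqrt(2)
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(5000)

B = sir_directions(X, y, d=1)
print(abs(B[:, 0] @ beta))   # close to 1: recovered direction aligns with beta
```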
46 Conclusions
47 Comparing Forecasting Frameworks

Forecasting Framework | DGP for $y_{t+h}$ | DGP for $x_t$
DFM, standard | $y_{t+h} = \gamma' f_t + \varepsilon_{t+h}$ | $x_t = \Gamma f_t + u_t$
DFM, with irrelevant factors | $y_{t+h} = \gamma' f_{1,t} + \varepsilon_{t+h}$ | $x_t = \Gamma_1 f_{1,t} + \Gamma_2 f_{2,t} + u_t$
DFM, no factors for target | $y_{t+h} = \alpha' x_t + \varepsilon_{t+h}$ | $x_t = \Gamma f_t + u_t$
Other, no factors | $y_{t+h} = \alpha' x_t + \varepsilon_{t+h}$ | conditions on $\Sigma_x$ and $\beta$
SDR, non-linear forecast | $y_{t+h} = g(\alpha' x_t, \varepsilon_{t+h})$ | $E[x_t \mid \alpha' x_t] = A\beta' x_t$
SDR, linear forecast | $y_{t+h} = \gamma'(\alpha' x_t) + \varepsilon_{t+h}$ | $E[x_t \mid \alpha' x_t] = A\beta' x_t$

Most of the forecasting literature studies shrinkage under a factor structure. We study shrinkage estimators under different DGPs for the observables.
49 Data-Specific Conclusions
Since there is no non-linear trend in $E(y \mid x)$ and x is elliptically contoured, Proposition 1 yields that the reduced model
$y_{t+h} = \gamma'(\beta' x_t) + \epsilon_{t+h}$
is a good approximation to the true population model. What does this mean for prediction?
50 $E(y - \mu_y \mid \beta' x) = \alpha' \Sigma_x \beta (\beta' \Sigma_x \beta)^{-1} \beta' (x - E(x)) = \alpha' P_{\beta(\Sigma_x)}' (x - E(x))$
where $P_{\beta(\Sigma_x)} = \beta(\beta' \Sigma_x \beta)^{-1} \beta' \Sigma_x$ is the projection operator onto $\mathrm{span}(\beta)$ relative to the inner product $\langle a, b \rangle = a' \Sigma_x b$.
For a given x, how close $E(y \mid x)$ is to the truth is reflected by the norm of $I - P_{\beta(\Sigma_x)}$, which is controlled solely by $\beta$. Consequently, ordering estimators $\hat\beta' x$ with respect to their forecasting accuracy is tantamount to identifying the $\beta$'s with smaller norm of $I - P_{\beta(\Sigma_x)}$.
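The projection operator on the slide can be verified numerically. A minimal sketch with a random positive-definite matrix standing in for $\Sigma_x$: the operator is idempotent and leaves span(B) fixed, as a projection must.

```python
# Check that P = B (B' Sx B)^{-1} B' Sx is the Sigma_x-orthogonal projection
# onto span(B): P P = P and P B = B.
import numpy as np

rng = np.random.default_rng(6)
p, r = 6, 2
A = rng.standard_normal((p, p))
Sx = A @ A.T                                      # a positive-definite Sigma_x
B = rng.standard_normal((p, r))

P = B @ np.linalg.solve(B.T @ Sx @ B, B.T @ Sx)   # P_{B(Sx)}
print(np.allclose(P @ P, P), np.allclose(P @ B, B))
```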
51 Can We Improve the Forecasting Ability of the Model?
- The forecast accuracy results do not support non-linearities in the conditional mean of the target variable.
- If there is a non-linear signal, it must be in the variance.
- A SIR dimension of two or higher indicates the presence of non-linear relationships.
- Plots of the residuals of the fitted forecasting model versus the second SIR component indicate that the variance of the forecasting models varies for most targets.
52 Can We Improve the Forecasting Ability of the Model?
- The data passed the multivariate elliptical symmetry test but not the normality test: (LC) holds but not (CVC).
- Model-based SDR (Bura and Forzani (2015)): when $X \mid Y \sim EC_p(\mu_Y, \Delta, g_Y)$, the minimal sufficient reduction is
$R(X) = \left(\alpha'(X - E(X)),\ (X - E(X))' \Sigma_x^{-1} (X - E(X))\right)$
- The minimal sufficient reduction has a non-linear component: we need to model the variance using $(X - E(X))' \Sigma_x^{-1} (X - E(X))$.
53 References
- Bai, J. and Ng, S. (2008). Forecasting economic time series using targeted predictors. Journal of Econometrics, 146(2).
- Barbarino, A. and Bura, E. (2015). Forecasting with sufficient dimension reductions. Finance and Economics Discussion Series. Washington: Board of Governors of the Federal Reserve System.
- Bura, E. and Cook, R. D. (2001). Extending SIR: the weighted chi-square test. Journal of the American Statistical Association, 96.
- Bura, E. and Yang, J. (2011). Dimension estimation in sufficient dimension reduction: a unifying approach. Journal of Multivariate Analysis, 102.
- Bura, E. and Forzani, L. (2014). Sufficient reductions in regressions with elliptically contoured inverse predictors. Journal of the American Statistical Association, 110.
- Forni, M., Hallin, M., Lippi, M. and Reichlin, L. (2005). The generalized dynamic factor model: one-sided estimation and forecasting. Journal of the American Statistical Association, 100(471).
- Li, K. C. (1991). Sliced inverse regression for dimension reduction (with discussion). Journal of the American Statistical Association, 86.
- McCracken, M. W. and Ng, S. (2015). FRED-MD: a monthly database for macroeconomic research. Working paper.
- Steinberger, L. and Leeb, H. (2015). On conditional moments of high-dimensional random vectors given lower-dimensional projections.
- Stock, J. H. and Watson, M. (2002a). Macroeconomic forecasting using diffusion indices. Journal of Business and Economic Statistics, 20.
- Stock, J. H. and Watson, M. (2002b). Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association, 97.
- Stock, J. H. and Watson, M. (2006). Macroeconomic forecasting using many predictors. In Handbook of Economic Forecasting, Graham Elliott, Clive Granger, Allan Timmermann (eds.), North Holland.
More information7. Integrated Processes
7. Integrated Processes Up to now: Analysis of stationary processes (stationary ARMA(p, q) processes) Problem: Many economic time series exhibit non-stationary patterns over time 226 Example: We consider
More informationAugmenting our AR(4) Model of Inflation. The Autoregressive Distributed Lag (ADL) Model
Augmenting our AR(4) Model of Inflation Adding lagged unemployment to our model of inflationary change, we get: Inf t =1.28 (0.31) Inf t 1 (0.39) Inf t 2 +(0.09) Inf t 3 (0.53) (0.09) (0.09) (0.08) (0.08)
More informationIntroductory Econometrics
Based on the textbook by Wooldridge: : A Modern Approach Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 16, 2013 Outline Introduction Simple
More informationDay 4: Shrinkage Estimators
Day 4: Shrinkage Estimators Kenneth Benoit Data Mining and Statistical Learning March 9, 2015 n versus p (aka k) Classical regression framework: n > p. Without this inequality, the OLS coefficients have
More informationEconometrics I Lecture 3: The Simple Linear Regression Model
Econometrics I Lecture 3: The Simple Linear Regression Model Mohammad Vesal Graduate School of Management and Economics Sharif University of Technology 44716 Fall 1397 1 / 32 Outline Introduction Estimating
More informationSufficient Forecasting Using Factor Models
Sufficient Forecasting Using Factor Models arxiv:1505.07414v2 [math.st] 24 Dec 2015 Jianqing Fan, Lingzhou Xue and Jiawei Yao Princeton University and Pennsylvania State University Abstract We consider
More informationWORKING PAPER SERIES COMPARING ALTERNATIVE PREDICTORS BASED ON LARGE-PANEL FACTOR MODELS NO 680 / OCTOBER 2006
WORKING PAPER SERIES NO 680 / OCTOBER 2006 COMPARING ALTERNATIVE PREDICTORS BASED ON LARGE-PANEL FACTOR MODELS by Antonello D'Agostino and Domenico Giannone WORKING PAPER SERIES NO 680 / OCTOBER 2006 COMPARING
More information7. Integrated Processes
7. Integrated Processes Up to now: Analysis of stationary processes (stationary ARMA(p, q) processes) Problem: Many economic time series exhibit non-stationary patterns over time 226 Example: We consider
More informationAdvanced Econometrics
Based on the textbook by Verbeek: A Guide to Modern Econometrics Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna May 16, 2013 Outline Univariate
More informationEconomic modelling and forecasting
Economic modelling and forecasting 2-6 February 2015 Bank of England he generalised method of moments Ole Rummel Adviser, CCBS at the Bank of England ole.rummel@bankofengland.co.uk Outline Classical estimation
More informationHypothesis testing Goodness of fit Multicollinearity Prediction. Applied Statistics. Lecturer: Serena Arima
Applied Statistics Lecturer: Serena Arima Hypothesis testing for the linear model Under the Gauss-Markov assumptions and the normality of the error terms, we saw that β N(β, σ 2 (X X ) 1 ) and hence s
More information1. The OLS Estimator. 1.1 Population model and notation
1. The OLS Estimator OLS stands for Ordinary Least Squares. There are 6 assumptions ordinarily made, and the method of fitting a line through data is by least-squares. OLS is a common estimation methodology
More informationRegression: Ordinary Least Squares
Regression: Ordinary Least Squares Mark Hendricks Autumn 2017 FINM Intro: Regression Outline Regression OLS Mathematics Linear Projection Hendricks, Autumn 2017 FINM Intro: Regression: Lecture 2/32 Regression
More informationProgram. The. provide the. coefficientss. (b) References. y Watson. probability (1991), "A. Stock. Arouba, Diebold conditions" based on monthly
Macroeconomic Forecasting Topics October 6 th to 10 th, 2014 Banco Central de Venezuela Caracas, Venezuela Program Professor: Pablo Lavado The aim of this course is to provide the basis for short term
More informationMaking sense of Econometrics: Basics
Making sense of Econometrics: Basics Lecture 7: Multicollinearity Egypt Scholars Economic Society November 22, 2014 Assignment & feedback Multicollinearity enter classroom at room name c28efb78 http://b.socrative.com/login/student/
More informationPrinciples of forecasting
2.5 Forecasting Principles of forecasting Forecast based on conditional expectations Suppose we are interested in forecasting the value of y t+1 based on a set of variables X t (m 1 vector). Let y t+1
More informationData Mining. Dimensionality reduction. Hamid Beigy. Sharif University of Technology. Fall 1395
Data Mining Dimensionality reduction Hamid Beigy Sharif University of Technology Fall 1395 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1395 1 / 42 Outline 1 Introduction 2 Feature selection
More informationUniversity of Pretoria Department of Economics Working Paper Series
University of Pretoria Department of Economics Working Paper Series Predicting Stock Returns and Volatility Using Consumption-Aggregate Wealth Ratios: A Nonlinear Approach Stelios Bekiros IPAG Business
More informationECON 3150/4150, Spring term Lecture 6
ECON 3150/4150, Spring term 2013. Lecture 6 Review of theoretical statistics for econometric modelling (II) Ragnar Nymoen University of Oslo 31 January 2013 1 / 25 References to Lecture 3 and 6 Lecture
More informationBusiness Statistics. Tommaso Proietti. Linear Regression. DEF - Università di Roma 'Tor Vergata'
Business Statistics Tommaso Proietti DEF - Università di Roma 'Tor Vergata' Linear Regression Specication Let Y be a univariate quantitative response variable. We model Y as follows: Y = f(x) + ε where
More informationEconometrics. 9) Heteroscedasticity and autocorrelation
30C00200 Econometrics 9) Heteroscedasticity and autocorrelation Timo Kuosmanen Professor, Ph.D. http://nomepre.net/index.php/timokuosmanen Today s topics Heteroscedasticity Possible causes Testing for
More informationForecasting Macroeconomic Variables using Collapsed Dynamic Factor Analysis
TI 2012-042/4 Tinbergen Institute Discussion Paper Forecasting Macroeconomic Variables using Collapsed Dynamic Factor Analysis Falk Brauning Siem Jan Koopman Faculty of Economics and Business Administration,
More informationThe Multiple Regression Model Estimation
Lesson 5 The Multiple Regression Model Estimation Pilar González and Susan Orbe Dpt Applied Econometrics III (Econometrics and Statistics) Pilar González and Susan Orbe OCW 2014 Lesson 5 Regression model:
More informationThe aggregation problem in its hystorical perspective: a summary overview
The aggregation problem in its hystorical perspective: a summary overview WYE CITY GROUP Global Conference on Agricultural and Rural Household Statistics By GIANCARLO LUTERO Presented by EDOARDO PIZZOLI
More informationMay 2, Why do nonlinear models provide poor. macroeconomic forecasts? Graham Elliott (UCSD) Gray Calhoun (Iowa State) Motivating Problem
(UCSD) Gray with May 2, 2012 The with (a) Typical comments about future of forecasting imply a large role for. (b) Many studies show a limited role in providing forecasts from. (c) Reviews of forecasting
More information1 Introduction to Generalized Least Squares
ECONOMICS 7344, Spring 2017 Bent E. Sørensen April 12, 2017 1 Introduction to Generalized Least Squares Consider the model Y = Xβ + ɛ, where the N K matrix of regressors X is fixed, independent of the
More informationLab 07 Introduction to Econometrics
Lab 07 Introduction to Econometrics Learning outcomes for this lab: Introduce the different typologies of data and the econometric models that can be used Understand the rationale behind econometrics Understand
More information18.440: Lecture 26 Conditional expectation
18.440: Lecture 26 Conditional expectation Scott Sheffield MIT 1 Outline Conditional probability distributions Conditional expectation Interpretation and examples 2 Outline Conditional probability distributions
More informationDo Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods
Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Robert V. Breunig Centre for Economic Policy Research, Research School of Social Sciences and School of
More information2.5 Forecasting and Impulse Response Functions
2.5 Forecasting and Impulse Response Functions Principles of forecasting Forecast based on conditional expectations Suppose we are interested in forecasting the value of y t+1 based on a set of variables
More informationLecture 5: Omitted Variables, Dummy Variables and Multicollinearity
Lecture 5: Omitted Variables, Dummy Variables and Multicollinearity R.G. Pierse 1 Omitted Variables Suppose that the true model is Y i β 1 + β X i + β 3 X 3i + u i, i 1,, n (1.1) where β 3 0 but that the
More informationPanel Data Models. Chapter 5. Financial Econometrics. Michael Hauser WS17/18 1 / 63
1 / 63 Panel Data Models Chapter 5 Financial Econometrics Michael Hauser WS17/18 2 / 63 Content Data structures: Times series, cross sectional, panel data, pooled data Static linear panel data models:
More informationAdvanced Econometrics I
Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics
More informationWarwick Business School Forecasting System. Summary. Ana Galvao, Anthony Garratt and James Mitchell November, 2014
Warwick Business School Forecasting System Summary Ana Galvao, Anthony Garratt and James Mitchell November, 21 The main objective of the Warwick Business School Forecasting System is to provide competitive
More informationFinal Exam. Economics 835: Econometrics. Fall 2010
Final Exam Economics 835: Econometrics Fall 2010 Please answer the question I ask - no more and no less - and remember that the correct answer is often short and simple. 1 Some short questions a) For each
More informationDepartment of Economics
Department of Economics A Review of Forecasting Techniques for Large Data Sets Jana Eklund and George Kapetanios Working Paper No. 625 March 2008 ISSN 1473-0278 A Review of Forecasting Techniques for Large
More informationApplied Microeconometrics (L5): Panel Data-Basics
Applied Microeconometrics (L5): Panel Data-Basics Nicholas Giannakopoulos University of Patras Department of Economics ngias@upatras.gr November 10, 2015 Nicholas Giannakopoulos (UPatras) MSc Applied Economics
More informationA Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models
Journal of Finance and Investment Analysis, vol.1, no.1, 2012, 55-67 ISSN: 2241-0988 (print version), 2241-0996 (online) International Scientific Press, 2012 A Non-Parametric Approach of Heteroskedasticity
More informationShort-term forecasts of GDP from dynamic factor models
Short-term forecasts of GDP from dynamic factor models Gerhard Rünstler gerhard.ruenstler@wifo.ac.at Austrian Institute for Economic Research November 16, 2011 1 Introduction Forecasting GDP from large
More informationNowcasting and Short-Term Forecasting of Russia GDP
Nowcasting and Short-Term Forecasting of Russia GDP Elena Deryugina Alexey Ponomarenko Aleksey Porshakov Andrey Sinyakov Bank of Russia 12 th ESCB Emerging Markets Workshop, Saariselka December 11, 2014
More informationLinear Dimensionality Reduction
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Principal Component Analysis 3 Factor Analysis
More informationRegression, Ridge Regression, Lasso
Regression, Ridge Regression, Lasso Fabio G. Cozman - fgcozman@usp.br October 2, 2018 A general definition Regression studies the relationship between a response variable Y and covariates X 1,..., X n.
More informationMixed frequency models with MA components
Mixed frequency models with MA components Claudia Foroni a Massimiliano Marcellino b Dalibor Stevanović c a Deutsche Bundesbank b Bocconi University, IGIER and CEPR c Université du Québec à Montréal September
More informationRegression and Statistical Inference
Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF
More informationIntroduction to Estimation Methods for Time Series models. Lecture 1
Introduction to Estimation Methods for Time Series models Lecture 1 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 1 SNS Pisa 1 / 19 Estimation
More informationFactor models. March 13, 2017
Factor models March 13, 2017 Factor Models Macro economists have a peculiar data situation: Many data series, but usually short samples How can we utilize all this information without running into degrees
More informationVAR-based Granger-causality Test in the Presence of Instabilities
VAR-based Granger-causality Test in the Presence of Instabilities Barbara Rossi ICREA Professor at University of Pompeu Fabra Barcelona Graduate School of Economics, and CREI Barcelona, Spain. barbara.rossi@upf.edu
More informationIdentification and Estimation Using Heteroscedasticity Without Instruments: The Binary Endogenous Regressor Case
Identification and Estimation Using Heteroscedasticity Without Instruments: The Binary Endogenous Regressor Case Arthur Lewbel Boston College Original December 2016, revised July 2017 Abstract Lewbel (2012)
More informationNonstationary time series models
13 November, 2009 Goals Trends in economic data. Alternative models of time series trends: deterministic trend, and stochastic trend. Comparison of deterministic and stochastic trend models The statistical
More informationLecture 6: Dynamic Models
Lecture 6: Dynamic Models R.G. Pierse 1 Introduction Up until now we have maintained the assumption that X values are fixed in repeated sampling (A4) In this lecture we look at dynamic models, where the
More informationA bottom-up approach for forecasting GDP in a data rich environment. António Rua. Banco de Portugal. July 2016
A bottom-up approach for forecasting GDP in a data rich environment Francisco Dias Banco de Portugal António Rua Banco de Portugal Maximiano Pinheiro Banco de Portugal July 2016 Abstract In an increasingly
More informationForecasting. Bernt Arne Ødegaard. 16 August 2018
Forecasting Bernt Arne Ødegaard 6 August 208 Contents Forecasting. Choice of forecasting model - theory................2 Choice of forecasting model - common practice......... 2.3 In sample testing of
More informationWooldridge, Introductory Econometrics, 4th ed. Chapter 15: Instrumental variables and two stage least squares
Wooldridge, Introductory Econometrics, 4th ed. Chapter 15: Instrumental variables and two stage least squares Many economic models involve endogeneity: that is, a theoretical relationship does not fit
More informationMachine Learning for OR & FE
Machine Learning for OR & FE Regression II: Regularization and Shrinkage Methods Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com
More informationIntroduction to Economic Time Series
Econometrics II Introduction to Economic Time Series Morten Nyboe Tabor Learning Goals 1 Give an account for the important differences between (independent) cross-sectional data and time series data. 2
More informationAutoregressive distributed lag models
Introduction In economics, most cases we want to model relationships between variables, and often simultaneously. That means we need to move from univariate time series to multivariate. We do it in two
More informationSemester 2, 2015/2016
ECN 3202 APPLIED ECONOMETRICS 2. Simple linear regression B Mr. Sydney Armstrong Lecturer 1 The University of Guyana 1 Semester 2, 2015/2016 PREDICTION The true value of y when x takes some particular
More information