Bagging Nonparametric and Semiparametric Forecasts with Constraints


Tae-Hwy Lee, Department of Economics, University of California, Riverside
Aman Ullah, Department of Economics, University of California, Riverside
Yundong Tu, Department of Economics, University of California, Riverside

July 21, 2010

Abstract

Valuable economic information plays an increasingly important role in the recent literature on economic modeling and forecasting. This paper considers nonparametric and semiparametric regression models that use bagging to impose economic constraints derived from economic theory, as an alternative to the approach of Hall and Huang (2001). Asymptotic properties of the proposed estimators, and of the forecasts produced with them, are established. Monte Carlo simulations are conducted to show their finite-sample performance. An application to predicting the U.S. equity premium illustrates the proposed approach to imposing economic constraints in the framework of nonparametric and semiparametric forecasts.

Key Words: Economic Constraints; Nonparametric Models; Semiparametric Models; Bagging; Equity Premium.
JEL Classification: C14; C50; C53; G17.

Author contacts: Tae-Hwy Lee, Department of Economics, University of California, Riverside, CA 92521, taelee@ucr.edu, phone: (. Yundong Tu, Department of Economics, University of California, Riverside, CA 92521, yundong.tu@ .ucr.edu. Corresponding author: Aman Ullah, Department of Economics, University of California, Riverside, CA 92521, aman.ullah@ucr.edu, Tel.:

Contents

1 Introduction
2 Estimation with Constraints
  2.1 Parametric Estimation with Constraints
  2.2 Nonparametric Estimation with Constraints
    2.2.1 Nonparametric Estimation with Constraints: Hall and Huang (2001)
    2.2.2 Nonparametric Estimation with Constraints: Bagging
  2.3 Semi-parametric Estimation with Constraints
  2.4 Nonparametric and Semiparametric Forecasts with Constraints
    2.4.1 Nonparametric Forecasts with Constraints
    2.4.2 Semiparametric Forecasts with Constraints
3 Sampling Properties of Constrained Estimators
  3.1 Constrained Parametric Estimator
  3.2 Constrained Nonparametric Estimator
  3.3 Constrained Semiparametric Estimator
4 Sampling Properties of Bagged Constrained Estimators
  4.1 Bagged Constrained Parametric Estimator
  4.2 Bagged Constrained Nonparametric Estimator
  4.3 Bagged Constrained Semiparametric Estimator
5 Simulation
6 Application: Predicting the Equity Premium
  6.1 Forecast Framework
  6.2 Data Description
  6.3 Results
7 Conclusion

1 Introduction

Linear models are frequently used for economic prediction in the forecasting literature, for example Stock and Watson (1999, 2003) and Goyal and Welch (2008), among others. They are prevalent because of their simplicity of implementation, computational efficiency, easy interpretation of the coefficients, and the straightforwardness of imposing known prior restrictions. A recent paper by Campbell and Thompson (2008) considered applying sign restrictions to the linear forecasting equation for stock returns. The sign restriction was taken as a means to alleviate parameter uncertainty and thus reconcile the contradictory in-sample and out-of-sample performance of predictors. They showed that once simple and sensible restrictions on the signs of coefficients are imposed, many predictors perform better out-of-sample than the historical average return forecast. This work is followed by Hillebrand, Lee and Medeiros (2009), who incorporate the bagging approach proposed by Gordon and Hall (2008) as a means of imposing sign restrictions in the forecast models. They show that the bagging sign-restriction approach has more predictive power than the simple sign restriction adopted by Campbell and Thompson (2008).

Compared with nonparametric methods, linear models are known for the disadvantage of possible model misspecification. Although several of the papers mentioned above adopted a factor-model approach, possible misspecification in the linear framework could seriously undermine the forecasts compared with those produced via nonparametric models. Nonparametric forecasting methods have not received well-deserved attention in the literature. Nevertheless, a recent paper by Chen and Hong (2009) found that, in the prediction of asset returns, the nonparametric kernel model has better forecasting power than the historical mean, owing to the lower signal-to-noise ratio resulting from the nonparametric model. However, Chen and Hong (2009) did not consider constraints implied by the underlying economic theory in their forecasting exercise.

Nonparametric kernel estimation with constraints has a long history that dates back to the work of Brunk (1955). Recent work on imposing monotonicity on a nonparametric regression function includes Hall and Huang (2001), Dette, Neumeyer and Pilz (2006) and Chernozhukov, Fernandez-Val and Galichon (2007), among others. Hall and Huang (2001) proposed a novel method for imposing monotonicity constraints on a quite general class of kernel estimators (see footnote 1). Their estimator is constructed by introducing a probability weight for each response data point, which controls the impact of each observation on the estimated regression function. Their method is rooted in the conventional kernel framework and has the appeal that it keeps the smoothness of the estimator while not requiring much extra computational time. This work was extended by Racine, Parmeter and Du (2009) to allow for a broad class of conventional constraints, and tests for these constraints are also provided.

The contribution of this paper is fourfold.

Footnote 1: This class includes the Nadaraya-Watson estimator (Nadaraya, 1965; Watson, 1964), the Priestley-Chao estimator (Priestley and Chao, 1972), the Gasser-Muller estimator (Gasser and Muller, 1979) and the local polynomial estimator (Fan, 1992), among others.

First, we generalize the linear forecast model considered in Goyal and Welch (2008), Campbell and Thompson (2008) and Hillebrand, Lee and Medeiros (2009) to its nonparametric counterpart. We conjecture that the linear model being adopted is subject to model misspecification and unable to capture the nonlinear dynamics in the underlying economic structure. A nonparametric approach without functional-form restrictions improves the predictive ability of the predictors, as demonstrated in our annual equity premium forecasting exercise. Second, we impose economic restrictions on our nonparametric forecast. This makes the prediction more accurate and efficient, in the sense that we employ more information than is done in Chen and Hong (2009). As Chen and Hong (2009) found that the nonparametric forecast improves over the historical mean, we expect our restricted nonparametric forecast to perform even better than the historical mean, thus confirming the predictability of stock returns found in Campbell and Thompson (2008), where a restricted linear model is used. Third, we bootstrap-aggregate the restricted nonparametric forecast to further reduce the uncertainty about the forecasting functional form. It has been shown in a number of studies that bagging can significantly reduce the mean squared error of forecasts in the linear forecasting framework; see, for example, Inoue and Kilian (2008) or Lee and Yang (2006). We will show that this result also applies to nonparametric forecasts with constraints and provide simulation results as well. Fourth, the proposed forecast models are applied to the prediction of the equity premium. Using the same data as Campbell and Thompson (2008) and Hillebrand, Lee and Medeiros (2009), we demonstrate that bagging restricted nonparametric models outperforms the other models that have been discussed in the literature.

The rest of the paper is organized as follows. Section 2 presents parametric, nonparametric and semiparametric estimation under constraints and the forecasts produced with these estimators. Sections 3 and 4 establish the asymptotic properties of the constrained estimators and of their bagged versions. Section 5 conducts Monte Carlo simulations to compare our proposed bagging nonparametric forecasts with constraints with other forecasts, including linear parametric forecasts with constraints and nonparametric forecasts with constraints. We evaluate the different forecasting schemes considered in this paper via the prediction of the equity premium in Section 6, and we conclude in Section 7.

2 Estimation with Constraints

Many economic models try to establish a relationship between a variable of interest (the dependent variable y_t) and a group of control variables collected in a vector X_t. In the forecasting framework, we are interested in knowing what the expected h-step-ahead value of, say, consumption y_{T+h} would be, given that current income is x. Put formally, we are interested in the relationship defined through

g_{T,h}(x) := E(y_{T+h} | X_T = x).

More often than not, some information is also implied by economic theory. For example, Friedman's consumption theory states that the Marginal Propensity to Consume (MPC) lies between 0 and 1. While it is appealing to test Friedman's theory with the available data, as is conventional practice, it is more attractive to a practitioner to make use of the implications of economic theory to obtain a model that is consistent with prior economic knowledge. To this end, the estimation of the expected value is made subject to constraints derived from economic theory, for example 0 < MPC < 1, or concavity of the production technology. In this paper, we focus on the slope of the curve that relates y and x. Constraints of other types, such as curvature, are left for future research.

2.1 Parametric Estimation with Constraints

Consider first a parametric (linear) model with a single regressor x:

g_{T,h}(x) := α + βx.

Goyal and Welch (2008) used the unconstrained OLS estimators in the prediction of stock returns. Note that the OLS estimators α̃ and β̃ satisfy the relationship

α̃ = ȳ_T − β̃ x̄_T,   (1)

where ȳ_T = (1/T) Σ_{t=1}^T y_t and x̄_T = (1/T) Σ_{t=1}^T x_t. If a positive slope constraint is implied by the underlying theory, one can estimate β through an indicator function as in Campbell and Thompson (2008),

β̄ := max{β̃, 0} = 1(β̃ > 0) β̃,
ᾱ := 1(β̃ > 0) α̃ + 1(β̃ ≤ 0) ȳ_T.

An observation deserving attention is that the relationship between ᾱ and β̄ remains as in (1), that is (see footnote 2),

ᾱ = ȳ_T − β̄ x̄_T.   (2)

Although a constraint imposed through an indicator function is quite easy to implement in practice, it is well known that this function is non-smooth and can introduce significant bias and variance. Gordon and Hall (2008) extended the work of Bühlmann and Yu (2002) and proposed a bagged constrained estimator,

β̂ := E* β̄* ≈ (1/J) Σ_{j=1}^J β̄*(j) = (1/J) Σ_{j=1}^J max{β̃*(j), 0},

where β̄*(j) = max{β̃*(j), 0} and β̃*(j) is the estimator of β from the j-th bootstrap sample. It is illustrated in Bühlmann and Yu (2002) that this bagged constrained estimator enjoys a smaller asymptotic mean squared error (AMSE), though it incurs a larger asymptotic bias.

Footnote 2: ᾱ = 1(β̃ > 0) α̃ + 1(β̃ ≤ 0) ȳ_T = 1(β̃ > 0)(ȳ_T − β̃ x̄_T) + 1(β̃ ≤ 0) ȳ_T = ȳ_T − 1(β̃ > 0) β̃ x̄_T = ȳ_T − β̄ x̄_T.
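To make the constructions above concrete, the following minimal sketch (Python, not part of the paper) computes the unconstrained OLS slope β̃, the constrained slope β̄ = max{β̃, 0}, and the bagged constrained slope β̂ as the average of max{β̃*(j), 0} over bootstrap resamples; the number of bootstrap replications J and the use of a simple pairs bootstrap are illustrative assumptions.

```python
import numpy as np

def ols_slope_intercept(y, x):
    """Unconstrained OLS estimators (alpha_tilde, beta_tilde) in y = alpha + beta*x + u."""
    beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

def constrained_slope(y, x):
    """Constrained estimators: beta_bar = max(beta_tilde, 0), alpha_bar = ybar - beta_bar*xbar."""
    _, beta = ols_slope_intercept(y, x)
    beta_bar = max(beta, 0.0)
    return y.mean() - beta_bar * x.mean(), beta_bar

def bagged_constrained_slope(y, x, J=99, seed=0):
    """Bagged constrained estimator: average of max(beta_tilde^{*(j)}, 0) over J bootstrap samples."""
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = []
    for _ in range(J):
        idx = rng.integers(0, n, n)          # pairs bootstrap (an illustrative choice)
        _, b_star = ols_slope_intercept(y[idx], x[idx])
        betas.append(max(b_star, 0.0))       # constrain each bootstrap estimate
    beta_hat = float(np.mean(betas))
    return y.mean() - beta_hat * x.mean(), beta_hat
```

For instance, bagged_constrained_slope(y, x) returns (α̂, β̂) with α̂ = ȳ_T − β̂ x̄_T, preserving the relationship in (2).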

2.2 Nonparametric Estimation with Constraints

Despite its simplicity of implementation in practice, a parametric linear model such as y_t = α + βx_t + u_t may be subject to misspecification because E(u_t | x_t) ≠ 0 due to possibly neglected nonlinearity. This can be avoided via a nonparametric regression, y_t = m(x_t) + u_t, where m(x_t) = E(y_t | x_t) and u_t = y_t − E(y_t | x_t). Kernel estimators of m(x_t), such as the Nadaraya-Watson or local linear estimators, are common practice in the nonparametric literature. Yet, in the face of information derived from economic theory, can we impose constraints (e.g., monotonicity, positivity) on nonparametric kernel regression models? Hall and Huang (2001) proposed a re-weighted kernel method to impose constraints on a general class of kernel estimators, which was followed by Racine, Parmeter and Du (2009) and Henderson and Parmeter (2009). Alternatively, we propose to use bagging to impose constraints in nonparametric kernel regression models.

2.2.1 Nonparametric Estimation with Constraints: Hall and Huang (2001)

Consider a general class of kernel estimators ĝ(·) written as a weighted average of the y's,

ĝ_{T,h}(x) = (1/(T−h)) Σ_{t=1}^{T−h} A_t(x) y_{t+h}.   (3)

Hall and Huang (2001) suggested the estimator

g_{T,h}(x; p) = Σ_{t=1}^{T−h} p_t A_t(x) y_{t+h},   (4)

where p = (p_1, ..., p_{T−h}). Note that (3) is a special case of (4) with the uniform weights p_u = (1/(T−h), ..., 1/(T−h)). The weight vector p is estimated by p̂ = arg min_p D(p) subject to the desired constraints and Σ_{t=1}^{T−h} p_t = 1, where D(p) is a distance function that measures the discrepancy between p and p_u, e.g. D(p) = (p − p_u)′(p − p_u). An alternative is to choose D(p) = (p^{1/2} − p_u^{1/2})′(p^{1/2} − p_u^{1/2}) = 2(1 − p^{1/2}′ p_u^{1/2}) if the elements of p and p_u lie in the unit interval (0, 1), e.g., probability weights.
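To illustrate the reweighting idea (a sketch under stated assumptions, not the authors' implementation), the following code chooses p by minimizing D(p) = (p − p_u)′(p − p_u) subject to Σ_t p_t = 1, p_t ≥ 0 and monotonicity of g(·; p) on a grid; the Gaussian kernel, the evaluation grid and scipy's SLSQP solver are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

def hall_huang_weights(x, y, h, grid):
    """Reweighted kernel estimator g(x; p) = sum_t p_t A_t(x) y_t, with p chosen to minimize
    (p - p_u)'(p - p_u) subject to sum(p) = 1, p >= 0, and monotonicity of g(.; p) on `grid`."""
    n = len(y)
    p_u = np.full(n, 1.0 / n)

    def A(x0):
        # Nadaraya-Watson weights A_t(x0), scaled so that uniform p reproduces the usual estimator
        k = np.exp(-0.5 * ((x - x0) / h) ** 2)
        return n * k / k.sum()

    A_grid = np.array([A(x0) for x0 in grid])                 # len(grid) x n

    def g_on_grid(p):
        return A_grid @ (p * y)

    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "ineq", "fun": lambda p: np.diff(g_on_grid(p))}]   # increasing on the grid
    res = minimize(lambda p: np.sum((p - p_u) ** 2), p_u,
                   method="SLSQP", bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# Usage (illustrative): p_hat = hall_huang_weights(x, y, h=0.1, grid=np.linspace(0, 1, 25))
```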

2.2.2 Nonparametric Estimation with Constraints: Bagging

LLLS. Take the first-order Taylor series expansion of m(x_t) around x, so that

y_t = m(x_t) + u_t = m(x) + (x_t − x) m^{(1)}(x) + v_t = α(x) + x_t β(x) + v_t = X_t δ(x) + v_t,   (5)

where X_t = (1, x_t), δ(x) = [α(x), β(x)]′, α(x) = m(x) − x β(x) and β(x) = m^{(1)}(x). The local linear least squares (LLLS) estimator of δ(x) is then obtained by minimizing

Σ_{t=1}^n v_t² K_h(x_t, x) = Σ_{t=1}^n (y_t − X_t δ(x))² K_h(x_t, x),

and it is given by

δ̃(x) = (X′K(x)X)^{−1} X′K(x) y,

where X is the n × (k+1) matrix with t-th row X_t (t = 1, ..., n). The LLLS estimators are then α̃(x) := (1, 0) δ̃(x) and β̃(x) := (0, 1) δ̃(x). Applying bagging to the constrained LLLS slope

β̄(x) := max{β̃(x), 0},

we obtain the bagged constrained local linear least squares estimator

β̂(x) := E* β̄*(x) ≈ (1/J) Σ_{j=1}^J β̄*(j)(x).

Observing (1) and (2), we note that the unconstrained local linear estimator satisfies

α̃(x) = ȳ(x) − β̃(x) x̄(x),   (6)

where

ȳ(x) = Σ_{t=1}^n K_h(x_t, x) y_t / Σ_{t=1}^n K_h(x_t, x) = (i′K(x)i)^{−1} i′K(x) y,
x̄(x) = Σ_{t=1}^n K_h(x_t, x) x_t / Σ_{t=1}^n K_h(x_t, x) = (i′K(x)i)^{−1} i′K(x) x,

and x is the n × 1 vector with elements x_t (t = 1, ..., n). Following similar steps, the two constrained LLLS estimators of α(x) can be obtained as

ᾱ(x) = ȳ(x) − β̄(x) x̄(x),
α̂(x) = ȳ(x) − β̂(x) x̄(x),  or  α̂(x) := E* ᾱ*(x) ≈ (1/J) Σ_{j=1}^J ᾱ*(j)(x).
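A minimal sketch of the LLLS estimator δ̃(x) = (X′K(x)X)^{−1} X′K(x) y and of the bagged constrained local slope β̂(x) (Python; the Gaussian kernel and the pairs bootstrap are illustrative assumptions):

```python
import numpy as np

def llls(y, x, x0, h):
    """Local linear least squares at x0: returns (alpha_tilde(x0), beta_tilde(x0))."""
    K = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights (assumption)
    X = np.column_stack([np.ones_like(x), x])        # regressors (1, x_t)
    XtK = X.T * K                                    # X'K(x)
    delta = np.linalg.solve(XtK @ X, XtK @ y)        # (X'K X)^{-1} X'K y
    return delta[0], delta[1]

def bagged_constrained_local_slope(y, x, x0, h, J=99, seed=0):
    """beta_hat(x0): average of max(beta_tilde^{*(j)}(x0), 0) over J bootstrap samples."""
    rng = np.random.default_rng(seed)
    n = len(y)
    slopes = []
    for _ in range(J):
        idx = rng.integers(0, n, n)                  # pairs bootstrap (illustrative)
        _, b = llls(y[idx], x[idx], x0, h)
        slopes.append(max(b, 0.0))                   # constrain each bootstrap slope
    return float(np.mean(slopes))
```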

Averaged local estimators. Once the local estimators of β(x) are obtained from LLLS, namely

β̃(x) := (0, 1) δ̃(x),  β̄(x) := max{β̃(x), 0},  β̂(x) := E* β̄*(x) ≈ (1/J) Σ_{j=1}^J β̄*(j)(x),

and similarly for α(x), we can form averaged versions of these local estimators, as in Li, Lu and Ullah (2003):

β̃_NP-AVG := (1/T) Σ_{t=1}^T β̃(x_t)  or  (1/T) Σ_{t=1}^T β̃(x_t) f̂(x_t),
β̄_NP-AVG := (1/T) Σ_{t=1}^T β̄(x_t)  or  (1/T) Σ_{t=1}^T β̄(x_t) f̂(x_t),
β̂_NP-AVG := (1/T) Σ_{t=1}^T β̂(x_t)  or  (1/T) Σ_{t=1}^T β̂(x_t) f̂(x_t),

where the second averages are weighted by the estimated density f̂. The convergence rate of the NP estimator β̃(x) is √(nh), while the averaged NP estimator β̃_NP-AVG has the convergence rate √n. Alternatively, the constrained averaged estimators may be obtained as

β̄_NP-AVG := max{β̃_NP-AVG, 0},  β̂_NP-AVG := E* β̄*_NP-AVG.
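The averaged estimators can be illustrated as follows (a sketch; the Gaussian kernel, the weighted-least-squares implementation of the local slope, and the Rosenblatt-Parzen density estimate are assumptions of the example):

```python
import numpy as np

def local_slope(y, x, x0, h):
    """LLLS slope beta_tilde(x0): weighted linear fit with Gaussian kernel weights."""
    w = np.exp(-0.25 * ((x - x0) / h) ** 2)          # sqrt of the Gaussian kernel weights
    return np.polyfit(x, y, 1, w=w)[0]               # slope of the weighted linear fit

def averaged_local_slope(y, x, h, constrain=False, density_weighted=False):
    """beta_NP-AVG: sample average of the local slopes beta(x_t), optionally constrained
    (max with 0) and optionally weighted by a kernel density estimate f_hat(x_t)."""
    slopes = np.array([local_slope(y, x, xt, h) for xt in x])
    if constrain:
        slopes = np.maximum(slopes, 0.0)             # beta_bar(x_t) = max{beta_tilde(x_t), 0}
    if not density_weighted:
        return float(slopes.mean())
    fhat = np.array([np.mean(np.exp(-0.5 * ((x - xt) / h) ** 2)) / (h * np.sqrt(2.0 * np.pi))
                     for xt in x])                   # Rosenblatt-Parzen density estimate
    return float(np.mean(slopes * fhat))
```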

2.3 Semi-parametric Estimation with Constraints

It has been noted in several works, for example Glad (1998) and more recently Martins-Filho, Mishra and Ullah (2009), that a sensibly parametrically guided semiparametric (SP) model outperforms the nonparametric (NP) model in that a significant bias reduction is achieved while the asymptotic variance is maintained. Consider therefore the SP model

y = α + βx + E(u | x) + [u − E(u | x)] = α + βx + E(u | x) + v = m(x) + v,
m(x) = α + βx + E(u | x).

Then we have three estimators for the SP regression:

m̃_sp(x) = α̃ + β̃x + Ẽ(ũ | x),
m̄_sp(x) = ᾱ + β̄x + Ē(ū | x),
m̂_sp(x) = α̂ + β̂x + Ê(û | x),

where

ũ = y − α̃ − β̃x,
ū = y − ᾱ − β̄x,
û = y − α̂ − β̂x,

the residuals are obtained from the parametric models of Section 2.1, and the NP regressions E(ũ | x), E(ū | x), E(û | x) are then estimated as in Section 2.2 using LLLS kernel estimators.

2.4 Nonparametric and Semiparametric Forecasts with Constraints

2.4.1 Nonparametric Forecasts with Constraints

We derive explicit formulas for the nonparametric forecasts based on the estimators presented in Section 2.2. Note that from (5) we have the unconstrained nonparametric forecast

m̃(x) = α̃(x) + x β̃(x) = ȳ(x) − β̃(x) x̄(x) + x β̃(x) = ȳ(x) − β̃(x) [x̄(x) − x].

Parallel to the above steps, we have the constrained nonparametric forecast

m̄(x) = ᾱ(x) + x β̄(x) = ȳ(x) − β̄(x) x̄(x) + x β̄(x) = ȳ(x) − β̄(x) [x̄(x) − x],

and the bagged constrained nonparametric forecast

m̂(x) = α̂(x) + x β̂(x) = ȳ(x) − β̂(x) x̄(x) + x β̂(x) = ȳ(x) − β̂(x) [x̄(x) − x].

2.4.2 Semiparametric Forecasts with Constraints

To derive an explicit formula for the semiparametric forecast with the unconstrained parameter estimator, note that

α̃ = ȳ_T − β̃ x̄_T,

ũ = y − α̃ − β̃x = y − (ȳ_T − β̃ x̄_T) − β̃x = (y − ȳ_T) + β̃ (x̄_T − x),

and (see footnote 3)

m̃_sp(x) = ȳ(x) + β̃ [x − x̄(x)]
  = Σ_t K_h(x_t, x) y_t / Σ_t K_h(x_t, x) + β̃ Σ_t K_h(x_t, x) (x − x_t) / Σ_t K_h(x_t, x)
  = β̃x + Σ_t K_h(x_t, x) (y_t − β̃x_t) / Σ_t K_h(x_t, x).

Similarly, we can show that, with

ᾱ = ȳ_T − β̄ x̄_T,  ū = (y − ȳ_T) + β̄ (x̄_T − x),

we have

m̄_sp(x) = ȳ(x) + β̄ [x − x̄(x)] = β̄x + Σ_t K_h(x_t, x) (y_t − β̄x_t) / Σ_t K_h(x_t, x),

and with

α̂ = ȳ_T − β̂ x̄_T,  û = (y − ȳ_T) + β̂ (x̄_T − x),

we have

m̂_sp(x) = ȳ(x) + β̂ [x − x̄(x)] = β̂x + Σ_t K_h(x_t, x) (y_t − β̂x_t) / Σ_t K_h(x_t, x).

Footnote 3: Ẽ(y | x) = α̃ + β̃x + Ẽ(ũ | x) = α̃ + β̃x + Σ_t K_h(x_t, x) ũ_t / Σ_t K_h(x_t, x) = α̃ + β̃x + Σ_t K_h(x_t, x) [(y_t − ȳ_T) + β̃ (x̄_T − x_t)] / Σ_t K_h(x_t, x) = α̃ + β̃x + (ȳ(x) − ȳ_T) + β̃ (x̄_T − x̄(x)) = ȳ(x) + β̃ [x − x̄(x)].
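A sketch of the two-step semiparametric forecast m̃_sp(x) = α̃ + β̃x + Ẽ(ũ | x): OLS (or a constrained/bagged slope supplied by the user) in the first step, then a kernel regression of the parametric residuals on x. A local constant (Nadaraya-Watson) fit is used here for the residual regression, so the function reproduces the closed form β̃x + Σ_t K_h(x_t, x)(y_t − β̃x_t)/Σ_t K_h(x_t, x) derived above; the Gaussian kernel is an assumption.

```python
import numpy as np

def sp_forecast(y, x, x0, h, beta=None):
    """Semiparametric forecast at x0: parametric part alpha + beta*x0 plus a kernel
    regression of the parametric residuals on x, evaluated at x0.
    If beta is supplied (e.g. a constrained or bagged slope), it replaces the OLS slope."""
    if beta is None:
        beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)       # OLS slope beta_tilde
    alpha = y.mean() - beta * x.mean()                          # alpha = ybar - beta*xbar
    u = y - alpha - beta * x                                    # parametric residuals
    K = np.exp(-0.5 * ((x - x0) / h) ** 2)                      # Gaussian kernel (assumption)
    Eu_x0 = np.sum(K * u) / np.sum(K)                           # local constant estimate of E(u|x0)
    return alpha + beta * x0 + Eu_x0
```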

3 Sampling Properties of Constrained Estimators

For the constrained estimators considered in the previous sections, we establish sampling properties such as the mean, variance, mean squared error and distribution, in comparison with the properties of their unconstrained counterparts. Intuitively, we have the following results: (1) the constrained estimator will be biased even if the unconstrained one is unbiased; (2) the constrained estimator will have a smaller mean squared error and a smaller variance if the constraint imposed is correct; and (3) the constrained estimator will have a skewed distribution compared with that of the unconstrained estimator.

3.1 Constrained Parametric Estimator

Proposition 1. Let the estimator β̃ of β have cumulative distribution function F_β̃(·). Then the constrained estimator β̄ = max{β̃, β₁}, for some given constant β₁, satisfies:

1. F_β̄(z) = 0 if z < β₁, and F_β̄(z) = F_β̃(z) if z ≥ β₁.
2. E β̄ ≥ E β̃.
3. Var(β̄) ≤ Var(β̃) if E β̃ ≤ β and β₁ ≤ β.
4. MSE(β̄) ≤ MSE(β̃) if β₁ ≤ β.

Remark 1. The above proposition establishes that the constrained estimator has a condensed density and is biased upward. It has smaller variance and mean squared error if the constraint imposed is correct, that is, β₁ ≤ β. For the case in which the constraint is misspecified, the variance and mean squared error comparisons are unexplored. Note that these results hold without any assumption on the distribution of β̃. Judge and Yancey (1986) considered the case in which β̃ has a normal distribution, for which they depicted a figure (p. 50) showing that the performance of β̄ relative to that of β̃ depends on δ = β₁ − β. The constrained estimator is inferior for a large range of values of δ, and when δ → ∞, MSE(β̄) equals the mean squared error of an equality-constrained estimator, i.e. β̄ = β₁. Under the normality assumption, Var(β̄) ≤ Var(β̃) over the whole range of the parameter space, and the former approaches the variance of the equality-constrained estimator. However, for the general case in which normality may not hold, the comparison is unclear.

Proposition 2. Consider a parametric estimator β̃ of β with γ(n) σ⁻¹ (β̃ − β) →d Z and Z ~ F(0, 1), where lim_{n→∞} γ(n) = ∞. The constrained estimator defined as β̄ = max{β̃, β₁}, for some given constant β₁, has the following properties:

1. When β > β₁, γ(n) σ⁻¹ (β̄ − β) →d Z.

2. When β = β₁,

Pr(γ(n) σ⁻¹ (β̄ − β) < z) → [1 − F(z)] F(0) + F(z) for z > 0, and → F(z) [1 − F(0)] for z ≤ 0.

If we further assume that γ(n) σ⁻¹ (β − β₁) = b for some positive constant b, that F is the standard normal CDF, and denote Z_b = Z + b, then

3. lim_{n→∞} γ(n) σ⁻¹ [β̄ − β] =_d Z_b 1[Z_b > 0] − b.
4. lim_{n→∞} γ(n) σ⁻¹ E[β̄ − β] = φ(b) + bΦ(b) − b.
5. lim_{n→∞} Var[γ(n) σ⁻¹ β̄] = Φ(b) + bφ(b) − φ²(b) − 2bφ(b)Φ(b) + b²Φ(b)[1 − Φ(b)].

Remark 2. The implications of the above proposition are as follows. (i) The constrained estimator shares the same asymptotic properties as the unconstrained estimator as long as the strict inequality constraint is correctly specified via the truncation. (ii) The constrained estimator is consistent and has a smaller asymptotic variance if the equality constraint is the truth.

3.2 Constrained Nonparametric Estimator

Proposition 3. Let the nonparametric estimator β̃(x) of β(x) satisfy

γ₁(n, h) σ_β(x)⁻¹ (β̃(x) − β(x)) →d Z,  γ₂(n, h) σ_α(x)⁻¹ (m̃(x) − g(x)) →d Z,

where lim_{n→∞} γ_i(n, h) = ∞, i = 1, 2, h is the bandwidth satisfying h = c(x) n^τ for some c(x) > 0 and τ < 0, and Z ~ N(0, 1). Then we have the following for the constrained estimator β̄(x) = max{β̃(x), β₁(x)}, for some given β₁(x):

1. When β(x) > β₁(x), γ₁(n, h) σ_β(x)⁻¹ (β̄(x) − β(x)) →d Z.
2. When β(x) = β₁(x),

Pr(γ₁(n, h) σ_β(x)⁻¹ (β̄(x) − β(x)) < z) → [1 − Φ(z)] Φ(0) + Φ(z) for z > 0, and → Φ(z) [1 − Φ(0)] for z ≤ 0.

3. γ₂(n, h) σ_g(x)⁻¹ (m̄(x) − g(x)) →d Z, if β(x) > β₁(x).

If we further assume that γ₁(n, h) σ⁻¹ (β(x) − β₁(x)) = b(x), for some function b(x) > 0, that F is the standard normal CDF, and denote Z_{b(x)} = Z + b(x), then

4. lim_{n→∞} γ₁(n, h) σ⁻¹ [β̄(x) − β(x)] =_d Z_{b(x)} 1[Z_{b(x)} > 0] − b(x).
5. lim_{n→∞} γ₁(n, h) σ⁻¹ E[β̄(x) − β(x)] = φ(b(x)) + b(x)Φ(b(x)) − b(x).

6. lim_{n→∞} Var[γ₁(n, h) σ⁻¹ β̄(x)] = Φ(b(x)) + b(x)φ(b(x)) − φ²(b(x)) − 2b(x)φ(b(x))Φ(b(x)) + b²(x)Φ(b(x))[1 − Φ(b(x))].

Remark 3(a). The above proposition gives the counterpart results for nonparametric estimators with constraints. The implications are similar to those of the previous proposition. Therefore, imposing a correct restriction in nonparametric estimation leads to consistent and more efficient estimation. Note that the constraint bound β₁(x) can vary with x. In the special case in which β₁(x) = β₁, a constant, it is efficient to adopt the restriction if it is correctly specified via the constrained estimator. The constrained estimator of g(x), ᾱ(x), has the same asymptotic properties as the usual unconstrained nonparametric estimator.

Remark 3(b). The above result is silent about what β(x) represents and about the nonparametric method used to obtain the estimator β̃(x) of β(x). Nonparametric estimators including density estimators (such as the local histogram estimator, the Rosenblatt-Parzen kernel estimator, the nearest-neighborhood estimator, the variable window-width estimator, series estimators, the penalized likelihood estimator, the local log-likelihood estimator, etc.), conditional moment estimators (such as the local regression estimator, the Nadaraya-Watson estimator, the recursive kernel estimator, the fixed-design estimator, the nearest-neighborhood estimator, spline estimators, the local polynomial estimator, etc.), and derivative estimators (the partial derivative estimator [Ullah 1988; Rilstone and Ullah 1989; Mack and Muller 1989], the average derivative estimator [Hardle and Stoker 1989; Rilstone 1991; Powell et al. 1989; Fan 1990], the local linear derivative estimator [Fan 1992, 1993; Fan and Gijbels 1992; Ruppert and Wand 1994], etc.) are all special cases of β̃(x), and therefore the results apply to all of these estimators when there is non-sample information in the form of an inequality constraint. See Pagan and Ullah (1999) for details of the above estimators, and Li and Racine (2007) for details on conditional density estimators and estimators with mixed data.

3.3 Constrained Semiparametric Estimator

The semiparametric slope estimator β̃ is shown to converge to β, the parameter that minimizes the Kullback-Leibler Information Criterion (KLIC); see White (1982, p. 3). With prior information about β, our constrained estimator β̄ preserves asymptotic properties similar to those in the parametric framework.

Proposition 4. Consider an estimator β̃ of β with γ(n) σ_β⁻¹ (β̃ − β) →d Z and Z ~ N(0, 1), where lim_{n→∞} γ(n) = ∞. The constrained estimator defined as β̄ = max{β̃, β₁}, for some given constant β₁, has the following properties:

1. When β > β₁, γ(n) σ_β⁻¹ (β̄ − β) →d Z.
2. When β = β₁,

Pr(γ(n) σ_β⁻¹ (β̄ − β) < z) → [1 − Φ(z)] Φ(0) + Φ(z) for z > 0, and → Φ(z) [1 − Φ(0)] for z ≤ 0.

3. γ(n, h) σ_g(x)⁻¹ (m̄_sp(x) − g(x)) →d Z ~ N(0, 1) regardless of model misspecification, for some γ(n, h) with properties similar to those in Proposition 3 and some σ_g(x) > 0.

Remark 4. This proposition states that the constrained semiparametric estimator β̄ of β shares the same asymptotic properties as the unconstrained estimator β̃. The powerful result in part 3 shows that the semiparametric estimator of m(x) is always a consistent estimator of the true function g(x), independent of the specification of the model. While the nonparametric estimator possesses this property as well, the parametric estimator considered in Proposition 2 does not enjoy this nice property.

4 Sampling Properties of Bagged Constrained Estimators

4.1 Bagged Constrained Parametric Estimator

Bühlmann and Yu (2002) analyzed the properties of an indicator predictor of the form

θ̂_n(x) = 1[d̂_n ≤ x],  x ∈ R,

and its bagged predictor θ̂_{n;B}(x) = E* θ̂*_n(x), with the threshold d̂_n satisfying the following assumption:

b_n (d̂_n − d_0) →d N(0, σ²),
sup_v | P*(b_n (d̂*_n − d̂_n) ≤ v) − Φ(v/σ) | = o_p(1),

where (b_n)_{n∈N} is an increasing sequence, d̂*_n is a bootstrapped estimator of d̂_n, 0 < σ² < ∞, and Φ(·) is the standard normal CDF.

Lemma 5 (Bühlmann and Yu, 2002). Under the above assumption, the predictor θ̂_n(x) has the following properties, with x = x_n(c) = d_0 + cσ b_n⁻¹:

1. (a) θ̂_n(x_n(c)) →d g(Z) = 1[Z ≤ c],
   (b) lim_{n→∞} E[θ̂_n(x_n(c))] = Φ(c),
   (c) lim_{n→∞} Var[θ̂_n(x_n(c))] = Φ(c)(1 − Φ(c));
2. (a) θ̂_{n;B}(x_n(c)) →d g_B(Z) = Φ(c − Z),
   (b) lim_{n→∞} E[θ̂_{n;B}(x_n(c))] = Φ∗φ(c),

   (c) lim_{n→∞} Var[θ̂_{n;B}(x_n(c))] = Φ²∗φ(c) − [Φ∗φ(c)]²,

where Z ~ N(0, 1), φ(·) is the standard normal pdf, and f∗g(·) denotes the convolution of f and g.

Remark 5. As a special case of the above result, when c = 0 we have Var[θ̂_n(x_n(0))] → 1/4 and Var[θ̂_{n;B}(x_n(0))] → 1/12. Therefore, the asymptotic variance is reduced by a factor of 3 by the bagged predictor θ̂_{n;B}(x_n(c)). Bühlmann and Yu noted that when c ≠ 0, bagging still reduces the variance without much sacrifice in bias.

Bagging also finds application in variable selection in linear regression models. Bühlmann and Yu (2002) considered the simple linear regression model

Y_i = X_i β + ε_i,

with X_1, ..., X_n real-valued and i.i.d. with E|X_i|² = 1, {ε_i} i.i.d. and independent of {X_i}, E[ε_i] = 0, Var(ε_i) = σ² < ∞, with the predictor of interest

θ̂_n(x) = β̂ 1[|β̂| > u_n] x,  x ∈ R,

where u_n = u_n(c) = cσ n^{−1/2}.

Lemma 6 (Bühlmann and Yu, 2002). Assume in the above model that β = β_n(b) = bσ n^{−1/2}, E|ε_i|⁴ < ∞ and E|X_i|⁴ < ∞. Then

1. n^{1/2} σ⁻¹ θ̂_n(x) →d g(Z_b) = (Z_b − Z_b 1[|Z_b| ≤ c]) x,
2. n^{1/2} σ⁻¹ θ̂_{n;B}(x) →d g_B(Z_b),

where Z_b = Z + b, Z ~ N(0, 1), and

g_B(z) = (z − {zΦ(c − z) − φ(c − z) − zΦ(−c − z) + φ(−c − z)}) x.

Remark 6. Note that when x = 1, the above lemma establishes the asymptotic properties of the parameter estimator θ̂_n(1) = β̂ 1[|β̂| > u_n] and of its bagged version. Although the result is an application of bagging to variable selection in a regression model, we can adapt it to our question of interest, in which we estimate the parameter β with a known lower bound. We state the result in the next proposition.
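A quick Monte Carlo check of the variance reduction noted in Remark 5 (a sketch, not from the paper): with d̂_n the mean of n standard normals, d_0 = 0 and the predictor evaluated at x = d_0 (the c = 0 case), the plain indicator has variance near 1/4 while the bagged indicator, computed here by explicit bootstrap averaging, has variance near 1/12. The sample size, the number of bootstrap draws and the number of replications are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, J, R = 200, 200, 2000              # sample size, bootstrap draws, Monte Carlo replications
plain, bagged = [], []

for _ in range(R):
    z = rng.standard_normal(n)                        # data with true threshold d_0 = 0
    d_hat = z.mean()
    plain.append(float(d_hat <= 0.0))                 # indicator predictor at x = d_0
    # bagged predictor: average the indicator over bootstrap re-estimates of d_hat
    boot_means = rng.choice(z, size=(J, n), replace=True).mean(axis=1)
    bagged.append(np.mean(boot_means <= 0.0))

print(np.var(plain), np.var(bagged))                  # roughly 0.25 versus 0.083
```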

Proposition 7. Let an estimator β̃ of β and its bootstrapped version β̃* have the following asymptotics:

γ(n) σ⁻¹ (β̃ − β) →d Z,
γ(n) σ⁻¹ (β̃* − β̃) →d* Z,   (7)

with Z ~ N(0, 1) and lim_{n→∞} γ(n) = ∞. Define β̄ = max{β̃, β₁}, with some known β₁ < β that satisfies

γ(n) σ⁻¹ (β − β₁) = b,

where b is some positive constant. For the bagged version of β̄, β̂ ≡ E* β̄*, we have:

1. β̂ = β̃ [1 − Φ(−b − Z)] + β₁ Φ(−b − Z) + O(1/n) + O(1/γ(n)) = β̃ + (β₁ − β̃) Φ(−b − Z) + O(1/n) + O(1/γ(n)).

2. γ(n) σ⁻¹ (β̂ − β) →d Z − Z_b Φ(−b − Z) + φ(−b − Z).

3. (a) E β̂ = [1 − Φ∗φ(−b)] β + β₁ Φ∗φ(−b) + o(1) = β + O(1/γ(n)) → β.
   (b) Var(β̂) = β² {1 − 2Φ∗φ(−b) + Φ²∗φ(−b)} + 2β₁β {Φ∗φ(−b) − Φ²∗φ(−b)} + β₁² Φ²∗φ(−b) − [(1 − Φ∗φ(−b)) β + β₁ Φ∗φ(−b)]² + o(1) → 0.

4. (a) lim_{n→∞} γ(n) σ⁻¹ E[β̂ − β] = 2φ∗φ(−b) − bΦ∗φ(−b).
   (b) lim_{n→∞} Var[γ(n) σ⁻¹ β̂] = 1 + Φ²∗φ(−b) + Φ²∗φ(−b) − 2bΦ²∗φ(−b) + b²Φ²∗φ(−b) + φ²∗φ(−b) − 2Φ∗φ(−b) − 2Φ∗φ(−b) + 2bΦ∗φ(−b) − 2φ∗φ(−b) + 2(Φφ)∗φ(−b) − 2b(Φφ)∗φ(−b) − [2φ∗φ(−b) − bΦ∗φ(−b)]².

Remark 7. We use the notation f∗g to denote the convolution of two functions f and g, defined as f∗g(s) = ∫ f(t) g(s − t) dt. In the case that β̃ and β̄ have the standard convergence rate of √n, we know from part 1 that β̂ = β̄ + O_p(1/√n), while it is implied by Proposition 2 that β̄ = β + O_p(1/√n). That is, β̂ converges to β at the same rate as β̄ does. It is clear from part 4 of Proposition 7 that both the bias and the variance of the bagged constrained estimator depend on the parameter b, which measures how accurate the lower bound β₁ of β is. Figure 1 plots the variance, squared bias and mean squared error of β̂ together with those of β̄, against values of b in the range [−1, 5]. It is seen that our bagging estimator enjoys a large reduction in mean squared error for values of b in [1, 3].

Figure 1: Asymptotic variance, squared bias, and mean squared error of the constrained estimator and the bagged constrained estimator.

4.2 Bagged Constrained Nonparametric Estimator

Proposition 8. Let an estimator β̃(x) of β(x) and its bootstrapped version β̃*(x) have the following asymptotics:

γ₁(n, h) σ(x)⁻¹ (β̃(x) − β(x)) →d Z,
γ₂(n, h) σ(x)⁻¹ (β̃*(x) − β̃(x)) →d* Z,   (8)

with Z ~ N(0, 1), lim_{n→∞} γ_i(n, h) = ∞, i = 1, 2, and h the bandwidth. Define β̄(x) = max{β̃(x), β₁(x)}, with some known β₁(x) < β(x) that satisfies

γ₁(n, h) σ(x)⁻¹ (β(x) − β₁(x)) = b(x),

where b(·) is some positive function. For the bagged version of β̄(x), β̂(x) ≡ E* β̄*(x), we have:

1. β̂(x) = β̃(x) [1 − Φ(−b(x) − Z)] + β₁(x) Φ(−b(x) − Z) + O(1/n) + o(1/γ(n, h))
        = β̃(x) + (β₁(x) − β̃(x)) Φ(−b(x) − Z) + O(1/n) + o(1/γ(n, h)).

2. γ₁(n, h) σ(x)⁻¹ (β̂(x) − β(x)) →d Z [1 − Φ(−b(x) − Z)] + φ(−b(x) − Z).

3. (a) E β̂(x) = [1 − Φ∗φ(−b(x))] β(x) + β₁(x) Φ∗φ(−b(x)) + o(1) → β(x).
   (b) Var(β̂(x)) = β(x)² {1 − 2Φ∗φ(−b(x)) + Φ²∗φ(−b(x))} + 2β₁(x)β(x) {Φ∗φ(−b(x)) − Φ²∗φ(−b(x))} + β₁²(x) Φ²∗φ(−b(x)) − [(1 − Φ∗φ(−b(x))) β(x) + β₁(x) Φ∗φ(−b(x))]² + o(1) → 0.

4. (a) lim_{n→∞} γ₁(n, h) σ⁻¹ E[β̂(x) − β(x)] = 2φ∗φ(−b(x)) − b(x) Φ∗φ(−b(x)).
   (b) lim_{n→∞} Var[γ₁(n, h) σ⁻¹ β̂(x)] = 1 + Φ²∗φ(−b(x)) + Φ²∗φ(−b(x)) − 2b(x)Φ²∗φ(−b(x)) + b²(x)Φ²∗φ(−b(x)) + φ²∗φ(−b(x)) − 2Φ∗φ(−b(x)) − 2Φ∗φ(−b(x)) + 2b(x)Φ∗φ(−b(x)) − 2φ∗φ(−b(x)) + 2(Φφ)∗φ(−b(x)) − 2b(x)(Φφ)∗φ(−b(x)) − [2φ∗φ(−b(x)) − b(x)Φ∗φ(−b(x))]².

5. γ₂(n, h) σ_g(x)⁻¹ (m̂(x) − g(x)) →d Z.

Remark 8. When b(·) is a constant function, the limiting distribution in part 2 is the same as in the parametric case. That is, for all possible values of x, γ₁(n, h) σ(x)⁻¹ (β̂(x) − β(x)) converges to the same random variable as γ(n) σ⁻¹ (β̂ − β) does in the parametric case.
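The message of Figure 1 and of Propositions 7 and 8 can be illustrated with a small Monte Carlo sketch (not the paper's code): with the true slope local to its lower bound 0, β = b σ_β̃, compare the finite-sample MSE of the unconstrained OLS slope, the constrained slope max{β̃, 0} and the bagged constrained slope. The uniform design for x, the pairs bootstrap and the particular sample sizes are illustrative assumptions.

```python
import numpy as np

def mse_comparison(b, n=200, J=99, R=1000, seed=0):
    """MSE of the unconstrained, constrained (max{.,0}) and bagged constrained OLS slope
    when the true slope is local to the bound: beta = b * sigma_beta."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    sigma_beta = 1.0 / np.sqrt(n * x.var())            # approximate std. dev. of the OLS slope
    beta = b * sigma_beta
    err = {"unconstrained": [], "constrained": [], "bagged": []}
    for _ in range(R):
        y = beta * x + rng.standard_normal(n)
        bt = np.cov(x, y, bias=True)[0, 1] / x.var()                    # beta_tilde
        idx = rng.integers(0, n, (J, n))                                # pairs bootstrap indices
        bstar = [np.cov(x[i], y[i], bias=True)[0, 1] / x[i].var() for i in idx]
        bhat = np.mean(np.maximum(bstar, 0.0))                          # bagged constrained slope
        err["unconstrained"].append((bt - beta) ** 2)
        err["constrained"].append((max(bt, 0.0) - beta) ** 2)
        err["bagged"].append((bhat - beta) ** 2)
    return {k: float(np.mean(v)) for k, v in err.items()}

# e.g. for b in (0.5, 1.0, 2.0, 3.0): print(b, mse_comparison(b))
```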

4.3 Bagged Constrained Semiparametric Estimator

Martins-Filho, Mishra and Ullah (2007) considered a class of improved parametrically guided estimators of the form

m(x_i) = m(x_i, θ) + r_u(x_i, θ) m(x_i, θ)^τ,

where m(x_i, θ) is some guiding functional form, r_u(x_i, θ) = [m(x_i) − m(x_i, θ)] / m(x_i, θ)^τ and τ ∈ R. They proposed a two-step estimation procedure with a method for choosing the parameter τ, and derived the optimal rate and asymptotic normality results. See Martins-Filho et al. (2007) for details of the methodology and of the properties of the estimator of m(x) in this general class. Within this general class, we consider the linear parametric additive case, in which τ = 0.

Proposition 9. Let an estimator β̃ of β and its bootstrapped version β̃* have the following asymptotics:

γ(n) σ⁻¹ (β̃ − β) →d Z,
γ(n) σ⁻¹ (β̃* − β̃) →d* Z,   (9)

with Z ~ N(0, 1) and lim_{n→∞} γ(n) = ∞. Define β̄ = max{β̃, β₁}, with some known β₁ < β that satisfies

γ(n) σ⁻¹ (β − β₁) = b,

where b is some positive constant. For the semiparametric estimator of g(x), m̂_sp(x), constructed using the bagged version of β̄, β̂ ≡ E* β̄*, we have

γ(n) σ⁻¹ (β̂ − β) →d Z [1 − Φ(−b − Z)] + φ(−b − Z),
γ(n, h) σ̂_g(x)⁻¹ (m̂_sp(x) − g(x)) →d Z.

5 Simulation

We perform Monte Carlo exercises in this section to examine the finite-sample properties of our proposed bagged nonparametric and semiparametric estimators, while for the parametric case the reduction in mean squared error of the bagged estimators is evident, as shown in Proposition 1. Besides the estimators introduced in Section 2, we also consider a Monotonic Nonparametric (MNP) estimator, which is derived via a two-step procedure: (i) apply Dykstra's (1983) constrained least squares algorithm (see footnote 4), subject to the monotonicity constraint, to transform the original data {X_i, y_i}_{i=1}^N into {X_i, m_i}_{i=1}^N such that X_i ≤ X_j implies m_i ≤ m_j for any i, j; (ii) use the Nadaraya-Watson estimator to estimate m(x) = E(y | x) using the transformed data {X_i, m_i}_{i=1}^N instead of the original data {X_i, y_i}_{i=1}^N.

Footnote 4: Dykstra's algorithm is described in Appendix A.

See Mammen (1991) for the asymptotic properties of this estimator. We also conjecture that the bagged version of MNP will further reduce the variance, since the first-stage monotonic transformation is sensitive to the available sample and quite often results in step functions.

We consider the following four data generating processes (DGPs), which feature monotonicity of the conditional mean of y_{t+1} given x_t on [0, 1]:

DGP 1: y_{t+1} = 3 + e_t,
DGP 2: y_{t+1} = x_t + e_t,
DGP 3: y_{t+1} = (4x_t ...) + e_t,
DGP 4: y_{t+1} = arctan[10(2x_t − 1)] + e_t,

where e_t ~ i.i.d. N(0, 1) and x_t is generated via an AR(1) process according to

u_t = ρ u_{t−1} + ε_t,  ε_t ~ i.i.d. N(0, 1),
x_t = (u_t − min_i u_i) / (max_i u_i − min_i u_i),

so that x_t is transformed to lie inside the closed interval [0, 1]. We consider values of ρ taken from the set {0, 0.2, 0.5, 0.8} to allow different levels of persistence in the regressor.

A few words on the processes presented above are in order. DGPs 1 and 2 are linear in x_t and are designed to show how the nonparametric and semiparametric models perform when linearity is not taken as prior modeling information. DGPs 3 and 4 are nonlinear in x_t, and are neither concave nor convex (but both are quasiconcave and quasiconvex) on [0, 1]. The first purpose of these two designs is to show how severe the misspecification of the linear model can be. The second objective is to compare the performance of the nonparametric and semiparametric models with monotonicity as a prior, and of their bagging counterparts.

We replicate each process 200 times, with 99 bootstrap samples taken for the bagged estimators in each replication. We take in-sample observations of size N = 25, 50, 100, 200, respectively, for estimation and keep the last 50 observations for model evaluation. For the nonparametric and semiparametric estimators, we use cross-validation to select the bandwidth that minimizes the integrated mean squared error (IMSE), and we use this same bandwidth for the bootstrap samples generated within each replication. The mean of the mean squared errors (MSEs), averaged over the evaluation data points, across the 200 replications is computed for each of the proposed estimators. Further, we compute the percentage reduction in the MSE of each proposed model (MSE_M) relative to that of the linear model (MSE_L) by the formula

(1 − MSE_M / MSE_L) × 100.

We report this measure for the four DGPs in Tables 1-4, respectively.
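For reference, a sketch of the regressor construction and of DGPs 1, 2 and 4 above, together with the two-step MNP estimator; scikit-learn's PAVA-based IsotonicRegression is used in place of Dykstra's algorithm, which is an assumption of the sketch, and DGP 3 and the one-period lag between x_t and y_{t+1} are omitted for brevity.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def make_regressor(T, rho, rng):
    """AR(1) series u_t = rho*u_{t-1} + eps_t, rescaled to x_t in [0, 1]."""
    eps = rng.standard_normal(T)
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]
    return (u - u.min()) / (u.max() - u.min())

def simulate(dgp, T, rho, rng):
    """DGPs 1, 2 and 4 of the Monte Carlo design (DGP 3 is not included in this sketch)."""
    x = make_regressor(T, rho, rng)
    e = rng.standard_normal(T)
    if dgp == 1:
        y = 3.0 + e
    elif dgp == 2:
        y = x + e
    else:
        y = np.arctan(10.0 * (2.0 * x - 1.0)) + e
    return x, y

def mnp_fit(x, y, x_eval, h):
    """Two-step monotonic NP estimator: isotonic regression (standing in for Dykstra's
    algorithm) transforms y into a monotone sequence m_i, then Nadaraya-Watson smooths it."""
    m = IsotonicRegression(increasing=True).fit_transform(x, y)
    K = np.exp(-0.5 * ((x[None, :] - np.asarray(x_eval)[:, None]) / h) ** 2)
    return (K @ m) / K.sum(axis=1)

rng = np.random.default_rng(0)
x, y = simulate(4, 150, 0.5, rng)
print(mnp_fit(x, y, [0.1, 0.5, 0.9], h=0.1))
```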

Several findings revealed by the simulation are as follows. For DGPs 1 and 2, we have that (1) imposing the positive slope constraint always reduces the MSE for all nonparametric models; (2) although (1) is not true for the linear model, the bagged positive-slope estimator always improves over the parametric linear model; (3) the true design can be beaten by nonparametric models such as SP-PS, BSP-PS, AL-PS, BAL-PS, NP-PS and BNP-PS; (4) SP-PS and AL-PS are even the best models in several cases for DGP 1; and (5) BL-PS is the best model overall in terms of MSE. For DGP 2, we find that (1) in most cases, the nonparametric and semiparametric models outperform the linear model, as expected; (2) NP-PS has the most predictive power in 6 out of 16 cases; (3) SP is the best model in 7 out of 16 cases, which confirms the power of the parametrically guided semiparametric model, as reported by Martins-Filho, Mishra and Ullah (2007); (4) prior information on the positivity of the slope helps to reduce the MSE of the nonparametric models; and (5) bagging does not serve to reduce the MSE of the constrained estimators as expected. For DGP 3, the results show that (1) the nonparametric and semiparametric models outperform the misspecified linear model; (2) the constrained models outperform their corresponding unconstrained counterparts; (3) in contrast to finding (5) for DGP 2, adopting bagging always improves the constrained estimators; (4) BNP-PS remains the best model in 5 cases; and (5) Dykstra's algorithm works quite well for DGP 3.

Although there are quite interesting findings reported here, it is more appealing to examine the above methods in real economic applications, where the true model is not known to us. In the next section, we examine the performance of these models in forecasting the U.S. equity premium.

6 Application: Predicting the Equity Premium

As noted by Fama and French (2002), the Equity Premium (the difference between the expected return on the market portfolio of common stocks and the rate of return on risk-free assets such as bonds or bills) plays an important role in portfolio allocation decisions, estimates of the cost of capital, the debate about the advantages of investing Social Security funds in stocks, and other economic and financial applications. However, the predictability of the equity premium has been an unsettled issue in the financial literature since the work of Fama and French (1988), Campbell (1991) and Cochrane (1992).

Goyal and Welch (2008) examined the predictors suggested as good instruments in the literature and reported their poor performance in both in-sample and out-of-sample forecasts, arguing that they are beaten by the historical average stock return. Campbell and Thompson (2008) introduced the perspective of a real-world investor who would instead impose a restriction on the regression coefficient such that it has the theoretically expected sign. This simple but sensible sign constraint leads to better out-of-sample performance of predictors that have significant in-sample forecasting power. Chen and Hong (2009) went further, arguing that the sign restriction imposed by Campbell and Thompson (2008) is a form of nonlinearity, and suggested using nonparametric methods instead of linear models to form forecasts of stock returns, which confirms the conclusion of Campbell and Thompson.

Table 1: Percentage gain in mean squared error relative to the linear model, DGP 1. Columns: ρ = 0, 0.2, 0.5, 0.8; rows: N = 25, 50, 100, 200 and models L-PS, BL-PS, NP, NP-PS, BNP-PS, HH, SP, SP-PS, BSP-PS, D, B-D, AL, AL-PS, BAL-PS (numerical entries omitted).

Table 2: Percentage gain in mean squared error relative to the linear model, DGP 2. Same layout as Table 1 (numerical entries omitted).

Table 3: Percentage gain in mean squared error relative to the linear model, DGP 3. Same layout as Table 1 (numerical entries omitted).

Table 4: Percentage gain in mean squared error relative to the linear model, DGP 4. Same layout as Table 1 (numerical entries omitted).

Notes to Tables 1-4: HM = historical mean; L = linear model; NP = nonparametric model; AL = averaged local estimator; D = NP with Dykstra's algorithm; SP = semiparametric model; HH = Hall and Huang (2001); PS = positive slope constraint; B = bagging.

Other approaches adopted in the literature include, for example, the adaptive forecast combination of Timmermann (2008). As an alternative to these approaches, we use bagging to impose the sign restriction on the slope coefficient in the estimation of the nonparametric and semiparametric forecast models. For comparison purposes, we consider the linear models adopted by Goyal and Welch (2008), Campbell and Thompson (2008) and Hillebrand, Lee and Medeiros (2009), our proposed nonparametric and semiparametric models with constraints, and their bagging counterparts. Our comparison focuses on the out-of-sample forecast mean squared errors (FMSEs) of the aforementioned models relative to that produced by the historical average return forecast.

6.1 Forecast Framework

For a given predictor x_t, we are interested in forming an h-step-ahead forecast y_{T+h} = g_{T,h}(x | I_T), where I_T = {x_1, ..., x_T, y_1, ..., y_T}. Besides our proposed models, we consider several popular models in the literature for comparison.

Linear models include (1) the historical mean model (HM), in which

g_{T,h}(x | I_T) = (1/T) Σ_{t=1}^T y_t,

and (2) the simple linear regression model (L),

g_{T,h}(x | I_T) = α̃ + β̃x,

where (α̃, β̃) are the OLS estimators in the regression of y on x (including a constant term). (3) The linear model with positive slope restriction (L-PS) admits

g_{T,h}(x | I_T) = ᾱ + β̄x,

where β̄ = max{β̃, 0} and ᾱ = ȳ_T − β̄ x̄_T. And (4) its bagged version (BL-PS),

g_{T,h}(x | I_T) = α̂ + β̂x,

where β̂ = (1/J) Σ_{j=1}^J β̄*(j), β̄*(j) = max{β̃*(j), 0} and α̂ = ȳ_T − β̂ x̄_T.

Nonparametric models include (5) the LLLS forecast (NP),

g_{T,h}(x | I_T) = ȳ(x) − β̃(x) [x̄(x) − x],

and (6) the LLLS forecast with positive slope constraint (NP-PS),

g_{T,h}(x | I_T) = ȳ(x) − β̄(x) [x̄(x) − x],

(7) the bagged LLLS forecast with positive slope constraint (BNP-PS),

g_{T,h}(x | I_T) = ȳ(x) − β̂(x) [x̄(x) − x],

and (8) the model proposed by Hall and Huang (2001) (HH),

g_{T,h}(x; p | I_T) = Σ_{t=1}^{T−h} p̂_t A_t(x) y_{t+h}.

Semiparametric models include (9) SP,

g_{T,h}(x | I_T) = ȳ(x) − β̃ [x̄(x) − x],

(10) SP-PS,

g_{T,h}(x | I_T) = ȳ(x) − β̄ [x̄(x) − x],

and (11) BSP-PS,

g_{T,h}(x | I_T) = ȳ(x) − β̂ [x̄(x) − x].

We also include (12) the monotonic nonparametric model with Dykstra's algorithm in the first step (D) and (13) its bagging counterpart (BD), as well as nonparametric models with averaged local estimators: (14) AL,

g_{T,h}(x | I_T) = ȳ(x) − β̃_NP-AVG [x̄(x) − x],

(15) AL-PS,

g_{T,h}(x | I_T) = ȳ(x) − β̄_NP-AVG [x̄(x) − x],

and (16) BAL-PS,

g_{T,h}(x | I_T) = ȳ(x) − β̂_NP-AVG [x̄(x) − x].
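The out-of-sample comparison behind the FMSE criterion can be sketched as follows (not the paper's code): recursive expanding-window one-step-ahead forecasts from a candidate model are compared with the historical-mean benchmark; the expanding window, the minimum training size t0 and the illustrative L-PS candidate are assumptions of the example.

```python
import numpy as np

def fmse_ratio(y, x, forecast_fn, t0=40):
    """Out-of-sample forecast MSE of forecast_fn relative to the historical-mean forecast.
    forecast_fn(y_past, x_past, x_now) returns the one-step-ahead forecast of y_{t+1}."""
    e_model, e_hm = [], []
    for t in range(t0, len(y) - 1):
        f_model = forecast_fn(y[: t + 1], x[: t + 1], x[t])    # uses data up to time t
        f_hm = y[: t + 1].mean()                               # historical-mean benchmark
        e_model.append((y[t + 1] - f_model) ** 2)
        e_hm.append((y[t + 1] - f_hm) ** 2)
    return float(np.mean(e_model) / np.mean(e_hm))             # < 1 means the model beats HM

def lps_forecast(y_past, x_past, x_now):
    """Illustrative candidate: the constrained linear forecast (L-PS)."""
    xp, yf = x_past[:-1], y_past[1:]                           # align x_t with y_{t+1}
    beta = np.cov(xp, yf, bias=True)[0, 1] / xp.var()
    beta = max(beta, 0.0)                                      # positive slope constraint
    alpha = yf.mean() - beta * xp.mean()
    return alpha + beta * x_now

# Usage (illustrative): ratio = fmse_ratio(y, x, lps_forecast)
```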

6.2 Data Description

The most prominent predictors proposed in the literature for forecasting the equity premium include the dividend price ratio and dividend yield, the earnings price ratio and dividend-earnings (payout) ratio, various interest rates and spreads, inflation rates, the book-to-market ratio, volatility, the investment-capital ratio, the consumption, wealth, and income ratio, and aggregate net or equity issuing activity. The dependent variable we consider is always the equity premium, defined as the difference between the total rate of return on the stock market and the prevailing short-term interest rate.

Stock Returns: S&P 500 index returns from 1926 to 2005 are taken from Center for Research in Security Prices (CRSP) month-end values. The stock returns used are the continuously compounded returns on the S&P 500 index, including dividends. Yearly and lower-frequency data from 1872 to 2005 are available from Robert Shiller's website (footnote 5: shiller/data.htm). Our quarterly data consist of price returns (capital gain only), total returns (capital gain plus dividend), and dividends on the Standard & Poor Composite Index from March 1936 to December 2001, obtained from Standard & Poor's Statistical Service. Monthly returns on the S&P 500 index from January 1970 to December 2006 are taken from CRSP month-end values. Monthly dividends on the S&P 500 index are from Standard & Poor's Statistical Service.

Risk-free Rate: The risk-free rate from 1920 to 2005 is the Treasury-bill rate. We use Commercial Paper rates from the National Bureau of Economic Research (NBER) Macrohistory database to estimate the T-bill rate prior to 1920, as done in Goyal and Welch (2008). For quarterly and monthly data, T-bill rates from 1934 to 2005 are the 3-Month Treasury Bill: Secondary Market Rate from the economic research database at the Federal Reserve Bank of St. Louis (FRED).

The first set of independent variables are valuation ratios:

Dividend Price Ratio: Dividends are 12-month moving sums of dividends paid on the S&P 500 index. The data are from Robert Shiller's website, beginning in 1871; dividends from 1988 to 2005 are from the S&P Corporation. The Dividend Price Ratio (d/p) is the difference between the log of dividends and the log of prices.

Earnings Price Ratio: Earnings are 12-month moving sums of earnings on the S&P 500 index. The data are again from Robert Shiller's website, beginning in 1871; earnings from 1988 to 2005 are estimates from Goyal and Welch (2008), based on interpolation of quarterly earnings provided by the S&P Corporation. The Earnings Price Ratio (e/p) is the difference between the log of earnings and the log of prices.

Book-to-Market Ratio: The Book-to-Market Ratio (b/m) is the ratio of book value to market value for the Dow Jones Industrial Average. For January and February, it is calculated as the ratio of the book value at the end of the year two years prior to the price at the end of the current month. For the months from March to December, it is computed by dividing the book value at the end of the previous year by the price at the end of the current month. See Kothari and Shanken (1997) and Pontiff and Schall (1998). Book values from 1920 to 2005 are from the Value Line website.

Our second group of independent variables includes nominal interest rates and inflation:

Treasury Bill: Treasury-bill rates from 1920 to 1933 are the U.S. Yields on Short-Term United States Securities, Three-Six Month Treasury Notes and Certificates, Three Month Treasury series in the NBER Macrohistory database. Treasury-bill rates from 1934 to 2005 are the 3-Month Treasury Bill: Secondary Market Rate from the economic research database at the Federal Reserve Bank of St. Louis (FRED).

Long Term Yield: Our long-term government bond yield data from 1919 to 1925 are the U.S. Yield on Long-Term United States Bonds series in the NBER Macrohistory database. Yields from 1926 to 2005 are from Ibbotson's Stocks, Bonds, Bills and Inflation Yearbook.

Long Term Return: Long-term returns are also from Ibbotson's Stocks, Bonds, Bills and Inflation Yearbook.

Term Spread: The Term Spread (tms) is the difference between the long-term


More information

Predicting bond returns using the output gap in expansions and recessions

Predicting bond returns using the output gap in expansions and recessions Erasmus university Rotterdam Erasmus school of economics Bachelor Thesis Quantitative finance Predicting bond returns using the output gap in expansions and recessions Author: Martijn Eertman Studentnumber:

More information

SINGLE-STEP ESTIMATION OF A PARTIALLY LINEAR MODEL

SINGLE-STEP ESTIMATION OF A PARTIALLY LINEAR MODEL SINGLE-STEP ESTIMATION OF A PARTIALLY LINEAR MODEL DANIEL J. HENDERSON AND CHRISTOPHER F. PARMETER Abstract. In this paper we propose an asymptotically equivalent single-step alternative to the two-step

More information

Calibration Estimation of Semiparametric Copula Models with Data Missing at Random

Calibration Estimation of Semiparametric Copula Models with Data Missing at Random Calibration Estimation of Semiparametric Copula Models with Data Missing at Random Shigeyuki Hamori 1 Kaiji Motegi 1 Zheng Zhang 2 1 Kobe University 2 Renmin University of China Econometrics Workshop UNC

More information

Nonparametric Modal Regression

Nonparametric Modal Regression Nonparametric Modal Regression Summary In this article, we propose a new nonparametric modal regression model, which aims to estimate the mode of the conditional density of Y given predictors X. The nonparametric

More information

Independent and conditionally independent counterfactual distributions

Independent and conditionally independent counterfactual distributions Independent and conditionally independent counterfactual distributions Marcin Wolski European Investment Bank M.Wolski@eib.org Society for Nonlinear Dynamics and Econometrics Tokyo March 19, 2018 Views

More information

The Generalized Cochrane-Orcutt Transformation Estimation For Spurious and Fractional Spurious Regressions

The Generalized Cochrane-Orcutt Transformation Estimation For Spurious and Fractional Spurious Regressions The Generalized Cochrane-Orcutt Transformation Estimation For Spurious and Fractional Spurious Regressions Shin-Huei Wang and Cheng Hsiao Jan 31, 2010 Abstract This paper proposes a highly consistent estimation,

More information

Department of Economics, Vanderbilt University While it is known that pseudo-out-of-sample methods are not optimal for

Department of Economics, Vanderbilt University While it is known that pseudo-out-of-sample methods are not optimal for Comment Atsushi Inoue Department of Economics, Vanderbilt University (atsushi.inoue@vanderbilt.edu) While it is known that pseudo-out-of-sample methods are not optimal for comparing models, they are nevertheless

More information

Multiple Regression Analysis. Part III. Multiple Regression Analysis

Multiple Regression Analysis. Part III. Multiple Regression Analysis Part III Multiple Regression Analysis As of Sep 26, 2017 1 Multiple Regression Analysis Estimation Matrix form Goodness-of-Fit R-square Adjusted R-square Expected values of the OLS estimators Irrelevant

More information

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Vadim Marmer University of British Columbia Artyom Shneyerov CIRANO, CIREQ, and Concordia University August 30, 2010 Abstract

More information

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University The Bootstrap: Theory and Applications Biing-Shen Kuo National Chengchi University Motivation: Poor Asymptotic Approximation Most of statistical inference relies on asymptotic theory. Motivation: Poor

More information

Time-varying sparsity in dynamic regression models

Time-varying sparsity in dynamic regression models Time-varying sparsity in dynamic regression models Professor Jim Griffin (joint work with Maria Kalli, Canterbury Christ Church University) University of Kent Regression models Often we are interested

More information

Time Series and Forecasting Lecture 4 NonLinear Time Series

Time Series and Forecasting Lecture 4 NonLinear Time Series Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations

More information

LASSO-type penalties for covariate selection and forecasting in time series

LASSO-type penalties for covariate selection and forecasting in time series LASSO-type penalties for covariate selection and forecasting in time series Evandro Konzen 1 Flavio A. Ziegelmann 2 Abstract This paper studies some forms of LASSO-type penalties in time series to reduce

More information

Predictive Regressions: A Reduced-Bias. Estimation Method

Predictive Regressions: A Reduced-Bias. Estimation Method Predictive Regressions: A Reduced-Bias Estimation Method Yakov Amihud 1 Clifford M. Hurvich 2 November 28, 2003 1 Ira Leon Rennert Professor of Finance, Stern School of Business, New York University, New

More information

Mixed frequency models with MA components

Mixed frequency models with MA components Mixed frequency models with MA components Claudia Foroni a Massimiliano Marcellino b Dalibor Stevanović c a Deutsche Bundesbank b Bocconi University, IGIER and CEPR c Université du Québec à Montréal September

More information

Research Division Federal Reserve Bank of St. Louis Working Paper Series

Research Division Federal Reserve Bank of St. Louis Working Paper Series Research Division Federal Reserve Bank of St. Louis Working Paper Series Asymptotic Inference for Performance Fees and the Predictability of Asset Returns Michael W. McCracken and Giorgio Valente Working

More information

Discussion of the paper Inference for Semiparametric Models: Some Questions and an Answer by Bickel and Kwon

Discussion of the paper Inference for Semiparametric Models: Some Questions and an Answer by Bickel and Kwon Discussion of the paper Inference for Semiparametric Models: Some Questions and an Answer by Bickel and Kwon Jianqing Fan Department of Statistics Chinese University of Hong Kong AND Department of Statistics

More information

Web-based Supplementary Material for. Dependence Calibration in Conditional Copulas: A Nonparametric Approach

Web-based Supplementary Material for. Dependence Calibration in Conditional Copulas: A Nonparametric Approach 1 Web-based Supplementary Material for Dependence Calibration in Conditional Copulas: A Nonparametric Approach Elif F. Acar, Radu V. Craiu, and Fang Yao Web Appendix A: Technical Details The score and

More information

Locally Robust Semiparametric Estimation

Locally Robust Semiparametric Estimation Locally Robust Semiparametric Estimation Victor Chernozhukov Juan Carlos Escanciano Hidehiko Ichimura Whitney K. Newey The Institute for Fiscal Studies Department of Economics, UCL cemmap working paper

More information

IEOR E4703: Monte-Carlo Simulation

IEOR E4703: Monte-Carlo Simulation IEOR E4703: Monte-Carlo Simulation Output Analysis for Monte-Carlo Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com Output Analysis

More information

Preface. 1 Nonparametric Density Estimation and Testing. 1.1 Introduction. 1.2 Univariate Density Estimation

Preface. 1 Nonparametric Density Estimation and Testing. 1.1 Introduction. 1.2 Univariate Density Estimation Preface Nonparametric econometrics has become one of the most important sub-fields in modern econometrics. The primary goal of this lecture note is to introduce various nonparametric and semiparametric

More information

Improving Equity Premium Forecasts by Incorporating Structural. Break Uncertainty

Improving Equity Premium Forecasts by Incorporating Structural. Break Uncertainty Improving Equity Premium Forecasts by Incorporating Structural Break Uncertainty b, c, 1 Jing Tian a, Qing Zhou a University of Tasmania, Hobart, Australia b UQ Business School, The University of Queensland,

More information

Introduction to Algorithmic Trading Strategies Lecture 10

Introduction to Algorithmic Trading Strategies Lecture 10 Introduction to Algorithmic Trading Strategies Lecture 10 Risk Management Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com Outline Value at Risk (VaR) Extreme Value Theory (EVT) References

More information

Financial Econometrics

Financial Econometrics Financial Econometrics Nonlinear time series analysis Gerald P. Dwyer Trinity College, Dublin January 2016 Outline 1 Nonlinearity Does nonlinearity matter? Nonlinear models Tests for nonlinearity Forecasting

More information

On the Behavior of Marginal and Conditional Akaike Information Criteria in Linear Mixed Models

On the Behavior of Marginal and Conditional Akaike Information Criteria in Linear Mixed Models On the Behavior of Marginal and Conditional Akaike Information Criteria in Linear Mixed Models Thomas Kneib Department of Mathematics Carl von Ossietzky University Oldenburg Sonja Greven Department of

More information

A Semiparametric Generalized Ridge Estimator and Link with Model Averaging

A Semiparametric Generalized Ridge Estimator and Link with Model Averaging A Semiparametric Generalized Ridge Estimator and Link with Model Averaging Aman Ullah, Alan T.K. Wan y, Huansha Wang z, Xinyu hang x, Guohua ou { February 4, 203 Abstract In recent years, the suggestion

More information

Predictive Regressions: A Reduced-Bias. Estimation Method

Predictive Regressions: A Reduced-Bias. Estimation Method Predictive Regressions: A Reduced-Bias Estimation Method Yakov Amihud 1 Clifford M. Hurvich 2 May 4, 2004 1 Ira Leon Rennert Professor of Finance, Stern School of Business, New York University, New York

More information

Nonparametric Regression

Nonparametric Regression Nonparametric Regression Econ 674 Purdue University April 8, 2009 Justin L. Tobias (Purdue) Nonparametric Regression April 8, 2009 1 / 31 Consider the univariate nonparametric regression model: where y

More information

Optimizing forecasts for inflation and interest rates by time-series model averaging

Optimizing forecasts for inflation and interest rates by time-series model averaging Optimizing forecasts for inflation and interest rates by time-series model averaging Presented at the ISF 2008, Nice 1 Introduction 2 The rival prediction models 3 Prediction horse race 4 Parametric bootstrap

More information

Panel Threshold Regression Models with Endogenous Threshold Variables

Panel Threshold Regression Models with Endogenous Threshold Variables Panel Threshold Regression Models with Endogenous Threshold Variables Chien-Ho Wang National Taipei University Eric S. Lin National Tsing Hua University This Version: June 29, 2010 Abstract This paper

More information

Chapter 1. GMM: Basic Concepts

Chapter 1. GMM: Basic Concepts Chapter 1. GMM: Basic Concepts Contents 1 Motivating Examples 1 1.1 Instrumental variable estimator....................... 1 1.2 Estimating parameters in monetary policy rules.............. 2 1.3 Estimating

More information

Jackknife Model Averaging for Quantile Regressions

Jackknife Model Averaging for Quantile Regressions Singapore Management University Institutional Knowledge at Singapore Management University Research Collection School Of Economics School of Economics -3 Jackknife Model Averaging for Quantile Regressions

More information

Single Index Quantile Regression for Heteroscedastic Data

Single Index Quantile Regression for Heteroscedastic Data Single Index Quantile Regression for Heteroscedastic Data E. Christou M. G. Akritas Department of Statistics The Pennsylvania State University SMAC, November 6, 2015 E. Christou, M. G. Akritas (PSU) SIQR

More information

Local Polynomial Regression

Local Polynomial Regression VI Local Polynomial Regression (1) Global polynomial regression We observe random pairs (X 1, Y 1 ),, (X n, Y n ) where (X 1, Y 1 ),, (X n, Y n ) iid (X, Y ). We want to estimate m(x) = E(Y X = x) based

More information

Multi-Step Non- and Semi-Parametric Predictive Regressions for Short and Long Horizon Stock Return Prediction

Multi-Step Non- and Semi-Parametric Predictive Regressions for Short and Long Horizon Stock Return Prediction ISSN 1440-771X Department of Econometrics and Business Statistics http://business.monash.edu/econometrics-and-businessstatistics/research/publications Multi-Step Non- and Semi-Parametric Predictive Regressions

More information

The Simple Regression Model. Part II. The Simple Regression Model

The Simple Regression Model. Part II. The Simple Regression Model Part II The Simple Regression Model As of Sep 22, 2015 Definition 1 The Simple Regression Model Definition Estimation of the model, OLS OLS Statistics Algebraic properties Goodness-of-Fit, the R-square

More information

Lecture 3: Statistical Decision Theory (Part II)

Lecture 3: Statistical Decision Theory (Part II) Lecture 3: Statistical Decision Theory (Part II) Hao Helen Zhang Hao Helen Zhang Lecture 3: Statistical Decision Theory (Part II) 1 / 27 Outline of This Note Part I: Statistics Decision Theory (Classical

More information

Lecture 13. Simple Linear Regression

Lecture 13. Simple Linear Regression 1 / 27 Lecture 13 Simple Linear Regression October 28, 2010 2 / 27 Lesson Plan 1. Ordinary Least Squares 2. Interpretation 3 / 27 Motivation Suppose we want to approximate the value of Y with a linear

More information

Comprehensive Examination Quantitative Methods Spring, 2018

Comprehensive Examination Quantitative Methods Spring, 2018 Comprehensive Examination Quantitative Methods Spring, 2018 Instruction: This exam consists of three parts. You are required to answer all the questions in all the parts. 1 Grading policy: 1. Each part

More information

The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series

The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series Willa W. Chen Rohit S. Deo July 6, 009 Abstract. The restricted likelihood ratio test, RLRT, for the autoregressive coefficient

More information

Least Squares Model Averaging. Bruce E. Hansen University of Wisconsin. January 2006 Revised: August 2006

Least Squares Model Averaging. Bruce E. Hansen University of Wisconsin. January 2006 Revised: August 2006 Least Squares Model Averaging Bruce E. Hansen University of Wisconsin January 2006 Revised: August 2006 Introduction This paper developes a model averaging estimator for linear regression. Model averaging

More information

Bagging and Forecasting in Nonlinear Dynamic Models

Bagging and Forecasting in Nonlinear Dynamic Models DBJ Discussion Paper Series, No.0905 Bagging and Forecasting in Nonlinear Dynamic Models Mari Sakudo (Research Institute of Capital Formation, Development Bank of Japan, and Department of Economics, Sophia

More information

Finite Sample Performance of Semiparametric Binary Choice Estimators

Finite Sample Performance of Semiparametric Binary Choice Estimators University of Colorado, Boulder CU Scholar Undergraduate Honors Theses Honors Program Spring 2012 Finite Sample Performance of Semiparametric Binary Choice Estimators Sean Grover University of Colorado

More information

Some Theories about Backfitting Algorithm for Varying Coefficient Partially Linear Model

Some Theories about Backfitting Algorithm for Varying Coefficient Partially Linear Model Some Theories about Backfitting Algorithm for Varying Coefficient Partially Linear Model 1. Introduction Varying-coefficient partially linear model (Zhang, Lee, and Song, 2002; Xia, Zhang, and Tong, 2004;

More information

Vanishing Predictability and Non-Stationary. Regressors

Vanishing Predictability and Non-Stationary. Regressors Vanishing Predictability and Non-Stationary Regressors Tamás Kiss June 30, 2017 For helpful suggestions I thank Ádám Faragó, Erik Hjalmarsson, Ron Kaniel, Riccardo Sabbatucci (discussant), arcin Zamojski,

More information

DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA

DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA Statistica Sinica 18(2008), 515-534 DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA Kani Chen 1, Jianqing Fan 2 and Zhezhen Jin 3 1 Hong Kong University of Science and Technology,

More information

Comparing Nested Predictive Regression Models with Persistent Predictors

Comparing Nested Predictive Regression Models with Persistent Predictors Comparing Nested Predictive Regression Models with Persistent Predictors Yan Ge y and ae-hwy Lee z November 29, 24 Abstract his paper is an extension of Clark and McCracken (CM 2, 25, 29) and Clark and

More information

On the Behavior of Marginal and Conditional Akaike Information Criteria in Linear Mixed Models

On the Behavior of Marginal and Conditional Akaike Information Criteria in Linear Mixed Models On the Behavior of Marginal and Conditional Akaike Information Criteria in Linear Mixed Models Thomas Kneib Institute of Statistics and Econometrics Georg-August-University Göttingen Department of Statistics

More information

Asymptotically Optimal Regression Trees

Asymptotically Optimal Regression Trees Working Paper 208:2 Department of Economics School of Economics and Management Asymptotically Optimal Regression Trees Erik Mohlin May 208 Asymptotically Optimal Regression Trees Erik Mohlin Lund University

More information

Bayesian Semiparametric GARCH Models

Bayesian Semiparametric GARCH Models Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods

More information

The Prediction of Monthly Inflation Rate in Romania 1

The Prediction of Monthly Inflation Rate in Romania 1 Economic Insights Trends and Challenges Vol.III (LXVI) No. 2/2014 75-84 The Prediction of Monthly Inflation Rate in Romania 1 Mihaela Simionescu Institute for Economic Forecasting of the Romanian Academy,

More information

Bayesian Semiparametric GARCH Models

Bayesian Semiparametric GARCH Models Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods

More information

Financial Econometrics Return Predictability

Financial Econometrics Return Predictability Financial Econometrics Return Predictability Eric Zivot March 30, 2011 Lecture Outline Market Efficiency The Forms of the Random Walk Hypothesis Testing the Random Walk Hypothesis Reading FMUND, chapter

More information

An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic

An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic Chapter 6 ESTIMATION OF THE LONG-RUN COVARIANCE MATRIX An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic standard errors for the OLS and linear IV estimators presented

More information

Is there a flight to quality due to inflation uncertainty?

Is there a flight to quality due to inflation uncertainty? MPRA Munich Personal RePEc Archive Is there a flight to quality due to inflation uncertainty? Bulent Guler and Umit Ozlale Bilkent University, Bilkent University 18. August 2004 Online at http://mpra.ub.uni-muenchen.de/7929/

More information

Modification and Improvement of Empirical Likelihood for Missing Response Problem

Modification and Improvement of Empirical Likelihood for Missing Response Problem UW Biostatistics Working Paper Series 12-30-2010 Modification and Improvement of Empirical Likelihood for Missing Response Problem Kwun Chuen Gary Chan University of Washington - Seattle Campus, kcgchan@u.washington.edu

More information

Stock Return Predictability Using Dynamic Mixture. Model Averaging

Stock Return Predictability Using Dynamic Mixture. Model Averaging Stock Return Predictability Using Dynamic Mixture Model Averaging Joseph P. Byrne Rong Fu * October 5, 2016 Abstract We evaluate stock return predictability by constructing Dynamic Mixture Model Averaging

More information

ECON 3150/4150, Spring term Lecture 6

ECON 3150/4150, Spring term Lecture 6 ECON 3150/4150, Spring term 2013. Lecture 6 Review of theoretical statistics for econometric modelling (II) Ragnar Nymoen University of Oslo 31 January 2013 1 / 25 References to Lecture 3 and 6 Lecture

More information

Econometrics I, Estimation

Econometrics I, Estimation Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the

More information

Jackknife Model Averaging for Quantile Regressions

Jackknife Model Averaging for Quantile Regressions Jackknife Model Averaging for Quantile Regressions Xun Lu and Liangjun Su Department of Economics, Hong Kong University of Science & Technology School of Economics, Singapore Management University, Singapore

More information

Introduction to machine learning and pattern recognition Lecture 2 Coryn Bailer-Jones

Introduction to machine learning and pattern recognition Lecture 2 Coryn Bailer-Jones Introduction to machine learning and pattern recognition Lecture 2 Coryn Bailer-Jones http://www.mpia.de/homes/calj/mlpr_mpia2008.html 1 1 Last week... supervised and unsupervised methods need adaptive

More information