Bias in the Mean Reversion Estimator in the Continuous Time Gaussian and Lévy Processes
Aman Ullah (a), Yun Wang (a), Jun Yu (b)

(a) Department of Economics, University of California, Riverside, CA
(b) School of Economics and Sim Kee Boon Institute for Financial Economics, Singapore Management University, Singapore

Abstract

This paper considers the bias of the mean reversion estimator (κ̂) in continuous time Lévy processes. Although an extensive literature has developed methods for estimating the parameters of continuous time diffusion models and for approximating the estimation bias, the effect of nonnormality on the estimation has not been studied. The bias of κ̂ is approximated and two bias expressions are obtained for the Lévy-based Ornstein-Uhlenbeck (OU) process. The approximate bias of κ̂ under normality is also derived as a special case. The two bias expressions indicate that both the skewness and the kurtosis of the Lévy measure affect the bias when the time span and the sample size are not very large. The initial condition, the long term mean (μ), and the volatility parameter (σ) also enter the bias expressions. A bias corrected estimator of κ is proposed. Monte Carlo studies are conducted to compare four different estimators of κ. Simulation results suggest that our proposed estimator of κ outperforms other bias corrected estimators proposed in the literature.

Keywords: Bias, Continuous Time Models, Lévy Process
JEL Classification: C3, C

Address correspondence to: Aman Ullah, Department of Economics, University of California, Riverside, CA; aman.ullah@ucr.edu. Yun Wang, Department of Economics, University of California, Riverside, CA; yun.wang3@ .ucr.edu. Jun Yu, School of Economics, Singapore Management University, Singapore; yujun@smu.edu.sg.
1 Introduction

In recent years, an extensive literature has developed on using diffusion processes to model the dynamic behavior of financial securities. For example, Vasicek (1977) used the following Ornstein-Uhlenbeck (OU) process to model the spot interest rate,

dX_t = κ(μ − X_t)dt + σ dB_t,    (1.1)

where B_t is a standard Brownian motion. This is a Gaussian Markov process and possesses a stationary distribution when κ > 0. In this case, κ is the rate at which the process is pulled towards its long term mean μ. Tang and Chen (2009) considered a more general form of Brownian motion based continuous time model (i.e. a diffusion process),

dX_t = κ(μ − X_t)dt + σ(X_t; θ) dB_t,    (1.2)

where σ(X_t; θ) is the diffusion function of X_t at time t. The Vasicek (1977) model is the special case of this diffusion process with constant diffusion function σ(X_t; θ) = σ. If σ(X_t; θ) = σ√X_t, the diffusion process becomes the CIR model (Cox, Ingersoll, and Ross, 1985). An even more general diffusion process is given by

dX_t = μ(X_t; θ)dt + σ(X_t; θ) dB_t,    (1.3)

with a general drift function μ(X_t; θ). An important special case sets μ(X_t; θ) = μX_t and σ(X_t; θ) = σX_t; Black and Scholes (1973) used it to model the spot price of a stock. All these processes are Brownian motion based. Under some smoothness conditions on the drift function and the diffusion function, the sample path generated from X_t is continuous everywhere. In recent years, however, strong evidence of infinite activity jumps in financial variables has been reported; see, for example, Jacod and Aït-Sahalia (2008). Not surprisingly, continuous time Lévy processes have become increasingly popular, and various Lévy models have been developed in the asset pricing literature (see, for example, Barndorff-Nielsen (1998), Madan, Carr and Chang (1998), and Carr and Wu (2003)). In practice, one can only obtain observations at discrete time points over a finite time span: T (< ∞) is the time span, h (> 0) the sampling interval, and n (= T/h) the number of observations.
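The Vasicek OU process above can be simulated without discretization error, because its transition over a step of length h is exactly Gaussian. A minimal sketch (the function name and the parameter values are our own illustration, not from the paper):

```python
import numpy as np

def simulate_ou(kappa, mu, sigma, x0, h, n, rng):
    """Simulate dX_t = kappa*(mu - X_t)dt + sigma*dB_t at interval h
    using the exact Gaussian transition of the OU process."""
    phi = np.exp(-kappa * h)                                  # AR(1) coefficient e^{-kappa h}
    sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * kappa))    # exact innovation std dev
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(1, n + 1):
        x[i] = mu + (x[i - 1] - mu) * phi + sd * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
path = simulate_ou(kappa=0.5, mu=0.1, sigma=0.2, x0=0.1, h=1 / 12, n=600, rng=rng)
```

Because the transition is exact, the sampled path has no discretization error, in contrast to a crude Euler scheme.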
Based on discrete time observations, different methods have been used to estimate continuous time models. Phillips and Yu (2009) provide an overview of some widely used estimation methods. When the drift function is linear and slowly mean reverting, serious estimation bias is found in the mean reversion parameter (κ) for almost all of these methods. Because this parameter has important implications for asset pricing, risk management and forecasting, how to estimate it accurately has received considerable attention in the literature. For example, Yu (2009) approximates the bias of the MLE of κ when the long run mean is known and the initial condition follows the marginal distribution, under the Gaussian OU process. Tang and Chen (2009)
approximate the bias of the MLE of κ when the long run mean is unknown, under the Gaussian OU process and the CIR model. To reduce the estimation bias of κ, Phillips and Yu (2005) proposed the jackknife method. While jackknifing increases the variance, a carefully designed jackknife procedure can offer substantial improvement in reducing the bias, leading to a decrease in the root mean square error (RMSE). To further reduce the RMSE, Phillips and Yu (2009) proposed the indirect inference method and Tang and Chen (2009) proposed a parametric bootstrap method. The latter methods are simulation based and hence numerically more demanding. The difficulty in estimation is not unexpected because it is related to the finite sample bias problem well documented in the discrete time literature; see, for example, Kendall (1954). However, the magnitude of the bias in κ is so large in practically relevant cases that its implications become very important. For example, Phillips and Yu (2005) show that the bias of the maximum likelihood estimator of κ in the CIR model can be substantial even when many years of data are used (regardless of the sampling frequency). They further report evidence that the bias in the drift term estimation is worse than the bias caused by a misspecification of the diffusion function and the bias caused by discretizing the model with the crudest method, such as the Euler scheme. The simulation results of Phillips and Yu (2005) and Tang and Chen (2009) show that the biases of the long run mean (μ) and of the parameters in the diffusion function are virtually zero. For instance, in the stationary Vasicek model, as Tang and Chen (2009) report, the bias of κ̂ is of order O(T⁻¹), while the biases of the diffusion parameter and the long run mean parameter are of order O(n⁻¹). While the bias in κ̂ is well studied for continuous time diffusion processes, to the best of our knowledge, nothing has been reported on the bias of κ̂ in continuous time Lévy processes.
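The jackknife idea of Phillips and Yu (2005) mentioned above combines the full-sample estimate with estimates from m non-overlapping subsamples, with weights chosen so that the leading O(1/T) bias terms cancel. A hedged sketch (the `estimate` argument and its signature are our own illustration):

```python
def jackknife_kappa(estimate, x, m, h):
    """Jackknife combination in the spirit of Phillips and Yu (2005):
    kappa_jack = m/(m-1) * kappa_full - (1/(m*(m-1))) * sum of subsample estimates.
    `estimate(x, h)` is any estimator of kappa applied to observations x
    sampled at interval h (a hypothetical signature for illustration)."""
    n = len(x)
    size = (n - 1) // m          # number of transitions per subsample
    k_full = estimate(x, h)
    k_subs = [estimate(x[j * size:(j + 1) * size + 1], h) for j in range(m)]
    return (m / (m - 1)) * k_full - sum(k_subs) / (m * (m - 1))

# Toy check: an "estimator" whose bias is exactly proportional to 1/T
# (true value 2.0, bias 1/T) is corrected exactly by the jackknife.
fake = lambda x, h: 2.0 + 1.0 / ((len(x) - 1) * h)
x = list(range(101))             # 100 transitions, h = 1, so T = 100
```

For κ̂ itself the bias is only approximately proportional to 1/T, so the jackknife removes the bias to first order rather than exactly.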
The objective of this paper is to approximate the bias of κ̂ under the Lévy measure and then study the effects of nonnormality on the estimation bias. Quasi maximum likelihood (QML)/OLS is used to estimate κ, which makes an analytical expression for the bias of κ̂ feasible. We present results on the bias under the assumption that the error terms follow a non-normal distribution with finite first eight moments. It is found that the excess kurtosis has a negative effect on the bias of the mean reversion estimator κ̂. The skewness has a positive effect on the bias of κ̂ if the distribution is negatively skewed; otherwise, the effect of the skewness on the bias of κ̂ is negative. Neither skewness nor kurtosis affects the bias as κ → 0 or h → 0. In addition, under the Gaussian OU process the initial condition has a non-monotonic effect on the bias of κ̂, and the bias of κ̂ is a monotonically increasing function of the diffusion parameter. A bias corrected estimator of the mean reversion parameter is proposed. The simulation results show that our proposed estimator generally performs well in terms of bias and mean square error (MSE), especially when κ is small. Small values of κ correspond to the near unit root situation and are empirically relevant for financial variables such as interest rates and volatility. The structure of this paper is as follows. In Section 2, we introduce a continuous time Lévy process and derive the bias in the estimation of the mean reversion parameter. Section 3 reports the
simulation results. Section 4 concludes.

2 Parameter Estimation for Lévy Processes

2.1 Continuous Time Lévy Process

As argued above, while diffusion processes are very useful, empirical evidence points to the need to incorporate jumps with infinite activity. In this paper we extend the Gaussian OU model of Vasicek to a Lévy-based OU model:

dX(t) = κ(μ − X(t))dt + σ dL(t),    (2.1)

where (L(t))_{t≥0} is a Lévy process defined on (Ω, F, {F_t}, P) with L(0) = 0 that satisfies the following three properties:

1. Independent increments: for every increasing sequence of times t_0, …, t_n, the random variables L(t_0), L(t_1) − L(t_0), …, L(t_n) − L(t_{n−1}) are independent;
2. Stationary increments: the law of L(t + h) − L(t) does not depend on t;
3. Stochastic continuity: for all ε > 0, lim_{h→0} P(|L(t + h) − L(t)| ≥ ε) = 0.

For any given t, the probability of seeing a jump at t is zero; in other words, jumps happen at random times. Obviously, Brownian motion is a special case of the Lévy process, and consequently the Vasicek model is a special case of Model (2.1). Other well known examples include the Poisson process, the gamma process, the variance gamma process, and the α-stable process. While Brownian motion has a continuous sample path, it does not allow for any jumps. The Poisson process allows for jumps, but only of finite activity. General Lévy processes allow an infinite number of jumps within any time interval, and they also allow non-normal increments. The exact discrete time model corresponding to (2.1) is

X_ih = e^{−κh} X_{(i−1)h} + μ(1 − e^{−κh}) + σ√((1 − e^{−2κh})/(2κ)) ε_i,    (2.2)

where the distribution of ε_i depends on the specification of the Lévy measure of L(t). This is a discrete time AR(1) model with a possibly non-normal error term. When L(t) is Brownian motion, ε_i ~ N(0, 1). If L(t) is the variance gamma process of Madan and Seneta (1990) (i.e.
L(t) = B(γ(t; 1, ν)), where γ(t; 1, ν) is a gamma process whose unit-time increment has mean 1 and variance ν), then ε_i follows the variance gamma distribution, whose density and moment generating function are given, respectively, by

f(x) = ∫₀^∞ (1/√(2πg)) e^{−x²/(2g)} · g^{1/ν − 1} e^{−g/ν} / (ν^{1/ν} Γ(1/ν)) dg    (2.3)

and

mgf(u) = (1 − νu²/2)^{−1/ν},    (2.4)

where Γ is the gamma function. The variance gamma distribution is normal conditional on a variance that is distributed as a gamma variate with mean 1 and variance ν. Moments of all orders exist, with mean 0, variance 1, and kurtosis 3 + 3ν. Since the excess kurtosis 3ν is determined by the parameter ν, ν measures the degree of tail thickness. When L(t) is Brownian motion and the initial condition is X(0) = x_0, the exact discrete time model is

X_ih = e^{−κh} X_{(i−1)h} + μ(1 − e^{−κh}) + σ√((1 − e^{−2κh})/(2κ)) ε_i,  ε_i ~ N(0, 1),  X_0 = x_0,    (2.5)

where φ = e^{−κh}. As κ → 0, this discrete time AR(1) process approaches a unit root in the limit. To simplify notation, we write X_ih as X_i. Equation (2.5) indicates that the transition density is

X_i | X_{i−1} ~ N( X_{i−1} e^{−κh} + μ(1 − e^{−κh}), σ²(1 − e^{−2κh})/(2κ) ).    (2.6)

Since the conditional distribution is known, it is easy to obtain the maximum likelihood estimator (MLE), or equivalently the ordinary least squares (OLS) estimator, of φ:

φ̂ = [ n Σᵢ Xᵢ Xᵢ₋₁ − Σᵢ Xᵢ Σᵢ Xᵢ₋₁ ] / [ n Σᵢ Xᵢ₋₁² − (Σᵢ Xᵢ₋₁)² ],    κ̂ = −ln φ̂ / h.    (2.7)

By taking a Taylor expansion of ln φ̂ around φ to the second order, we obtain

κ̂ − κ = −(1/h)(ln φ̂ − ln φ) = −(φ̂ − φ)/(hφ) + (φ̂ − φ)²/(2hφ²) + ⋯,

so that

E(κ̂) − κ = −E(φ̂ − φ)/(hφ) + [Var(φ̂) + (E(φ̂ − φ))²]/(2hφ²) + o(T⁻¹).    (2.8)

For general Lévy processes, the transition density is no longer normal. As a result, φ̂, and hence κ̂, is not an ML estimator. However, φ̂ is a quasi maximum likelihood estimator (QMLE) and can be obtained by OLS; so can κ̂. While QMLE/OLS is not as efficient as the MLE, it is analytically more tractable. To approximate the bias of κ̂, we follow Bao and Ullah (2009) and make some assumptions about ε_i. In particular, we assume the ε_i are i.i.d. and follow a distribution with finite first eight moments:

m₁ = 0,  m₂ = 1,  m₃ = γ₁,  m₄ = γ₂ + 3,  with m₅, …, m₈ finite,    (2.9)
where γ₁ and γ₂ are Pearson's measures of skewness and excess kurtosis of the distribution; the deviations of these moments from their normal counterparts can be regarded as measures of departure from normality, and for a normal distribution they all equal zero.

2.2 Bias approximation when the long run mean is known

Now assume that μ = 0 and that it is known; the exact discrete time model of the Lévy process can then be written as

X_ih = e^{−κh} X_{(i−1)h} + σ√((1 − e^{−2κh})/(2κ)) ε_i.    (2.10)

Bao and Ullah (2007, 2009) and Bao (2007) give the approximate bias and MSE of the OLS estimator of the AR(1) model without intercept and without assuming normally distributed error terms:

B(φ̂) = −2φ/n + o(n⁻¹),

together with an MSE expansion M(φ̂) = (1 − φ²)/n + O(n⁻²) whose second order term involves φ, the fixed initial value x_0, the skewness γ₁, and the excess kurtosis γ₂. In the normal case, ε_i ~ i.i.d. N(0, 1), substituting these results into (2.8) gives, for fixed x_0,

E(κ̂ − κ) = (e^{2κh} + 3)/(2T) + (1/(nT)) × [a term in κh and x_0/σ] + o(T⁻¹);

when κ → 0, E(κ̂ − κ) ≈ 2/T + 3/(Tn), and when h → 0 (n → ∞), E(κ̂ − κ) ≈ 2/T. If we allow for nonnormality, i.e. ε_i ~ i.i.d.(0, 1) with skewness γ₁ and excess kurtosis γ₂, these coefficients matter for the approximate MSE up to O(T⁻¹); the formula for the bias for fixed x_0 then acquires additional O(1/(nT)) contributions proportional to γ₁ and γ₂, while the leading term (e^{2κh} + 3)/(2T) and the two limits are unchanged.
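The QML/OLS step described above, an AR(1) regression with intercept followed by κ̂ = −ln φ̂ / h, can be sketched as follows (the parameter values are our own illustration; in finite samples κ̂ is biased upward, as the theory above predicts):

```python
import numpy as np

def estimate_kappa_ols(x, h):
    """QML/OLS estimate of kappa: regress X_i on X_{i-1} with an intercept
    (unknown long-run mean), then invert phi-hat = e^{-kappa h}."""
    y, z = x[1:], x[:-1]
    n = len(y)
    phi_hat = (n * (y * z).sum() - y.sum() * z.sum()) / (n * (z * z).sum() - z.sum() ** 2)
    return -np.log(phi_hat) / h

# Simulate the exact discrete model and estimate kappa (assumed values).
rng = np.random.default_rng(1)
kappa, mu, sigma, h, n = 0.5, 0.1, 0.2, 1.0 / 12.0, 6000
phi = np.exp(-kappa * h)
sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * kappa))
x = np.empty(n + 1)
x[0] = mu
for i in range(1, n + 1):
    x[i] = mu + (x[i - 1] - mu) * phi + sd * rng.standard_normal()
k_hat = estimate_kappa_ols(x, h)   # finite-sample estimate of kappa
```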
We summarize the above results in Theorem 2.1.

Theorem 2.1 Under Model (2.10) with a known mean, nonnormal error terms with moments given in (2.9), and fixed x_0, the approximation to the bias of the mean reversion estimator is given by

E[κ̂ − κ | x_0] = (e^{2κh} + 3)/(2T) + (1/(nT)) × [a term in κh, x_0/σ, the skewness γ₁, and the excess kurtosis γ₂] + o(T⁻¹).    (2.11)

Furthermore, when κ → 0,

E[κ̂ − κ | x_0] ≈ 2/T + 3/(Tn),    (2.12)

and when h → 0 (n → ∞),

E[κ̂ − κ | x_0] ≈ 2/T.    (2.13)

Corollary 2.2 Under the Lévy process model (2.10) with a known mean, nonnormal error terms with moments given in (2.9), and random nonnormal x_0 with mean 0 and variance σ²/(2κ), the approximation to the bias of the mean reversion estimator is

E(κ̂ − κ) = (e^{2κh} + 3)/(2T) + (1/(nT)) × [a term in κh, γ₁, and γ₂] + o(T⁻¹).    (2.14)

Furthermore, when κ → 0,

E(κ̂ − κ) ≈ 2/T + 6/(Tn),    (2.15)

and when h → 0 (n → ∞),

E(κ̂ − κ) ≈ 2/T.    (2.16)

In Theorem 2.1 the result on the bias is obtained conditional on x_0, that is, E[κ̂ − κ | x_0]. When x_0 is assumed random with mean 0 and variance σ²/(2κ), unconditionally E(κ̂ − κ) = E_{x_0}[E(κ̂ − κ | x_0)]; the result in Corollary 2.2 then follows.

Corollary 2.3 Under the Lévy process model (2.10) with a known mean, normal error terms (γ₁ = 0 and γ₂ = 0), and fixed x_0, the approximation to the bias of the mean reversion estimator is

E[κ̂ − κ | x_0] = (e^{2κh} + 3)/(2T) + (1/(nT)) × [a term in κh and (x_0/σ)²] + o(T⁻¹),    (2.17)

with the same limits (2.12) and (2.13) as in Theorem 2.1.
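Taking the known-mean leading term reconstructed above, (e^{2κh} + 3)/(2T), both limits can be checked numerically: the near unit root limit κ → 0 and the high frequency limit h → 0 each reduce it to 2/T. A quick check (values are our own):

```python
import math

def lead_bias_known_mean(kappa, h, T):
    """First-order O(1/T) bias of kappa-hat with a known long-run mean,
    as reconstructed in Theorem 2.1: (e^{2*kappa*h} + 3) / (2*T)."""
    return (math.exp(2.0 * kappa * h) + 3.0) / (2.0 * T)

T = 50.0
near_unit_root = lead_bias_known_mean(1e-12, 1.0 / 12.0, T)   # kappa -> 0
high_frequency = lead_bias_known_mean(0.5, 1e-12, T)          # h -> 0
```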
8 Corollary.4 Under the Lévy process model (.) with a known mean, normal error term, and random normal x with mean and varinace = (), the approximation to the Bias of the Mean Reversion Estimator as follows: E(b ) = eh + 3 T + nt [7 eh ] + o(t ) (.) Furthermore, when! When h! (n! ) E(b ) T + 6 T n (.) E(b ) T : (.) Remark.. Here we consider the bias of AR() coe cient up to O(n ) and MSE of AR() coe cient up to O(n ) to obtain our new results in Theorem. and Corollary. for nonnormality. Corollary.3 and Corollary.4 give the result under normality. In Theorem. and Corollary.3 the results on Bias(b) is obtained conditional on x : In Corollary. and Corollary.4 the results on Bias(b) is obtained unconditional on x : Yu (9) derives result for the case of normality and x N(; = ()). His result is E(b ) = eh + 3 T ( e nh ) T n( e h ) (.3) where the rst term on the right handside is the same as the rst term on the right handside in (.), but the second term in his result is di erent from that in (.). In addition, we consider both OU process with a known mean and with an unknown mean, however, Yu (9) only discusses OU process with a known mean. In addition, an important di erence between (.3) and (.) is that the former goes to as goes to but not the latter. h i Remark.. The second term nt 4 e h e h x in (.7) incorporates the intial condition x ;which implies that the the intial condition will a ect the bias of the mean reversion estimator. Notice that if x > < ; which implies that the bias is a decreasing function of the start data point when x > ; if x < > : Remark..3 Result (.9) gives the bias of the mean reversion estimator when! (! ) near unit root case. The asymptotic result given in (.8) which considers h! (n! );i.e., very high data frequency, shows that the bias of the mean reversion estimator only depends on the bias of the coe cient of the corresponding AR() model as h!. Remark..4 Result (.) 
shows that, the initial condition x, skewness and excess kurtosis all a ect the bias of b. We note < ; which imply that the bias is a monotonically decreasing function of the excess kurtosis. If x > < ; 8
9 which implies that the bias is a decreasing function of the start data point when x > ; if x < > : If r > < ; if r < > :In > implies that the bias is a monotonically increasing function of the variance of error terms : Remark..5 (.3) is the special case of (.) with r = ; and r = : Comparing both cases with normality and without normality, when!, or h! ; we get same bias. In the case of near unit root or very high frequency data and nonnormality, the mean reversion and start data point do not a ect the bias of b: Remark..6 (.4) considers random x and nonnormality. (.) is the special case of (.4) with r = ; and r = :Comparing both cases with normality and without normality, when!, or h! ; we get same bias. That is, in the cases of near unit root or very high frequency data, nonnormality does not a ect the bias of : Remark..7 Comparing (.) and (.5), as! ; the bias under the process with an unknown mean is bigger than that under the process with a known mean. Comparing (.3) and (.6), as h! ; the bias in model (.5) is twice as high as the one in model (.). Both theorems represent that the bias doesn t disappear when! or h! (n! ) unless T! :.3 Bias approximation when the long run mean is unknown For the discrete time AR() model with an unknown intercept, the second-order bias up to O(n ) of the OLS estimator ^; is B(^) = +3 n given by Bao and Ullah (7):The MSE up to O(n ) given by Bao and Ullah (9) is as follows M(^) = n + n [3 + 4r ( ) + 3 r ( )] + o(n ) ( )x where x is the initial condition, r is the skewness, and r is the excess kurtosis. In a special case when the error term is normal, r = r = and M(^) = n + n "3 + + # ( )x + o(n ): Substituting above results into (), the bias of the mean reversion estimator in the normality case 9
is

E(κ̂) − κ = (e^{2κh} + 2e^{κh} + 5)/(2T) + (1/(Tn)) × [a term in κh and ((μ − x_0)/σ)²] + o(T⁻¹),

where φ = e^{−κh}. If we allow for nonnormality, i.e. ε_i ~ i.i.d.(0, 1) with skewness γ₁ and excess kurtosis γ₂, these coefficients matter for the approximate MSE up to O(T⁻¹), and the bias of κ̂ for fixed x_0 acquires additional O(1/(Tn)) contributions proportional to γ₁ and γ₂, while the leading term (e^{2κh} + 2e^{κh} + 5)/(2T) is unchanged. The estimation bias of the mean reversion parameter for the Lévy process with unknown mean is given in Theorem 2.5, which allows the error term to follow a nonnormal distribution.

Theorem 2.5 Under the Lévy process model (2.5) with an unknown mean, nonnormal error terms with moments given in (2.9), and initial condition x_0, the approximation to the bias of κ̂ can be written as

E[κ̂ − κ | x_0] = (e^{2κh} + 2e^{κh} + 5)/(2T) + (1/(Tn)) × [a term in κh, ((μ − x_0)/σ)², γ₁, and γ₂] + o(T⁻¹).

Furthermore, when κ → 0,

E[κ̂ − κ | x_0] ≈ 4/T + O(1/(Tn)),

and when h → 0 (n → ∞),

E[κ̂ − κ | x_0] ≈ 4/T.
11 Corollary.6 Under the Lévy process model (.5) with an unknown mean, a nonnormal error term with moments given in (.9), and a random initial condition whose mean is and varinace =(), the approximation to the bias of b is as follows: E(b ) = T (eh + e h + 5) + f3 + eh e h T n 4re h ( e h )( + e h ) + e h + e h r (e h )g + o(t ) (.7) Furthermore, when! When h! (n! ) E(b ) 4 T + 6 T n (.8) E(b ) 4 T : (.9) In Theorem.5 the result on Bias(b) is obtained conditional on x ;that is E[b j x ]:When x is assumed to be random with mean and varinace = (), unconditionally E(b ) = E x [E(b ) j x ]:The result in the Corollary.6 then follows. Corollary.7 Under Lévy process model (.5) with an unknown mean, normal error term (r = ; and r = ), and xed x, the approximation to the bias of b is as follows: E[b j x ] = T (eh + e h + 5) + T n 3 + e h e h ( x ) + o(t ) (.3) Furthermore, when! When h! (n! ) E[b j x ] 4 T + 33 T n (.3) E[b j x ] 4 T : (.3) Corollary.8 Under the Lévy process model (.5) with an unknown mean, normal error term (r = ; and r = ), and random normal x with mean and varinace =, the approximation to the Bias of the Mean Reversion Estimator as follows: E(b ) = T (eh + e h + 5) + T n (3 + eh e h ) + o(t ) (.33) Furthermore, when! When h! (n! ) E(b ) 4 T + 6 T n (.34) E(b ) 4 T : (.35)
12 Remark.. We consider the bias of AR() coe cient up to O(n ); and MSE of AR() coe cient up to O(n ) to obtain our new result in Theorem.5 and Corollary.6 under nonnormality, and Corollary.7 and Corollary.8 under normality. In Theorem.5 and Corollary.7 the results on Bias(b) are obtained conditional on x : In Corollary.6 and Corollary.8 the results on Bias(b) are unconditional. Under normality and random x, Tang and Chen (9) consider both the bias and MSE of AR() coe cient up to O(n ) only: Therefore, their bias result for b is E(b) = T (eh + e h + 5) (.36) which is the rst term in the left handside of (.33). Therefore, our result in Theorem. under Lévy-based OU process with an unknown mean, provides an improved approximation of bias and di ers from Tang and Chen (9). In addition, our paper also derives the results (Theorem. and Corollary.-.4) under Lévy-based OU process with a known mean which are not discussed by Tang and Chen (9). Remark.. We note that the second term T n (3+eh e h ( x ) ) in (.3) incorporates both the mean parameter and the data start point x : If x is xed and > x ; < ; which implies that the higher lowers the bias. If x is xed and < x ; > ; that is, higher gives higher bias. We also notice that the bias is not a monotonic function of starting data point, if > x > ; < : When T n is very large, the e ects of and x on bias are negligible. However, under a special case of stationary distribution, x can be replaced by its mean.in that case x is zero and the bias term becomes free from and : Remark..3 Result (.3) gives the bias of the mean reversion estimator when b! (! ) near unit root case. The result given in (.3) which considers h! (n! 
);i.e., very high data frequency, shows that the bias of the mean reversion estimator only depends on the bias of the coe cient of the corresponding AR() model, since the rst term actually arises from the bias of ^: Remark..4 Result (.4) shows that not only the mean and the data start point x a ect the bias, but also skewness and excess kurtosis a ect it as well. We note < ; which imply the bias is the monotonically decreasing function of the excess kurtosis. skewness, if r < ; if r > : Remark..5 Corollary.6 is the special case of Theorem.5 with r = ; and r = : Comparing both cases with normality and without normality, when!, or h! ; we get same bias. That is, in the cases of near unit root or very high frequency data, nonnormality, and start data point do not a ect the bias of b: For
13 Remark..6 Corollary.6 considers random x and nonnormality. Corollary.8 is the special case of Corollary.6 with r = ; and r = :Both the results in corollaries.6 and.8 are free from x and : Comparing both cases with normality and without normality, when!, or h! ; we get same bias. That is, in the cases of near unit root or very high frequency data, nonnormality does not a ect the bias of b: Remark..7 Theorem.5 also shows that the bias depends on the true value of the mean reversion parameter. When! or h! (n! ); the bias doesn t disappear unless T!, which is consistent with studies that bias still exists for the large sample size. 3 Bias Approximations with Higher Order Bias and MSE This section shows the bias approximation by considering both the Bias and MSE of AR() coe cient up to O(=n ): Bao (7) gave the approximate bias and MSE of the OLS estimator for the AR() model without intercept and with general error term as follows: B(^) = n + n 4 + x MSE(^) = n + n r ( + )x r + r ( )( ) ( )x 4r ( ) 3 r ( ) + o(n ); + o(n ) where x is xed. Also, he gave the approximate bias and MSE of the OLS estimator for the AR() model with intercept and with general error term as follows: " + 3 B(^) = n n ( ) +o(n ) MSE(^) = n +o(n ) + n "3 + + ( )x # + 4r ( ) 3 + r # ( )x 4r( ) 3 r ( ) where x is xed. Along the line of Bao (7), we obtain the bias approximations of LS estimator of the mean reversion parameter for both known mean and unknown mean Lévy processes, which are presented in the following theorems and corollaries. Theorem 3. Under Model (.) with a known mean, nonnormal error term with moments given in (.9), and xed x, the approximation to the Bias of the Mean Reversion Estimator is given as follows: E[b j x ] = eh + 3 T + nt [6 e h x (e h + 3) eh (e h ) 4r (3 + e h + 3e h + e h ) e h + e h (eh + )x q e h k r (e h + 3)] + o((nt ) ) (3.)
Furthermore, when κ → 0,

E[κ̂ − κ | x_0] ≈ 2/T + (1/(Tn)) × [a term in x_0, h, γ₁, and γ₂],    (3.2)

and when h → 0 (n → ∞),

E[κ̂ − κ | x_0] ≈ 2/T − [a term in x_0 and σ]/T.    (3.3)

Corollary 3.2 Under the Lévy process model (2.10) with a known mean, nonnormal error terms with moments given in (2.9), and random nonnormal x_0 with mean 0 and variance σ²/(2κ), the higher order bias approximation retains the leading term (e^{2κh} + 3)/(2T), while its O(1/(nT)) correction no longer involves x_0 but contains the factor e^{2κh}(e^{2κh} + 3)/(e^{2κh} − 1).    (3.4) Furthermore, when κ → 0,

E(κ̂ − κ) → ∞,    (3.5)

since e^{2κh}(e^{2κh} + 3)/(e^{2κh} − 1) → ∞, and when h → 0 (n → ∞), E(κ̂ − κ) approaches a constant smaller than 2/T.    (3.6)

Theorem 3.3 Under the Lévy process model (2.5) with an unknown mean, nonnormal error terms with moments given in (2.9), and initial condition x_0, the higher order approximation to the bias of κ̂ retains the leading term (e^{2κh} + 2e^{κh} + 5)/(2T) and has an O(1/(Tn)) correction involving ((μ − x_0)/σ)², γ₁², γ₂, and the factor (e^{2κh} − 6e^{κh} + 8)/(e^{κh}(e^{κh} − 1)).    (3.7) Furthermore, when κ → 0,

E[κ̂ − κ | x_0] → ∞,    (3.8)

since (e^{2κh} − 6e^{κh} + 8)/(e^{κh}(e^{κh} − 1)) → ∞, and when h → 0 (n → ∞),

E[κ̂ − κ | x_0] ≈ 4/T + [a term in (μ − x_0) and σ]/T.    (3.9)
Corollary 3.4 Under the Lévy process model (2.5) with an unknown mean, nonnormal error terms with moments given in (2.9), and a random initial condition with mean μ and variance σ²/(2κ), the higher order bias approximation retains the leading term (e^{2κh} + 2e^{κh} + 5)/(2T) and has an O(1/(Tn)) correction involving γ₁, γ₂, and a factor that diverges as κ → 0.    (3.10) Furthermore, when κ → 0,

E(κ̂ − κ) → ∞,    (3.11)

and when h → 0 (n → ∞), E(κ̂ − κ) approaches a constant larger than 4/T.    (3.12)

Remark 3.1 Here we consider the bias and MSE of the AR(1) coefficient up to O(1/n²) to obtain our new results in Theorem 3.1 and Corollary 3.2 for the Lévy process with known mean. The bias approximations for the OU process with normally distributed errors can be developed straightforwardly by substituting γ₁ = 0 and γ₂ = 0 into the above results. As in the previous results, the initial value, the variance of the error term, the skewness, and the excess kurtosis enter the higher order bias approximations. Compared with Theorem 2.1, the second term differs: with the higher order bias of the AR(1) coefficient, the estimation bias approximation of κ̂ has a cross product term of x_0 and γ₁. In addition, the approximate estimation bias is a non-monotonic function of the initial value and the skewness. The excess kurtosis still has a negative effect on the estimation bias, and its negative impact is larger than in Theorem 2.1. Notice that the initial value, the skewness, and the excess kurtosis still affect the limiting bias approximation as κ → 0, and the variance of the error term and the initial value also enter the limiting bias approximation (3.3) as h → 0 (n → ∞).

Remark 3.2 The second term in Corollary 3.2 differs from the one in Corollary 2.2. Corollary 3.2 shows that the higher order bias approximation is still a non-monotonic function of the skewness, and that the kurtosis of the error distribution has a larger negative effect on the estimation bias than in Corollary 2.2.
Considering the higher order bias, we find that the limiting bias approximation explodes as κ → 0, and goes to a smaller constant than in Corollary 2.2 as h → 0 (n → ∞).

Remark 3.3 With the higher order bias, the second terms in the results obtained for the Lévy process with unknown mean and fixed initial value in Theorem 3.3 differ from those
16 in Theorem.5. The marginal e ects of the long run mean, initial data point, skewness and kurtosis in former are obviously di erent from those in latter. The squared skewness and kurtosis in former has larger negative impact on the estimation bias compared with the latter. In addition, notice that the limit conditional estimation bias explodes as! ; and the limit conditional estimation bias is a function of the long run mean, initial data point and the variance. Remark 3.4 In comparison with Corollary.6, the results of Corollary 3.4 di er in the second term. Corollary 3.4 which represents the bias approximation for the Lévy process with unknown mean and random initial data point, shows higher marginal impact of squared skewness and kurtosis than the former. And as! ; the limit bias approximation in the latter explodes, while the one in the former is constant. In addition, as h! (n! ), the limit bias approximation in Corollary 3.4 is a function of itself and larger than the result in Corollary.6. 4 Simulation Results In this section, we perform Monte Carlo simulations to illustrate the nite sample performance of our bias correction in comparison with OLS and the estimator corrected by Yu (9) and Tang and Chen (9) in terms of mean, relative bias (r. bias %), mean squared error (MSE), and root mean squared error (RMSE). We consider both Lévy Processes with a known mean and with an unknown mean. All simulation results come from, repetitions. 4. Bias Correction for Lévy Process with a known mean under nonnormality Here we consider four estimators for Lévy process with a known mean under nonnormality: OLS, Yu (9) estimator (Yu) corrected by the bias given in Remark.., the estimator (UWY) corrected by the bias corresponding to our Theorem., and the estimator (UWYH) corrected by the bias corresponding to Theorem 3.. Both xed initial value case and random initial value case are considered. 
To obtain non-normal error terms, we first generate errors from a gamma distribution with mean v and variance v, where v = 0.5, 1, respectively; we then transform the generated errors to satisfy the moment assumptions in (2.9) and generate the discrete time observations under model (2.10). An extensive literature shows that securities data generally have a unit root or near unit root, so κ usually takes small values. Hence, we consider four small values, κ = 0.1, 0.5, 1, 3, and we set T = 5 with several sampling intervals h ranging from monthly to daily. For the fixed x_0 case, we set the starting value to 0. For the random x_0 case, we generate x_0 from the variance gamma distribution. Figures 1 and 2 plot the true bias and the biases according to Yu (2009), Theorem 2.1, and Theorem
3.1 for Lévy processes with a known mean. The red line represents the true bias, the black dashed line the Yu bias, the green line the UWY bias based on Theorem 2.1, and the blue dashed line the UWYH bias based on Theorem 3.1. For the random x_0 case, Figure 1 shows that when κ is smaller than 0.5 both the Yu and the UWYH bias approximations drop below the true bias, while when κ is greater than 0.5 both match the true bias very well. The green line shows that the UWY bias approximation lies slightly above the true bias. As κ grows, all three bias approximations approach the true bias more closely. For the fixed x_0 case, Figure 2 shows that all bias approximations have very small discrepancies from the true bias, especially for κ greater than 0.5. The Yu bias approximation is closest to the true bias for κ less than 0.5; for κ greater than 0.5, our bias approximations match the true bias better than the Yu bias does.

[Figure 1: Lévy process with a known mean and random x_0, bias against κ. Figure 2: Lévy process with a known mean and fixed x_0, bias against κ.]

Our findings in this case, based on the tables, include the following: first, Yu almost always has the smallest bias and RMSE among the three corrected estimators when κ = 0.1; second, when κ is moderately larger (κ = 0.5, 1, 3), UWYH has the smallest RMSE, and the Yu estimator provides a slightly smaller bias than our estimator does; third, when κ = 3, UWYH has both the smallest bias and the lowest RMSE. It is not difficult to see that Yu and UWYH perform very similarly: when κ = 0.1, Yu performs slightly better than UWYH in the sense of having lower bias and RMSE; when κ is moderately larger (κ = 0.5, 1, 3), UWYH is slightly more efficient than Yu in the sense of having lower RMSE.
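The nonnormal errors described above can be produced by standardizing gamma draws to mean 0 and variance 1, consistent with the moment assumptions of the theory: a Gamma(v, 1) variate standardized this way has skewness 2/√v and excess kurtosis 6/v, so smaller v means stronger nonnormality. A minimal sketch (sample size and seed are our own):

```python
import numpy as np

def standardized_gamma_errors(v, size, rng):
    """Draw Gamma(v, 1) innovations and standardize to mean 0, variance 1.
    The standardized draws have skewness 2/sqrt(v) and excess kurtosis 6/v."""
    g = rng.gamma(shape=v, scale=1.0, size=size)
    return (g - v) / np.sqrt(v)   # Gamma(v, 1) has mean v and variance v

rng = np.random.default_rng(7)
eps = standardized_gamma_errors(v=0.5, size=200_000, rng=rng)
```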
Table 4.1.1: Bias correction for a known-mean Lévy process under fixed x₀, v = 0.5. For each of OLS, Yu, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]
Table 4.1.2: Bias correction for a known-mean Lévy process under fixed x₀, v = 1. For each of OLS, Yu, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]
Table 4.1.3: Bias correction for a known-mean Lévy process under random x₀, v = 0.5. For each of OLS, Yu, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]
Table 4.1.4: Bias correction for a known-mean Lévy process under random x₀, v = 1. For each of OLS, Yu, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]

4.2 Bias Correction for the Lévy Process with an Unknown Mean under Nonnormality

In this case we consider four estimators: OLS; the Tang and Chen (2009) estimator (TC), corrected by the bias given in the corresponding remark; and the estimators UWY and UWYH, corrected by the biases corresponding to our Theorem 2.5 and Theorem 3.3, respectively, for the Lévy process with an unknown mean under nonnormality. As in the previous cases, the error term is first generated from a gamma distribution with mean 1 and variance v, where v = 0.5 and 1 respectively, and the long-term mean θ and the volatility σ are held fixed across experiments. In the fixed-initial-value case, the starting point is fixed at a constant; in the random-initial-value case, the starting point is generated from a gamma distribution with mean 1 and variance v, v = 0.5 and 1 respectively. Tables 4.2.1–4.2.4 list the simulation results for this case. Figures 3 and 4 plot the true bias together with the bias approximations of Tang and Chen (2009), Theorem 2.5, and Theorem 3.3 for Lévy processes with an unknown mean. The red
line represents the true bias, the black dashed line the TC bias according to Tang and Chen (2009), the green line the UWY bias based on Theorem 2.5, and the blue dashed line the UWYH bias based on Theorem 3.3.

Figure 3: Lévy process with an unknown mean, random x₀ (bias against κ). Figure 4: Lévy process with an unknown mean, fixed x₀ (bias against κ).

In the random-x₀ case, all three bias approximations lie some distance from the true bias. Among the three approximations shown in Figure 3, UWYH performs best and also captures the curvature of the bias when κ is small. Figure 4 shows the performance of the three approximations for the Lévy process with an unknown mean and fixed x₀. When κ is close to zero, the UWYH approximation rises dramatically; when κ is greater than 0.5, UWYH has the smallest distance from the true bias among the three. As in the known-mean case, the simulation results for the Lévy process with an unknown mean support the usefulness in finite samples of UWYH, the estimator based on the bias correction under nonnormality, for both fixed and random x₀. The simulations in Tables 4.2.1–4.2.4 show that UWYH always has the smallest bias and mean squared error, with the single exception of κ = 0.1, for which UWY has the smallest bias and MSE. These results accord with Figures 3 and 4. In short, our estimators UWY and UWYH improve on OLS and TC; in particular, for κ = 0.5, 1.0, 3.0, UWYH is the most efficient estimator, having the smallest bias and the lowest RMSE.
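For the unknown-mean case the AR(1) regression includes an intercept. A minimal sketch of that estimator and of the summary statistics the tables report (mean, relative bias in percent, MSE, RMSE) follows; the Gaussian shocks and the parameter values here are illustrative assumptions used only to show the mechanics, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)

def kappa_ols_unknown_mean(x, h):
    """OLS AR(1) regression with an intercept (long-run mean unknown),
    mapped to kappa-hat = -log(phi-hat)/h."""
    y, z = x[1:], x[:-1]
    zc = z - z.mean()
    phi_hat = (zc @ (y - y.mean())) / (zc @ zc)
    return -np.log(phi_hat) / h

def mc_summary(estimates, kappa):
    """Summary statistics of the kind the tables report: mean,
    relative bias (%), MSE, and RMSE across replications."""
    e = np.asarray(estimates)
    mse = np.mean((e - kappa) ** 2)
    return {"mean": e.mean(),
            "r.bias(%)": 100.0 * (e.mean() - kappa) / kappa,
            "MSE": mse,
            "RMSE": np.sqrt(mse)}

# Illustrative run with Gaussian shocks and assumed parameter values.
kappa, theta, sigma, T, h = 1.0, 0.1, 0.1, 5.0, 1.0 / 52
n = int(T / h)
phi = np.exp(-kappa * h)
c = theta * (1.0 - phi)                      # intercept of the exact AR(1) form
s = sigma * np.sqrt((1.0 - phi**2) / (2.0 * kappa))
ests = []
for _ in range(2000):
    x = np.empty(n + 1)
    x[0] = theta
    for t in range(n):
        x[t + 1] = c + phi * x[t] + s * rng.standard_normal()
    ests.append(kappa_ols_unknown_mean(x, h))
print(mc_summary(ests, kappa))
```

With an estimated intercept the upward bias in κ̂ is larger still, which is why the tables treat the known-mean and unknown-mean cases separately.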
Table 4.2.1: Bias correction for an unknown-mean Lévy process under fixed x₀, v = 0.5. For each of OLS, TC, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]
Table 4.2.2: Bias correction for an unknown-mean Lévy process under fixed x₀, v = 1. For each of OLS, TC, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]
Table 4.2.3: Bias correction for an unknown-mean Lévy process under random x₀, v = 0.5. For each of OLS, TC, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]
Table 4.2.4: Bias correction for an unknown-mean Lévy process under random x₀, v = 1. For each of OLS, TC, UWY, and UWYH, the table reports the mean, relative bias (%), MSE, and RMSE for T = 5; h = 1/12, 1/52, 1/252; κ = 0.1, 0.5, 1.0, 3.0. [Numerical entries omitted.]

To sum up, our findings are as follows. (i) For Lévy processes with an unknown mean, regardless of whether x₀ is fixed or random, under nonnormality the MSE/RMSE of UWY is always smaller than that of the TC estimator; and UWYH, which accounts for the higher-order bias of the AR(1) coefficient, has the smallest bias and MSE/RMSE when κ = 0.5, 1.0, 3.0. (ii) For Lévy processes with a known mean, Yu and UWYH perform similarly: when κ = 0.1, Yu has slightly smaller bias and MSE/RMSE than UWYH, whereas for κ = 0.5, 1.0, 3.0, UWYH performs slightly better in the sense of a somewhat lower MSE. (iii) Figures 1–4 show that the UWYH bias approximation deviates substantially from the true bias when κ is very close to zero; as κ increases, it approaches the true bias, with the exception of the Lévy process with an unknown mean and random x₀, for which all bias approximations retain a large discrepancy from the true bias. All simulation results in this section illustrate that considering nonnormality and the higher-order
bias approximation is useful for improving the efficiency and accuracy of the mean reversion parameter estimation in finite samples.

5 Conclusions

This paper considers the nonnormality of the error terms under Lévy processes with both a known and an unknown mean. We obtain bias approximations for the mean reversion parameter estimator under general errors and find that the skewness and kurtosis of the errors, the starting data point, the long-term mean (θ), the diffusion parameter (σ), and κ itself all affect the bias of the estimator of κ. Monte Carlo simulations provide strong support that our proposed bias-corrected estimator of the mean reversion parameter is efficient in finite samples.
More informationE cient Method of Moments Estimators for Integer Time Series Models. Vance L. Martin University of Melbourne
E cient Method of Moments Estimators for Integer Time Series Models Vance L. Martin University of Melbourne A.R.Tremayne University of New South Wales and University of Liverpool Robert C. Jung Universität
More informationECON2285: Mathematical Economics
ECON2285: Mathematical Economics Yulei Luo Economics, HKU September 17, 2018 Luo, Y. (Economics, HKU) ME September 17, 2018 1 / 46 Static Optimization and Extreme Values In this topic, we will study goal
More informationLong-Horizon Regressions when the Predictor is Slowly Varying 1
Long-Horizon Regressions when the Predictor is Slowly Varying Roger Moon USC Antonio Rubia University of Alicante and UCSD 3 November 3, 5 Rossen Valkanov UCSD 4 We thank Alberto Plazzi, Pedro Santa-Clara,
More informationAffine Processes. Econometric specifications. Eduardo Rossi. University of Pavia. March 17, 2009
Affine Processes Econometric specifications Eduardo Rossi University of Pavia March 17, 2009 Eduardo Rossi (University of Pavia) Affine Processes March 17, 2009 1 / 40 Outline 1 Affine Processes 2 Affine
More informationNorwegian-Ukrainian Winter School 2018 on Stochastic Analysis, Probability Theory and Related Topics Abstracts of the presentations
Norwegian-Ukrainian Winter School 2018 on Stochastic Analysis, Probability Theory and Related Topics s of the presentations Monday, 22. January: Professor Bernt Øksendal, University of Oslo, Norway Title:
More informationChapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models
Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models Fall 22 Contents Introduction 2. An illustrative example........................... 2.2 Discussion...................................
More informationShort T Dynamic Panel Data Models with Individual and Interactive Time E ects
Short T Dynamic Panel Data Models with Individual and Interactive Time E ects Kazuhiko Hayakawa Hiroshima University M. Hashem Pesaran University of Southern California, USA, and Trinity College, Cambridge
More information11. Bootstrap Methods
11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods
More informationBayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference
Bayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference Osnat Stramer 1 and Matthew Bognar 1 Department of Statistics and Actuarial Science, University of
More informationEstimation of a Local-Aggregate Network Model with. Sampled Networks
Estimation of a Local-Aggregate Network Model with Sampled Networks Xiaodong Liu y Department of Economics, University of Colorado, Boulder, CO 80309, USA August, 2012 Abstract This paper considers the
More informationVolatility. Gerald P. Dwyer. February Clemson University
Volatility Gerald P. Dwyer Clemson University February 2016 Outline 1 Volatility Characteristics of Time Series Heteroskedasticity Simpler Estimation Strategies Exponentially Weighted Moving Average Use
More information