On the Asymptotic Efficiency of GMM

On the Asymptotic Efficiency of GMM

Marine Carrasco† (Université de Montréal) and Jean-Pierre Florens‡ (Toulouse School of Economics)

Abstract

It is well known that the method of moments (MM) estimator is usually inefficient relative to the maximum likelihood estimator (MLE) unless the score is one of the moment conditions. First, this paper extends this result by giving a necessary and sufficient condition for the generalized MM (GMM) estimator to be as efficient as the maximum likelihood estimator. We show that the variance of the GMM estimator equals the Cramer-Rao bound if and only if the true score belongs to the closure of the linear space spanned by the moment conditions. This result generalizes former ones in two dimensions: (a) the moments may be correlated, (b) the number of moment restrictions may be infinite. Second, we derive the semiparametric efficiency bound when the observations are known to be Markov and satisfy a conditional moment restriction. We show that it coincides with the asymptotic variance of the optimal GMM estimator, thus extending results by Chamberlain (1987) to a dynamic setting. Moreover, we show that this bound is attainable using a continuum of moment conditions.

KEY WORDS: Information matrix, GMM, infinity of moment conditions, reproducing kernel Hilbert space, semiparametric efficiency bound.

The authors wish to thank the co-editor Yuichi Kitamura and two anonymous referees for helpful suggestions. We also thank Debopam Bhattacharya and the participants of the IEE conference in honor of Arnold Zellner (Washington D.C., 2003), the Winter Meeting of the Econometric Society (San Diego, 2004), and the CIRANO and CIREQ Conference on Operator Methods (Montreal, 2004). Carrasco gratefully acknowledges partial financial support from the National Science Foundation, grant # SES.

† University of Montreal, Departement de Sciences Economiques, CP 6128, succ. Centre Ville, Montreal, QC H3C 3J7, Canada. marine.carrasco@umontreal.ca.
‡ Toulouse School of Economics, IDEI, 21 allée de Brienne, Toulouse, France. florens@cict.fr.

1. Introduction

The method of moments was first introduced by Pearson (1894) but was partly abandoned after Fisher (1922) demonstrated that it was inefficient relative to maximum likelihood estimation. Later, the method of moments, which was extended into both the estimating function approach (Godambe, 1960, and Durbin, 1960) and the Generalized Method of Moments (GMM) (Hansen, 1982), became popular again because it does not require a complete specification of the model. Godambe (1960) demonstrates that the MM estimator based on the score is as efficient as the maximum likelihood estimator (MLE). However, using the score is not the only way for GMM estimators to reach efficiency. This paper aims to shed new light on the efficiency of GMM estimators. To discuss efficiency, it is important to be able to handle an infinite as well as a finite number of moment conditions. Indeed, it will become apparent later on that often the only way to reach efficiency is to use an infinite number of moment conditions. We start by giving a general framework for GMM estimation that applies indifferently to a finite and an infinite number (possibly a continuum) of moment conditions. In this framework, the asymptotic variance of the GMM estimator takes the form of an inner product in a Reproducing Kernel Hilbert Space (RKHS) associated with the covariance operator, as shown in Carrasco and Florens (2000) and Carrasco, Chernov, Ghysels, and Florens (2007). Using results by Parzen (1959, 1970), we show that the asymptotic variance can be rewritten as an inner product in a dual space of the RKHS that is easier to handle. This provides manageable formulas to compute the asymptotic variance even in challenging cases involving autocorrelation of the moments. Then, we apply this result to address the efficiency of GMM in two cases of interest: parametric and semiparametric.
First, we address the following simple question: under which conditions on the set of moment restrictions is the (optimal) GMM estimator asymptotically as efficient as the MLE? To answer this question, it suffices to compare the asymptotic variance of $\hat\theta$ with the inverse of Fisher's information matrix. We show that the GMM estimator reaches the Cramer-Rao bound if and only if the Data Generating Process (DGP) score belongs to the closure¹ of the set of moment conditions. Although this result may sound familiar, it generalizes former results in two dimensions: (a) the moment conditions may be autocorrelated, (b) the

¹ Note that when the set is infinite dimensional, the closure of the set may contain the score even if the individual moment conditions are very different from the score.

number of moment conditions may be infinite, since we allow for both a continuum and a countably infinite number of moment conditions. Our result can be particularly useful to devise an efficient GMM estimator in cases where MLE is either intractable or burdensome. For instance, it permits us to establish that the GMM estimator based on power functions or on the characteristic function is asymptotically efficient. In particular, it implies that the GMM estimator based on the joint characteristic function of $(Y_t, Y_{t-1}, \ldots, Y_{t-p})$ is asymptotically efficient when $Y_t$ is Markov of order p. To our knowledge, we are the first to give a formal proof of this result. Second, we consider a dynamic model where the only knowledge of the underlying model is that the observations are Markov and a conditional moment restriction is satisfied. We show that the GMM bound provides an efficiency bound in the much larger class of regular estimators. Our results extend those of Chamberlain (1987), Hansen (1993), and Wefelmeyer (1996), among others. The rest of the paper is organized as follows. Section 2 presents a general framework for GMM estimation with a finite or infinite number of moments. Section 3 characterizes the asymptotic variance of GMM in terms of inner products in a RKHS. Section 4 investigates the (parametric) asymptotic efficiency of GMM. Section 5 gives the semiparametric efficiency bound for conditional moment restrictions. Section 6 concludes. Appendix A gives a brief review of the main definitions and properties of operators and reproducing kernel Hilbert spaces. The proofs of the main propositions are in Appendix B.

2. A general framework for GMM estimation

This section provides a general framework for GMM that permits us to handle indifferently finitely as well as infinitely many moment restrictions.
Assume that the process $\{Y_t\}$ is stationary and ergodic. $Y$ may be multivariate² so that $Y_t \in \mathbb{R}^l$. The observations are given by $\{Y_1, \ldots, Y_T\}$. They are drawn from a distribution $F_0$. To allow for the most general setting, $F_0$ is not assumed to belong to a parametric family. Let $\theta \in \Theta \subset \mathbb{R}^q$. Define $L^2(Y)$ as the set $\{G(Y) : E^{F_0}|G(Y)|^2 < \infty\}$. Inference is based on a set of moment conditions

$$E^{F_0}[h(\tau, Y_t, \theta)] = 0, \quad \tau \in I, \qquad (2.1)$$

² This framework covers the case $Y_t = (\xi_t, \xi_{t-1}, \ldots, \xi_{t-l+1})'$ where $\xi_t$ is a univariate time series.

where $h(\tau, Y_t, \theta)$ is a function from $I \times \mathbb{R}^l \times \Theta$ to $\mathbb{R}$ or $\mathbb{C}$. Note that if h is a vector, $h(\tau, Y_t, \theta)$ selects the τth row of the vector h. Assume that $h(\tau, Y_t, \theta) \in L^2(Y)$ for any $\tau \in I$ and $\theta \in \Theta$.

Identification condition:

$$E^{F_0}[h(\tau, Y_t, \theta)] = 0, \ \forall \tau \in I \iff \theta = \theta_0. \qquad (2.2)$$

The moment conditions may be finite, as in the traditional method of moments, or infinite. They may be countable, for instance l = 1 and $h(\tau, Y_t, \theta) = Y_t^\tau - E^{F_0}(Y_t^\tau)$ with τ = 1, 2, .... They may also form a continuum, for instance $h(\tau, Y_t, \theta) = \exp(i\tau' Y_t) - \psi(\tau)$ where $\psi(\tau) = E^{F_0}[\exp(i\tau' Y_t)]$ is the characteristic function of $Y_t$ and $\tau \in \mathbb{R}^l$. For efficiency considerations, we might want to base the estimation on a full set of moment conditions indexed by $\tau \in I$ even if identification is achieved from a finite subset of I. Let π be a positive measure defined on I with support identical to I. π is chosen so that h belongs to $L^2(I \otimes \mathbb{R}^l, \pi \otimes F_0)$, which implies in particular that $h(\tau, Y_t, \theta)$, as a function of τ, belongs to $L^2(I, \pi)$, the Hilbert space of complex- or real-valued functions that are square integrable with respect to π. It should be stressed that π is introduced for technical reasons. We will see in Section 3.3 that the asymptotic variance of the GMM estimator is independent of π (but not of its support I). For complex-valued functions, we write

$$L^2(I, \pi) = \left\{ g : I \to \mathbb{C} \ \Big| \ \int_I |g(\tau)|^2\, \pi(d\tau) < \infty \right\}.$$

The inner product on this space is

$$\langle f, g \rangle = \int_I f(\tau)\, \overline{g(\tau)}\, \pi(d\tau),$$

where $\overline{g}$ denotes the complex conjugate of g. The term $\|g\|$ denotes the norm of g in $L^2(I, \pi)$. Note that $L^2(I, \pi)$ is always separable; it has a countable basis and hence is isomorphic to $l^2$. Depending on whether the number of moment restrictions is finite or infinite, the norm in $L^2(I, \pi)$ may take various forms. Examples of spaces $L^2(I, \pi)$ are:

Finite number of restrictions: Let I = {1, ..., M} and π be the uniform probability measure on I. Then the norm is the (scaled) euclidean norm $\|g\|^2 = \frac{1}{M}\sum_{\tau=1}^M |g(\tau)|^2$.

Countably infinite number of restrictions: Let I = N. If we define π as the counting measure, i.e. π(A) = card(A), we have $L^2(I, \pi) = l^2$, the space of sequences g = (g(1), g(2), ...) such that $\|g\|^2 = \sum_{\tau} |g(\tau)|^2 < \infty$. However, this space may be too small to include some moment conditions of interest (for instance, the constant function 1 is not square integrable with respect to the counting measure). A way to enlarge this space is to choose π to be a probability measure on N (for instance a Poisson probability measure, $\pi(\{\tau\}) = e^{-1}/\tau!$, τ = 0, 1, 2, ...); then $L^2(I, \pi)$, defined as the set of functions g such that $\|g\|^2 = \sum_\tau |g(\tau)|^2 \pi(\{\tau\}) < \infty$, is larger than $l^2$.

Continuum of restrictions on an interval: Let I = [0, 1]; π may be chosen as the Lebesgue measure on [0, 1]. Then the norm is $\|g\|^2 = \int_0^1 |g(\tau)|^2 d\tau$.

Continuum of restrictions on R: Let I = R. Here again π could be set equal to the Lebesgue measure if it is believed that the moment functions $h(\tau, Y_t, \theta)$ are square integrable with respect to Lebesgue measure. In some cases this is not realistic, and it may be useful to enlarge the space using an appropriate choice of π. For example, $h(\tau, Y_t, \theta) = \exp(i\tau' Y_t) - \psi(\tau)$ is not square integrable with respect to Lebesgue measure but is square integrable with respect to any probability measure π on R. Then the norm is $\|g\|^2 = \int |g(\tau)|^2 \pi(d\tau)$.

Vector of continua of restrictions: Let I = {1, ..., M} × R; then π could be chosen as the product measure of a uniform distribution on {1, ..., M} and the normal distribution.

Let K be the covariance operator with kernel $k(\tau, s) = \sum_{j=-\infty}^{\infty} E^{F_0}\left[ h(\tau, Y_t, \theta_0)\, \overline{h(s, Y_{t-j}, \theta_0)} \right]$. It associates to a function g of $L^2(I, \pi)$ the element of $L^2(I, \pi)$

$$(Kg)(\tau) = \int_I k(\tau, s)\, g(s)\, \pi(ds). \qquad (2.3)$$

For instance, in the case of a countably infinite number of moments, $(Kg)(\tau) = \sum_{s} k(\tau, s)\, g(s)\, \pi(\{s\})$, so that K is an infinite-dimensional matrix. Kuersteiner (2001) handles such an operator for the estimation of ARMA models by instrumental variables.
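As a numerical illustration of (2.3) (our own sketch with a toy kernel, not an estimator from the paper), a continuum of moments on I = [0, 1] with π = Lebesgue measure can be discretized on a grid, turning the operator into a matrix-vector product with quadrature weights:

```python
import numpy as np

# Discretize (Kg)(tau) = int_I k(tau, s) g(s) pi(ds) on I = [0, 1].
# The kernel k(tau, s) = min(tau, s) is a toy choice for illustration.
n = 200
grid = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)              # quadrature weights for pi = Lebesgue
k = np.minimum.outer(grid, grid)     # kernel matrix k(tau_i, s_j)

def apply_K(g):
    """Discretized action of the covariance operator on g."""
    return k @ (w * g)

g = np.sin(np.pi * grid)
Kg = apply_K(g)
print(Kg.shape)
```

The discretized K inherits the symmetry of the kernel, so its spectral decomposition (used by the regularizations discussed below) is well defined.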
Let $\hat h_T(\cdot, \theta) = \frac{1}{T} \sum_{t=1}^T h(\cdot, Y_t, \theta)$. The GMM estimators are solutions of

$$\min_\theta \left\| B\, \hat h_T(\cdot, \theta) \right\|^2,$$

where B is an operator from $L^2(I, \pi)$ to $L^2(I, \pi)$. Denote by $K^{-1}f$ the solution g (when it exists) of the equation Kg = f, and let $K^{-1/2} = (K^{-1})^{1/2}$. The optimal estimator in the class of GMM estimators is obtained for $B = K^{-1/2}$. Let $K_T$ be an estimator of K. In the case of a finite number of moments, we have the usual GMM objective function $\hat h_T' K_T^{-1} \hat h_T$, where $\hat h_T = (\hat h_T(1, \theta), \ldots, \hat h_T(M, \theta))'$ and K is the asymptotic covariance matrix of $\sqrt T\, \hat h_T$. Then $K_T^{-1}$ exists as soon as 0 is not an eigenvalue of $K_T$. This simple result does not hold when I is infinite dimensional. In general, K is compact and hence the equation Kg = f has a solution only for f in the range of K, which is a strict subset of $L^2(I, \pi)$. Moreover, when the solution g exists, it does not depend continuously on f. It should be stressed that the problem of the existence and stability of the solution to an equation Kg = f arises whether K is an integral operator (case of a continuum of moment conditions) or an infinite-dimensional matrix (case of countable moments). The degree of difficulty is the same in both cases; see Böttcher et al. (1996, Lecture 1). When I is infinite dimensional, $K^{-1/2}$ is not defined on the whole space $L^2(I, \pi)$ but on a subset denoted H(K) and called the Reproducing Kernel Hilbert Space (RKHS). We use the notation $(g, f)_K \equiv \langle K^{-1/2} g, K^{-1/2} f \rangle$ for the inner product in the RKHS associated with K. As the inverse of K is not continuous, it needs to be stabilized by the introduction of a regularization (or smoothing) term denoted $\alpha_T$. $(K_T^{\alpha_T})^{-1}$ is called a regularized inverse of K if it has the property that $(K_T^{\alpha_T})^{-1} f \to K^{-1} f$ as $\alpha_T$ goes to zero and T goes to infinity, for all f in the range of K. Let $(K_T^{\alpha_T})^{-1/2} = ((K_T^{\alpha_T})^{-1})^{1/2}$ be the regularized estimate of $K^{-1/2}$.
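To make the regularized inverse square root concrete, here is a minimal spectral sketch (ours, not from the paper) of the Tikhonov-type operator $(K^2 + \alpha I)^{-1/2} K^{1/2}$ applied to a finite discretization of K; `alpha` plays the role of the smoothing parameter $\alpha_T$.

```python
import numpy as np

# Sketch, assuming K is given as a symmetric positive semidefinite matrix
# (e.g. a discretized covariance operator).  We form the Tikhonov-
# regularized inverse square root (K^2 + alpha I)^{-1/2} K^{1/2} spectrally.
def tikhonov_inv_sqrt(K, alpha):
    lam, U = np.linalg.eigh(K)                     # K = U diag(lam) U'
    lam = np.clip(lam, 0.0, None)
    filt = np.sqrt(lam) / np.sqrt(lam**2 + alpha)  # spectral filter
    return U @ (filt[:, None] * U.T)

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
B = tikhonov_inv_sqrt(K, alpha=1e-8)
# For a well-conditioned K and tiny alpha, B approximates K^{-1/2},
# so B @ B @ K is close to the identity.
print(np.round(B @ B @ K, 4))
```

The filter $\sqrt{\lambda}/\sqrt{\lambda^2 + \alpha}$ behaves like $1/\sqrt{\lambda}$ on large eigenvalues but shrinks toward zero on small ones, which is exactly the stabilization the text describes.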
The GMM estimator is defined as

$$\hat\theta = \arg\min_\theta \left\| (K_T^{\alpha_T})^{-1/2}\, \hat h_T(\cdot, \theta) \right\|^2. \qquad (2.4)$$

In Carrasco and Florens (2000) and Carrasco, Chernov, Florens, and Ghysels (2007), we apply the Tikhonov regularization, which consists in replacing $(K_T^{\alpha_T})^{-1/2}$ by $(K_T^2 + \alpha_T I)^{-1/2} K_T^{1/2}$. Then the objective function in (2.4) can be rewritten in terms of products of vectors and matrices, as shown in Carrasco, Chernov, Florens, and Ghysels (2007, Proposition 3.4). Besides Tikhonov, other forms of regularization could be used; see Kress (1999) and Carrasco, Florens, and Renault (2007). Carrasco (2007) investigates two other regularizations for the estimation of a linear IV model: an iterative method called Landweber-Fridman, and another consisting in using the eigenfunctions of K that are associated with the largest eigenvalues, called spectral cut-off. Another approach, frequently used when the moment conditions are countable, consists in truncating the sequence of moment

conditions. This method can be regarded as a regularization scheme because it also involves a smoothing parameter, namely the number of moment conditions used in the estimation. Engl, Hanke, and Neubauer (2000, Sections 3.2 and 5.2) refer to this method as projection and recommend using it jointly with Tikhonov regularization when the data are noisy. Kuersteiner (2001) shows that efficiency (in a GMM sense) can be achieved by letting the number of moment conditions used in the estimation grow with the sample size. Donald and Newey (2001) give a method for selecting the optimal number of instruments by minimizing the mean square error of the estimator. One difference between projection and the other three regularizations mentioned earlier is that projection requires an a priori ordering of the instruments or moment conditions. The other methods are based on the spectral decomposition of the covariance operator and avoid ordering the moment conditions a priori. As our focus is on efficiency, we do not discuss the implementation of GMM any further. Under some regularity conditions and an appropriate decay rate of $\alpha_T$ toward zero, $\hat\theta$ satisfies

$$\sqrt{T}\,(\hat\theta - \theta_0) \xrightarrow{d} N(0, \Psi^{-1}) \qquad (2.5)$$

where Ψ is the q × q matrix with (i, j) element equal to

$$\Psi_{ij} = \left( E^{F_0}\left(\nabla_{\theta_i} h(\cdot, \theta)\right),\; E^{F_0}\left(\nabla_{\theta_j} h(\cdot, \theta)\right) \right)_K \Big|_{\theta=\theta_0}. \qquad (2.6)$$

For I finite, Formula (2.6) boils down to the familiar expression $\left[ E^{F_0}(\nabla_\theta h)'\, K^{-1}\, E^{F_0}(\nabla_\theta h) \right]_{\theta=\theta_0}$ in matrix notation. The result (2.5) is shown in Carrasco and Florens (2000) for I = [0, 1] and iid data, and in Carrasco, Chernov, Florens, and Ghysels (2007) for $I = \mathbb{R}^p$ and time series. In the presence of a countably infinite number of moments, the result has been established by Kuersteiner (2001) using projection and by Carrasco (2007) for three other regularizations. In the sequel, we assume that the conditions under which the result (2.5) holds are satisfied.
These regularity conditions include the differentiability of h with respect to θ and a functional central limit theorem to guarantee that $\sqrt{T}\, \hat h_T(\theta_0) \xrightarrow{d} N(0, K)$ in $L^2(I, \pi)$.

3. Norm in a RKHS and variance of the GMM estimator

Our aim is to compute the elements $\langle f, g \rangle_K$ that show up in the inverse of the asymptotic variance of the GMM estimator (Equation (2.6)). Calculation of this variance will permit us to establish whether the estimator is efficient or not.

3.1. Norm in a RKHS

Let K be the covariance operator defined in (2.3). The kernel k can be written as

$$k(\tau, s) = (h(\tau), h(s))_{L^2(F_0)} \qquad (3.1)$$

where

$$(h(\tau), h(s))_{L^2(F_0)} = \sum_{j=-\infty}^{\infty} E^{F_0}\left[ h(\tau, Y_0, \theta_0)\, \overline{h(s, Y_j, \theta_0)} \right]$$

and $L^2(F_0)$ is the Hilbert space³ of complex-valued functions $\varphi : \mathbb{R}^l \to \mathbb{C}$ such that $\sum_{j=-\infty}^{\infty} E^{F_0}\left[ \varphi(Y_0) \overline{\varphi(Y_j)} \right] < \infty$, with inner product

$$(\varphi, \phi)_{L^2(F_0)} = \sum_{j=-\infty}^{\infty} E^{F_0}\left[ \varphi(Y_0)\, \overline{\phi(Y_j)} \right].$$

The case where the moment functions are uncorrelated can be regarded as a special case of (3.1), where $(h(\tau), h(s))_{L^2(F_0)} = E^{F_0}\left[ h(\tau, Y_t, \theta_0)\, \overline{h(s, Y_t, \theta_0)} \right]$, and therefore is not treated separately. Define $L^2(h)$ as the closure of the subspace of $L^2(F_0)$ spanned by $\{h(\tau), \tau \in I\}$; it consists of linear combinations (for some integer n, reals $\omega_1, \ldots, \omega_n$, and indices $\tau_1, \ldots, \tau_n$) of the form

$$G_n = \sum_{l=1}^{n} \omega_l\, h(\tau_l)$$

and their limits in mean square, that is, random variables G such that $\|G_n - G\|^2_{L^2(F_0)} \to 0$ as n goes to infinity. Let H(K) denote the reproducing kernel Hilbert space (RKHS) associated with the kernel k. It is a space of functions defined on I. A definition and some

³ The space $L^2(F_0)$ coincides with $L^2(Y)$, the Hilbert space of square integrable functions with respect to $F_0$, only in the uncorrelated case.

properties are given in Appendix A. The notations $\|f\|_K$ and $\langle f, g \rangle_K$ represent the norm and inner product in H(K). The following proposition gives a way to compute $\langle f, g \rangle_K$ for any f, g ∈ H(K). Let

$$\mathcal{G}_g = \left\{ G \in L^2(F_0) : g(\tau) = (G, h(\tau))_{L^2(F_0)},\ \forall \tau \in I \right\}.$$

Proposition 3.1. Let k be a covariance kernel defined in (3.1) and H(K) be its associated RKHS.
(a) H(K) and $L^2(h)$ are isometrically isomorphic, i.e. there exists a one-to-one linear mapping $\tilde J$ (not necessarily unique) from H(K) to $L^2(h)$ such that $\langle f, g \rangle_K = (\tilde J(f), \tilde J(g))_{L^2(F_0)}$ for all f, g ∈ H(K). $\tilde J$ is referred to as a Hilbert space isomorphism from H(K) to $L^2(h)$.
(b) Let J be the Hilbert space isomorphism from H(K) to $L^2(h)$ that transforms $k(\cdot, \tau)$ into $h(\tau)$, i.e. $J(k(\cdot, \tau)) = h(\tau)$. We have J(g) = G and $\|g\|^2_K = \|G\|^2_{L^2(F_0)}$, where G is the unique element of $\mathcal{G}_g \cap L^2(h)$.
(c) Another characterization of J is the following: $J(g) = \arg\min_{G \in \mathcal{G}_g} \|G\|^2_{L^2(F_0)}$ and hence $\|g\|^2_K = \min_{G \in \mathcal{G}_g} \|G\|^2_{L^2(F_0)}$.

The results of Proposition 3.1 are scattered in Parzen (1959, 1970) and Saitoh (1997). Although these results are not new, they have not been presented in this compact form before. In Appendix B, we provide a personal proof of this proposition that we think is useful to understand the role played by the reproducing property. Note that (a) and (b) imply that for any $g_1, g_2 \in H(K)$ and $G_i \in \mathcal{G}_i \cap L^2(h)$, i = 1, 2 (where $\mathcal{G}_i$ denotes the set $\mathcal{G}_g$ associated with $g_i$), we have $\langle g_1, g_2 \rangle_K = (G_1, G_2)_{L^2(F_0)}$.

3.2. Asymptotic variance of the GMM estimator

To calculate the asymptotic variance of the GMM estimator, we apply the results of Proposition 3.1 to Equation (2.6) and set $g_i = E^{F_0}\left( \nabla_{\theta_i} h(\tau, Y_t, \theta) \right)\big|_{\theta=\theta_0}$. Define

$$\mathcal{G}_i = \left\{ G \in L^2(F_0) : E^{F_0}\left( \nabla_{\theta_i} h(\tau, Y_t, \theta) \right)\big|_{\theta=\theta_0} = (G, h(\tau))_{L^2(F_0)},\ \forall \tau \in I \right\}, \quad i = 1, 2, \ldots, q.$$
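In the finite-dimensional, uncorrelated case, Proposition 3.1 can be checked directly (the numbers below are our own toy example): with M moments and covariance matrix K, the representer of g is $G = \sum_l \omega_l h(\tau_l)$ with $\omega = K^{-1}g$, since $g(\tau) = (G, h(\tau)) = (K\omega)(\tau)$, and its variance reproduces $\|g\|^2_K = g' K^{-1} g$.

```python
import numpy as np

# Toy finite-dimensional check of Proposition 3.1(b)-(c).
K = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])      # covariance matrix of the moments
g = np.array([1.0, -0.5, 0.25])

omega = np.linalg.solve(K, g)             # weights of the representer G
rkhs_norm_sq = g @ np.linalg.solve(K, g)  # ||g||_K^2 = g' K^{-1} g
var_G = omega @ K @ omega                 # Var(G) = omega' K omega
print(np.isclose(rkhs_norm_sq, var_G))
```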

Corollary 3.1. Assume that the covariance operator K defined in (2.3) is Hilbert-Schmidt and has only positive eigenvalues. Then the inverse of the asymptotic variance of $\hat\theta$, Ψ, has (i, j) element

$$\Psi_{ij} = (G_i, G_j)_{L^2(F_0)}, \quad i, j = 1, \ldots, q,$$

where $G_i \in \mathcal{G}_i \cap L^2(h)$ for i = 1, 2, ..., q. Moreover, for any $\tilde G_i \in \mathcal{G}_i$, i = 1, 2, ..., q, the matrix $\tilde\Psi$ with principal element $(\tilde G_i, \tilde G_j)_{L^2(F_0)}$ satisfies the property that $\tilde\Psi - \Psi$ is nonnegative definite.

Corollary 3.1 shows that the $G_i$ in $L^2(h)$ correspond to the worst case (among the $G_i$ in $\mathcal{G}_i$), in the sense that they correspond to the largest value of the variance of the estimator. The condition that K does not admit a zero eigenvalue means that K is injective and $K^{-1/2}$ is well defined. In particular, it implies that the moment conditions are linearly independent, i.e. there are no redundant moment conditions. To see this, we write

$$\int h(\tau, Y_t, \theta_0)\, \omega(\tau)\, \pi(d\tau) = 0 \ \Rightarrow\ E\left[ \overline{h(s, Y_t, \theta_0)} \int h(\tau, Y_t, \theta_0)\, \omega(\tau)\, \pi(d\tau) \right] = 0$$
$$\Leftrightarrow\ \int E\left[ \overline{h(s, Y_t, \theta_0)}\, h(\tau, Y_t, \theta_0) \right] \omega(\tau)\, \pi(d\tau) = 0 \ \Leftrightarrow\ K\omega = 0 \ \Rightarrow\ \omega = 0,$$

where the second equivalence is justified by Fubini's theorem and the fact that K is Hilbert-Schmidt. Our approach is closely related to that of Hansen (1985) and Hansen, Heaton, and Ogaki (1988). Hansen (1985) considers a model with disturbance $\varepsilon = m(X, \beta)$, where β is the parameter of interest. There is a sequence of instruments $z_j$, j = 1, 2, ..., so that

$$E(z_j\, \varepsilon) = 0 \qquad (3.2)$$

for all j = 1, 2, .... Hansen derives the greatest lower bound for the asymptotic covariance matrices of GMM estimators based on any subsets of moments of type (3.2). We can derive this bound using the tools developed here.⁴ Indeed, this bound is the variance of the GMM estimator based on all the moment conditions. This is true because the variance of the GMM estimator can never decrease when the number of moments increases. So Formulas (2.5)-(2.6) and Corollary 3.1 provide a way to compute the efficiency bound of GMM estimators based on $h(j, \theta) = z_j\, m(X, \theta)$, j = 1, 2, .... When $\{h(j, \theta)\}$ are martingale difference sequences (m.d.s.), our method is very similar to that of Hansen (1985), as the condition satisfied by the elements of $\mathcal{G}_i$ is the same as Hansen's condition (4.8). When $\{h(j, \theta)\}$ are not m.d.s., the two approaches differ because Hansen follows Gordin in approximating stationary ergodic processes by m.d.s. Our method is more straightforward to apply. In the rest of the paper, we use Corollary 3.1 to derive conditions on (parametric and semiparametric) efficiency. However, it is worth noting that Corollary 3.1 is interesting in its own right and could be used in other circumstances, for instance to establish the redundancy of moments as in Hall, Inoue, Jana, and Shin (2007).

3.3. Role of the measure π

The measure π appears in the definition of the covariance operator K. However, from Definition A.2 and Proposition 3.1, we see that π does not play a role in the norm $\|g\|_K$. If the support I of π is changed, then different moment conditions are used and obviously the norm $\|g\|_K$ is altered; otherwise $\|g\|_K$ is not affected by the choice of π. This is an interesting result because it implies that the asymptotic variance of our estimator does not depend on π. Why did we introduce π in the first place? To establish the asymptotic normality of $\hat\theta$, we need a functional central limit theorem for $\frac{1}{\sqrt T} \sum_{t=1}^T h(\cdot, Y_t, \theta_0)$ in some Hilbert space H.
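The π-invariance of the norm can be verified in a small finite-dimensional example (our own, not from the paper): with I = {1, ..., M}, the operator acts as $(Kg)(i) = \sum_j k(i, j)\, \pi_j\, g(j)$ and the RKHS quadratic form $\langle K^{-1}g, g \rangle_\pi$ reduces to $g' k^{-1} g$, in which π cancels.

```python
import numpy as np

# Finite-dimensional sketch: the RKHS quadratic form is invariant to pi.
k = np.array([[2.0, 0.4],
              [0.4, 1.0]])          # covariance kernel on I = {1, 2}
g = np.array([0.7, -0.3])

def rkhs_norm_sq(pi):
    K_pi = k * pi[None, :]          # matrix of (Kg)(i) = sum_j k(i,j) pi_j g(j)
    x = np.linalg.solve(K_pi, g)    # x = K^{-1} g
    return float(np.sum(x * g * pi))  # <x, g>_pi

n1 = rkhs_norm_sq(np.array([0.5, 0.5]))
n2 = rkhs_norm_sq(np.array([0.9, 0.1]))
print(np.isclose(n1, n2))
```

Algebraically, $x = \mathrm{diag}(\pi)^{-1} k^{-1} g$, so the π in the inner product cancels the $\mathrm{diag}(\pi)^{-1}$, leaving $g' k^{-1} g$ for any choice of weights.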
We select $H = L^2(I, \pi)$ for some π such that $h(\cdot, Y_t, \theta_0)$ belongs to $L^2(I, \pi)$. It follows that the definition of the covariance operator K

⁴ If $z_j = \xi_{t-j}$ are the lagged values of some r.v. ξ, the observations $Y_t = (X_t, \xi_{t-1}, \xi_{t-2}, \ldots)'$ are infinite dimensional and this example does not satisfy our assumption $Y_t \in \mathbb{R}^l$. A modification of our approach to handle infinite-dimensional r.v. would be necessary. On the other hand, if $z_j = \xi^j$ for some scalar r.v. ξ, then $Y_t = (X_t, \xi_t)' \in \mathbb{R}^2$ and we can handle this case by setting τ = j ∈ N.

depends on π. By definition, K is such that

$$\langle K\varphi, \varphi \rangle = \lim_{T\to\infty} \operatorname{Var}\left( \left\langle \frac{1}{\sqrt T} \sum_{t=1}^T h(\cdot, Y_t, \theta_0),\ \varphi(\cdot) \right\rangle \right)$$

for all φ in $L^2(I, \pi)$. In the simplest case where $\{h(\cdot, Y_t, \theta_0)\}$ are uncorrelated, we have

$$\operatorname{Var}\left( \langle h(\cdot, Y_t, \theta_0), \varphi(\cdot) \rangle \right) = \operatorname{Var}\left( \int h(\tau, Y_t, \theta_0)\, \overline{\varphi(\tau)}\, \pi(d\tau) \right)$$
$$= E\left[ \int h(\tau, Y_t, \theta_0)\, \overline{\varphi(\tau)}\, \pi(d\tau) \int \overline{h(s, Y_t, \theta_0)}\, \varphi(s)\, \pi(ds) \right]$$
$$= \iint E\left[ h(\tau, Y_t, \theta_0)\, \overline{h(s, Y_t, \theta_0)} \right] \overline{\varphi(\tau)}\, \varphi(s)\, \pi(d\tau)\, \pi(ds) = \langle K\varphi, \varphi \rangle.$$

We see that the kernel of the covariance operator, $k(\tau, s) = E\left[ h(\tau, Y_t, \theta_0)\, \overline{h(s, Y_t, \theta_0)} \right]$, remains unchanged for every π. However, the eigenvalues and eigenfunctions of K, which are solutions of $K\phi_j = \lambda_j \phi_j$, will vary with π. In practice, the small-sample properties of our estimator may be affected by the choice of π. But, as our paper is about asymptotic efficiency, we will not investigate the role of π any further.

4. MLE efficiency of GMM estimators

In this section, we investigate the conditions under which the GMM estimator is asymptotically as efficient as MLE. When we say that a GMM estimator is efficient, we mean that its asymptotic variance is equal to the inverse of the Fisher information matrix. Godambe (1960) shows that the GMM estimator based on the score is efficient. However, this is not the only way for GMM to reach efficiency. It can also be reached using a continuum or a countably infinite number of moment conditions. We assume that the distribution of $Y_t$ belongs to a parametric family indexed by θ; its conditional pdf is denoted $f(Y_t \mid Y_{t-1}, Y_{t-2}, \ldots, Y_{t-p}; \theta)$ or $f(Y_t \mid Y_{t-1}^p; \theta)$. $F_0$ corresponds to the case where $\theta = \theta_0$, so that $E^{F_0}$ is replaced by $E^{\theta_0}$. We consider arbitrary functions $h(\tau, Y_t^{p+1}, \theta_0)$ that satisfy $E^{\theta_0}\left[ h(\tau, Y_t^{p+1}, \theta_0) \right] = 0$, where $Y_t^{p+1}$ is the (p + 1)-vector of r.v.,

$Y_t^{p+1} = (Y_t, Y_{t-1}, \ldots, Y_{t-p})'$. We want to provide a necessary and sufficient condition on h for the GMM estimator to be asymptotically efficient. To answer this question, it suffices to compare the asymptotic variance given in Corollary 3.1 with the inverse of Fisher's information matrix.

Assumption 4.1. (i) $Y_t$ is a stationary ergodic Markov chain of order p. (ii) Let $s_t = \nabla_\theta \ln f(Y_t \mid Y_{t-1}^p; \theta)\big|_{\theta=\theta_0}$. The information matrix $E^{\theta_0}[s_t s_t']$ exists and is positive definite. (iii) (Differentiation under the integral sign) Let $a(y^{2T+1}, \theta) = h(\tau, y_t^{p+1}, \theta)\, f(y^{2T+1}; \theta)$, where $f(\cdot; \theta)$ denotes the joint density of $Y^{2T+1}$. $a(y^{2T+1}, \theta)$ is continuously differentiable in a neighborhood N of $\theta_0$ and $\sup_{\theta \in N} \int \left| \nabla_\theta\, a(y^{2T+1}, \theta) \right| dy^{2T+1} < \infty$.

Proposition 4.1. Assume that Assumption 4.1 and Equations (2.5) and (2.6) hold. (i) The inverse of the asymptotic variance of $\hat\theta$, the GMM estimator based on $\{h(\tau, Y_t^{p+1}, \theta), \tau \in I\}$, is given by $\Psi = (\mathrm{Proj}\, s_t,\ \mathrm{Proj}\, s_t)_{L^2(F_0)}$, where $\mathrm{Proj}\, s_t$ is the orthogonal projection (in $L^2(F_0)$) of $s_t$ on $L^2(h)$, and $L^2(h)$ is the closure of the span of $\{h(\tau, Y_t^{p+1}, \theta_0), \tau \in I\}$. (ii) Moreover, Ψ coincides with Fisher's information matrix if and only if

$$s_{ti} \in L^2(h) \text{ for all } i = 1, 2, \ldots, q, \qquad (4.1)$$

where $s_{ti}$ is the ith element of $s_t$.

In the case where I is finite, and hence $h_t \equiv (h(\tau, Y_t^{p+1}, \theta_0), \tau \in I)$ is a vector, the projection of $s_t$ on $L^2(h)$ is equal to $\mathrm{Proj}\, s_t = (h, s)'_{L^2(F_0)}\, (h, h)^{-1}_{L^2(F_0)}\, h_t$, and Ψ becomes

$$\Psi = (h, s)'_{L^2(F_0)}\, (h, h)^{-1}_{L^2(F_0)}\, (h, s)_{L^2(F_0)} = \sum_j E^{\theta_0}(s_{t-j}\, h_t') \left( \sum_j E^{\theta_0}(h_t\, h_{t-j}') \right)^{-1} \sum_j E^{\theta_0}(h_t\, s_{t-j}').$$
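A toy Monte Carlo check of Proposition 4.1 in the finite i.i.d. case (our own example, not the paper's): for Y ~ N(θ, 1), the score is $s_t = Y_t - \theta$, which is itself one of the moment functions below, so the GMM information $(h, s)'(h, h)^{-1}(h, s)$ should approach the Fisher information, here equal to 1.

```python
import numpy as np

# Moments: h1 = Y - theta (the score itself) and h2 = Y^2 - (theta^2 + 1).
rng = np.random.default_rng(1)
theta0 = 0.0
Y = rng.normal(theta0, 1.0, size=200_000)

H = np.stack([Y - theta0, Y**2 - (theta0**2 + 1.0)])
s = Y - theta0                      # score of N(theta, 1) at theta0

K = (H @ H.T) / Y.size              # sample (h, h) matrix
c = (H @ s) / Y.size                # sample (h, s) vector
gmm_info = c @ np.linalg.solve(K, c)
print(round(float(gmm_info), 2))    # should be close to 1.0
```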

We recognize the expression of the inverse of the asymptotic covariance matrix of the GMM estimator characterized by Assumption 2 and Theorem 1(a) of Hall, Inoue, Jana, and Shin (2007) for a finite number of moment conditions. Note that MLE efficiency is obtained only for the optimal GMM estimator, that is, if the optimal weighting matrix/operator is used. A similar result (for a countable sequence of moments) is discussed by Tauchen (1997) in the static case and by Gallant and Long (1997), in the context of moment conditions based on the score of a semi-nonparametric auxiliary model, in the dynamic case. Bates and White (1993) give conditions for an estimator to have minimum asymptotic variance in a class of estimators and show that the GMM estimator has minimum variance in the class of minimum discrepancy estimators. Davidson (1993) gives a way to construct efficient estimators by projection. The proof of Proposition 4.1 uses the results of Corollary 3.1. It is proved in the general case where $h(\tau, Y_t^{p+1}, \theta)$ is autocorrelated. The key of the proof consists in showing that the score $s_{ti}$ belongs to $\mathcal{G}_i$. In the independent case, this is equivalent to showing

$$E^{\theta_0}\left( \nabla_{\theta_i} h(\tau, Y_t^{p+1}, \theta) \right)\Big|_{\theta=\theta_0} = -\,\mathrm{cov}^{\theta_0}\left( s_{ti},\ h(\tau, Y_t^{p+1}, \theta_0) \right). \qquad (4.2)$$

This equation can be obtained by differentiating $E^\theta\left[ h(\tau, Y_t^{p+1}, \theta) \right] = 0$ with respect to θ. Godambe (1960) proves the efficiency of MLE in the class of MM estimators by applying Cauchy-Schwarz to Equation (4.2). The following corollary to Proposition 4.1 gives a sufficient condition for efficiency. $L^2(X)$ denotes the space of functions of $Y_t^{p+1}$ that are square integrable.

Corollary 4.2. Assume that Assumption 4.1 holds. If $\{h(\tau) : \tau \in I\}$ spans $L^2(X)$ (equivalently, $L^2(h) = L^2(X)$), then the GMM estimator based on $h(\tau, Y_t^{p+1}, \theta)$ is asymptotically efficient.

$L^2(X)$ is a separable Hilbert space and hence by definition can be approximated by a countable sequence of functions. Note that this sequence is not unique.
The statement that $\{h(\tau) : \tau \in I\}$ spans $L^2(X)$ is equivalent to saying that the family of moment functions $\{h(\tau) : \tau \in I\}$ is complete in $L^2(X)$. The separability property has been exploited in various articles (see Chamberlain (1987) and Gallant and Tauchen (1996)) to show that efficiency can be approached by using a sequence of moment conditions. Below, we provide three examples where the GMM estimator

is as efficient as the MLE. They illustrate the fact that efficiency can be reached either from a countably infinite sequence or from a continuum of moment conditions.

Example 1. (Power functions) Assume that the data are iid with a distribution for which the moment generating function exists. One can estimate θ by using a countably infinite number of moments indexed by τ: $h(\tau, Y, \theta) = Y^\tau - E^\theta(Y^\tau)$, τ = 1, 2, .... The GMM estimator will be efficient if the polynomials are dense in $L^2(Y)$. From Krein (1945), we know that the polynomials are dense in $L^2(Y)$ if either (i) Y has bounded support or (ii) the density $f_0$ of Y satisfies

$$\int \frac{-\ln f_0(y)}{1 + y^2}\, dy = \infty.$$

The moments $E^\theta(Y^\tau)$ are usually unknown and it may be necessary to estimate them by simulations. The simulated method of moments (see Gourieroux and Monfort, 1996, and the survey by Carrasco and Florens, 2002) consists in matching sample moments with moments estimated from simulated data. As the simulated method of moments is asymptotically equivalent to the usual GMM when the number of simulations goes to infinity, our results also apply to these estimators.

Example 2. (Efficient Method of Moments) Now assume that $\{Y_t\}$ is a stationary weakly dependent process. The indirect inference method proposed by Gourieroux, Monfort, and Renault (1993) and the Efficient Method of Moments (EMM) proposed by Gallant and Tauchen (1996) are methods designed to handle cases for which the MLE is intractable. These methods use an auxiliary model (easy to estimate) indexed by a parameter β. The auxiliary model defines a conditional pdf $f^M(Y_t \mid Y_{t-1}, \ldots, Y_{t-p}; \beta)$, where M denotes the dimension of β. Denote $\beta(\theta_0)$ the pseudo-true value of β (the limit of the QMLE of β) and denote $h_{Mt}$ the pseudo score:

$$h_{Mt}(\theta_0) = \nabla_\beta \ln f^M\left( Y_t \mid Y_{t-1}, \ldots, Y_{t-p}; \beta(\theta_0) \right).$$

The asymptotic variance of $\sqrt T\,(\hat\theta - \theta_0)$ is given by

$$\left\{ E^{F_0}\left( \nabla_\theta h_{Mt} \right)' \left[ \sum_{j=-\infty}^{\infty} E^{F_0}\left( h_{Mt}\, h'_{Mt-j} \right) \right]^{-1} E^{F_0}\left( \nabla_\theta h_{Mt} \right) \right\}^{-1}.$$

By Proposition 4.1, efficiency is achieved if the DGP score belongs to the closure of the set $\{h_{Mt}\}$. This will be satisfied in general if the auxiliary model is a seminonparametric model using a Hermite expansion, as suggested by Gallant and Tauchen (1996). Gallant and Long (1997) show that the variance of the EMM estimator converges to the Cramer-Rao efficiency bound when M and p go to infinity (see also Gallant and Nychka 1987). Calzolari, Fiorentini, and Sentana (2004) give a condition for the efficiency of the constrained indirect inference estimator in the case where the (finitely many) moment functions are not necessarily martingale difference sequences. Our result generalizes theirs to the case of an infinite number of moment conditions.

Example 3. (Characteristic function) Consider moment functions based on the joint characteristic function of $Y_t^{p+1}$:

$$h(\tau, Y_t^{p+1}, \theta) = e^{i\tau' Y_t^{p+1}} - E^\theta\left[ e^{i\tau' Y_t^{p+1}} \right], \qquad (4.3)$$

or the conditional characteristic function:

$$h(\tau, Y_t^{p+1}, \theta) = e^{i s' Y_{t-1}^p} \left( e^{i r Y_t} - E^\theta\left( e^{i r Y_t} \mid Y_{t-1}^p \right) \right), \qquad (4.4)$$

where τ = (s', r)'. In both cases, a continuum of moment conditions is available. Corollary 4.2 implies that GMM estimators based on either set of moment functions are asymptotically efficient. This result was shown by Singleton (2001) for (4.4) but is new for (4.3).⁵ Note that the first set of moment conditions is autocorrelated unless the data are iid. On the other hand, the moment conditions (4.4) are martingale difference sequences and hence are uncorrelated. Therefore, the second set of moment conditions may be easier to handle than the first one. If the conditional characteristic function is unknown, it may be difficult to estimate it via simulations, as this would require drawing from the conditional distribution, which is not always possible. On the other hand, simulation of the joint characteristic function is easy to perform, as discussed in Carrasco, Chernov, Florens, and Ghysels (2007).
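The unconditional moment function (4.3) is easy to evaluate empirically. A minimal sketch (our own, for the scalar i.i.d. model Y ~ N(θ, 1), whose characteristic function is known in closed form):

```python
import numpy as np

# Empirical version of h(tau; Y; theta) = exp(i tau Y) - psi_theta(tau)
# for the scalar i.i.d. model Y ~ N(theta, 1) (p = 0, so Y^{p+1} = Y_t).
rng = np.random.default_rng(0)
theta0 = 2.0
Y = rng.normal(theta0, 1.0, size=5000)

def h_bar(tau, theta, Y):
    """Sample mean of the characteristic-function moment at tau."""
    psi = np.exp(1j * tau * theta - 0.5 * tau**2)  # c.f. of N(theta, 1)
    return np.mean(np.exp(1j * tau * Y)) - psi

taus = np.linspace(-2.0, 2.0, 9)
vals = np.array([h_bar(t, theta0, Y) for t in taus])
print(float(np.max(np.abs(vals))))  # small at theta = theta0
```

At the true parameter, the sample moment is of order $1/\sqrt{T}$ uniformly over the grid of τ values; away from $\theta_0$, the mismatch between the empirical and model characteristic functions drives the GMM objective up.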
Applications of the characteristic function to the estimation of diffusions are given by Singleton (2001) and Carrasco, Chernov, Florens, and Ghysels (2007), among others.

⁵ Feuerverger (1990) claims the result but his proof is heuristic.
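A minimal sketch of estimation from moment conditions of the type (4.3), assuming for illustration the simplest iid case, $Y_t \sim N(\mu, 1)$ with known variance, so that $E^{\theta}(e^{i\tau Y}) = \exp(i\tau\mu - \tau^2/2)$. The continuum of $\tau$'s is discretized on a small grid and the moments are combined with identity weighting; the paper's optimal weighting by the inverse covariance operator is omitted:

```python
import cmath
import random

# Hypothetical illustration of characteristic-function GMM: match the
# empirical cf of an N(mu, 1) sample to exp(i*tau*mu - tau^2/2) over a
# small grid of taus (identity weighting across taus).
random.seed(0)
mu0 = 1.5
ys = [random.gauss(mu0, 1.0) for _ in range(5000)]
taus = [0.25 * k for k in range(1, 9)]   # taus in a small interval (0, 2]
ecf = {t: sum(cmath.exp(1j * t * y) for y in ys) / len(ys) for t in taus}

def objective(mu):
    # sum over the grid of |empirical cf - model cf|^2
    return sum(abs(ecf[t] - cmath.exp(1j * t * mu - t * t / 2)) ** 2
               for t in taus)

grid = [-1.0 + 0.005 * k for k in range(1001)]   # mu in [-1, 4]
mu_hat = min(grid, key=objective)
```

Restricting the $\tau$'s to a small interval around the origin, as in the text's continuum-of-moments results, is enough for identification here.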

5. Semiparametric efficiency bound for conditional restrictions

In an i.i.d. setting, Chamberlain (1987) shows that the asymptotic variance of the GMM estimator provides an efficiency bound in a much broader sense, i.e., it is the lower bound for the asymptotic variance of any consistent, asymptotically normal estimator in a class of nonparametric models, where the only restriction imposed on the distribution is a set of moment conditions. In this section, we extend this result to a time-series context. Here, we do not assume a specific parametric model. The only information we have is:

A1. The observations are given by $(Y_{p+1}, Y_{p+2}, \ldots, Y_{T+1})$ where $Y_t \in \mathbb{R}$ is a stationary ergodic Markov chain of order $p$. Let $X_t = (Y_t, \ldots, Y_{t-p+1})$ be the lagged values of $Y_t$. The transition distribution from $X_t$ to $Y_{t+1}$ is unknown and denoted by $Q(x, dy)$. $X_t$ has a nondegenerate invariant distribution denoted $\pi$.

A2. The model is now characterized by a conditional moment restriction:
$$E\left(\psi(X_t, Y_{t+1}; \theta_0) \mid X_t\right) = 0.$$

A3. $\psi : \mathbb{R}^{p+1} \times \Theta \to \mathbb{R}^m$ is a continuous function of $X_t, Y_{t+1}$ and is twice continuously differentiable with respect to $\theta$. Denote by $\nabla \psi(X_t, Y_{t+1}; \theta_0)$ the $q \times m$ matrix with $(i, j)$ element $\partial \psi_j(X_t, Y_{t+1}; \theta)/\partial \theta_i$ evaluated at $\theta = \theta_0$, where $\psi_j(X_t, Y_{t+1}; \theta)$ is the $j$th element of $\psi(X_t, Y_{t+1}; \theta)$. $\nabla \psi(X_t, Y_{t+1}; \theta_0)$ is bounded away from zero almost surely. Moreover, $I_0$ defined by
$$I_0 \equiv E\!\left[ E\left(\nabla \psi(X_t, Y_{t+1}; \theta_0) \mid X_t\right) E\left(\psi(X_t, Y_{t+1}; \theta_0)\, \psi(X_t, Y_{t+1}; \theta_0)' \mid X_t\right)^{-1} E\left(\nabla \psi(X_t, Y_{t+1}; \theta_0) \mid X_t\right)' \right]$$
exists and is positive.

First, we recall the efficiency bound in the class of GMM estimators and we explain how to construct an estimator that reaches this bound. Second, we show that this bound coincides with the semiparametric efficiency bound.

5.1. Bound in the class of GMM estimators

In this section, we are interested in the estimators of $\theta$ with minimal variance in the class of GMM estimators. The GMM estimator with minimal variance will be called optimal.
It is known from Godambe (1985) and Hansen, Heaton, and 17

Ogaki (1988, p. 867) that the optimal GMM estimator is obtained by solving the exactly identified system of equations
$$\sum_{t=1}^{T} h^*(X_t, Y_{t+1}; \theta) = 0, \qquad (5.1)$$
where
$$h^*(X_t, Y_{t+1}; \theta) = A^*(X_t)\, \psi(X_t, Y_{t+1}; \theta)$$
and the optimal instrument satisfies
$$A^*(X_t) = E\left(\nabla \psi(X_t, Y_{t+1}; \theta_0) \mid X_t\right) E\left(\psi(X_t, Y_{t+1}; \theta_0)\, \psi(X_t, Y_{t+1}; \theta_0)' \mid X_t\right)^{-1}. \qquad (5.2)$$
Then, the asymptotic variance is given by
$$E\left[ h^*(X_t, Y_{t+1}; \theta_0)\, h^*(X_t, Y_{t+1}; \theta_0)' \right]^{-1} \qquad (5.3)$$
$$= E\!\left[ E\left(\nabla \psi(X_t, Y_{t+1}; \theta_0) \mid X_t\right) E\left(\psi(X_t, Y_{t+1}; \theta_0)\, \psi(X_t, Y_{t+1}; \theta_0)' \mid X_t\right)^{-1} E\left(\nabla \psi(X_t, Y_{t+1}; \theta_0) \mid X_t\right)' \right]^{-1} = I_0^{-1}.$$
The unknown $A^*$ could be replaced by a consistent nonparametric estimator to obtain a feasible estimator of $\theta$. The resulting estimator would also be efficient (see Newey, 1993, in the iid case and Wefelmeyer, 1996, for a dynamic example). However, estimating the conditional expectations in (5.2) may be burdensome. We show that this bound can be attained by using a set of instruments, $\mathcal{A}$, which is dense in $\tilde{C}(\mathbb{R}) = \{\nu \text{ continuous functions of } \mathbb{R}^p \text{ into } \mathbb{R}\}$.

Proposition 5.1. Let $\mathcal{A}$ be a set of instruments $A$ in $L^2(X_t)$. If the closed linear span of $\mathcal{A}$, denoted $\bar{\mathcal{A}}$, contains $\tilde{C}(\mathbb{R})$, then the GMM estimator based on the moments
$$h(A; X_t, Y_{t+1}; \theta) = A(X_t)\, \psi(X_t, Y_{t+1}; \theta), \quad \text{for all } A \in \mathcal{A},$$
is optimal.

The proof of this result is immediate. As the asymptotic variance of the GMM estimator decreases when adding more moment conditions, the efficiency bound (in the GMM class) is reached by the estimator that uses the richest class of

instruments. Although the optimal instrument defined in (5.2) cannot be written as a finite linear combination of elements of $\mathcal{A}$, it belongs to $\bar{\mathcal{A}}$.

Now we discuss how to choose the instruments so that they span $\tilde{C}(\mathbb{R})$. In the case where $X_t$ has bounded support, Bierens (1990) shows that $\mathcal{A} = \{\exp(\tau' X_t) : \tau \in I \subset \mathbb{R}^p\}$ satisfies this condition. Stinchcombe and White (1998) show that many other functions $A$ can be used. Both papers are concerned with consistent specification testing and not with estimation. If $X_t$ has bounded support, one can also use power functions of $X_t$ as instruments, $\mathcal{A} = \{X_t^{\tau}, \tau = 1, 2, \ldots\}$. The following lemma shows how to construct a continuum of moment conditions that will deliver efficiency.

Lemma 5.2. Let $\mathcal{A} = \{a(\tau' \Phi(X_t)) : \tau \in I \subset \mathbb{R}^p\}$, where $I$ is an arbitrary hypercube of $\mathbb{R}^p$ containing $(0, \ldots, 0)'$, $\Phi(X_t)$ is a bounded, continuous, one-to-one mapping of $X_t$, and $a$ is infinitely differentiable around 0 with $a^{(l)}(0) \neq 0$ for $l = 1, 2, \ldots$. Then $\mathcal{A}$ spans $\tilde{C}(\mathbb{R})$.

Examples of functions $a$ satisfying the requirements of Lemma 5.2 are the exponential function, the logistic cumulative distribution function (cdf), and the noncentered normal cdf and density, among others. Note that Lemma 5.2 does not require $X_t$ to have bounded support. If $X_t$ has bounded support, then it suffices to take $\Phi(x) = x$; otherwise a candidate for $\Phi$ is $\Phi(x_1, \ldots, x_p) = (\tan^{-1}(x_1), \ldots, \tan^{-1}(x_p))'$. Observe also that it is not necessary to take all $\tau$ in $\mathbb{R}^p$; it suffices to take all $\tau$ in an arbitrarily small interval around 0.

5.2. Bound in the class of regular estimators

In this section, we show that the bound (5.3) corresponds to the efficiency bound not just in the class of GMM estimators but also in the much larger class of regular estimators, and hence can be called the semiparametric efficiency bound. We adopt the usual approach for establishing semiparametric efficiency as described in Bickel, Klaassen, Ritov, and Wellner (1993).
It consists in constructing local parametric submodels for which the MLE is asymptotically normal. Among these submodels, we isolate the worst-case scenario leading to an MLE with maximal variance. Then, using the convolution theorem, we show that this variance gives a bound. Although the concept of local asymptotic normality and the convolution theorem were initially introduced in an iid context, they have been extended to time series in a natural way; see the various papers by Wefelmeyer (cited in the recent survey by Greenwood, Müller, and Wefelmeyer (2004)) and McNeney and

Wellner (2000). Komunjer and Vuong (2008) derive the semiparametric efficiency bound for quantile models in time series. Instead of appealing to the convolution theorem, they construct global (instead of local) parametric models that encompass the true model and find the least favorable among these models. We could have adopted their approach, but the proof would have been very different and more cumbersome than the present one. To the best of our knowledge, efficiency in the model defined by A2 has not been considered yet. Condition A2 includes as a special case the conditional moment restrictions of Wefelmeyer (1996). Our proof follows closely that of Theorem 1 in Wefelmeyer (1996). We adapt it to our more general setting by allowing $\psi$ to be nonlinear in $y$ and the dimensions of $\theta$ and $\psi$ to be larger than 1.

Note that condition A2 can be rewritten as
$$\int \psi(x, y; \theta_0)\, Q(x, dy) = 0_m. \qquad (5.4)$$
First we must construct a local submodel. Let $\mathcal{G}$ be the set of bounded functions $G(x, y)$ that take their values in $\mathbb{R}^q$ (where $q$ is the dimension of $\theta$) such that
$$\int G(x, y)\, Q(x, dy) = 0_q, \qquad (5.5)$$
$$\int G(x, y)\, \psi(x, y; \theta_0)'\, Q(x, dy) = \int \nabla \psi(x, y; \theta_0)\, Q(x, dy). \qquad (5.6)$$
For $G \in \mathcal{G}$ and a constant $u \in \mathbb{R}^q$, we construct a transition distribution $Q_{TuG}$ such that (5.4) holds for $Q = Q_{TuG}$ and $\theta = \theta_0 + T^{-1/2} u$.

Proposition 5.1. A local submodel at $Q$ is given by the transition distribution
$$Q_{TuG}(x, dy) = \left(1 + T^{-1/2}\left(G(x, y) + r_T(x, y)\right)' u\right) Q(x, dy),$$
where $r_T(x, y)$ is a vector of $\mathbb{R}^q$ of order $T^{-1/2}$ defined by Equations (B.6) and (B.8) in the Appendix.

Let $P_T$ denote the joint distribution of $(Y_{p+1}, Y_{p+2}, \ldots, Y_{T+1})$ if $Q$ is true, and $P_{TuG}$ if $Q_{TuG}$ is true. The family $P_{TuG}$ is a local model at $Q$. It includes the true model for $u = 0$. By a Taylor expansion around $u = 0$, we have

$$\log \frac{dP_{TuG}}{dP_T} = \sum_{t=1}^{T} \ln\left(1 + T^{-1/2}\left(G(X_t, Y_{t+1}) + r_T(X_t, Y_{t+1})\right)' u\right)$$
$$= T^{-1/2} \sum_{t=1}^{T} (G_t + r_{Tt})' u - \frac{1}{2}\, T^{-1} \sum_{t=1}^{T} u'(G_t + r_{Tt})(G_t + r_{Tt})' u + O_{P_T}\left(T^{-1/2}\right)$$
$$= T^{-1/2} \sum_{t=1}^{T} G_t' u - \frac{1}{2}\, u' \left( T^{-1} \sum_{t=1}^{T} G_t G_t' \right) u + o_{P_T}(1),$$
using the notation $G_t = G(X_t, Y_{t+1})$ and $r_{Tt} = r_T(X_t, Y_{t+1})$. As a result of the law of large numbers and local asymptotic normality (LAN) of Markov chains (Roussas, 1965), we have
$$T^{-1} \sum_{t=1}^{T} G_t G_t' \xrightarrow{P} E(GG') \quad \text{under } P_T,$$
$$T^{-1/2} \sum_{t=1}^{T} G_t \xrightarrow{d} N\left(0, E(GG')\right) \quad \text{under } P_T,$$
where $E(\cdot)$ denotes the expectation with respect to the invariant joint distribution of $(X_t, Y_{t+1})$, denoted $\pi \otimes Q$. Estimating $u$ in the local model corresponds to estimating $\theta$ in the original model. The true value of $u$ is zero. The MLE of $u$ in $P_{TuG}$ has the property that
$$\hat{u} \xrightarrow{d} N\left(0, E(GG')^{-1}\right) \quad \text{under } P_T,$$
and the least favorable submodel corresponds to the $G$ that maximizes $E(GG')^{-1}$ or, equivalently, minimizes $E(GG')$. Let $\bar{\mathcal{G}}$ be the closure of $\mathcal{G}$ in $L^2(\pi \otimes Q)$. The efficient score function $s \in \bar{\mathcal{G}}$ at $Q$ is such that
$$s = \arg\min_{G \in \bar{\mathcal{G}}} E(GG'). \qquad (5.7)$$
The information bound at $Q$ is $E(ss')$. Following Wefelmeyer (1996), we restrict our attention to the following class of regular estimators.

Definition 5.1. An estimator $\hat{\theta}$ is called regular at $Q$ with limit $L$ if, for all $G \in \mathcal{G}$ and $u \in \mathbb{R}^q$,
$$\sqrt{T}\left(\hat{\theta} - \theta_0 - T^{-1/2} u\right) \xrightarrow{d} L \quad \text{under } P_{TuG},$$
where $L$ does not depend on $G$ and $u$.

By the convolution theorem (Hajek, 1970), we have $L = M + N$, where $M$ is independent of $N$ and $N$ is normal with mean zero and variance $E(ss')^{-1}$. Hence, for any bowl-shaped loss function (see Van der Vaart, 1998, p. 113), an estimator of $\theta_0$ is efficient if its limit under $P_T$ is $N$. This permits us to establish the following result.

Proposition 5.2. Consider the class of regular estimators defined in Definition 5.1. Let $A^*$ be as in (5.2). The efficient score at $Q$ is given by
$$s(x, y) = A^*(x)\, \psi(x, y; \theta_0),$$
and the semiparametric efficiency bound is positive and equals (5.3).

Proposition 5.2 states that the optimal GMM estimator based on the moments (5.1) is asymptotically efficient in the class of regular estimators. This generalizes the results of Chamberlain (1987) to a dynamic context, of Hansen (1993) to nonlinear models, and of Wefelmeyer (1996) to nonlinear moments. The proof of Proposition 5.2 (given in the Appendix) consists in showing that $s \in \bar{\mathcal{G}}$ and that $s$ satisfies (5.7).

The semiparametric efficiency bound coincides with the GMM bound (5.3) for the following reason. According to Corollary 3.1, a GMM estimator based on the moments $A\psi$ has an asymptotic variance equal to the inverse of $E(GG')$ where $G$ satisfies $E(G) = 0$ and
$$E(A \nabla \psi) = E(G \psi' A). \qquad (5.8)$$
Note that if $\mathcal{A}$ is complete, then (5.8) is equivalent to
$$E(\nabla \psi \mid x) = E(G \psi' \mid x).$$
So, for the choice of $\mathcal{A}$ described in Proposition 5.1, we have equivalence between (5.8) and (5.6), and the asymptotic variances coincide. Moreover, the semiparametric efficiency bound is attainable by GMM based on the moments discussed in Proposition 5.1 and Lemma 5.2.
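The constructions of Section 5 can be sketched numerically in the simplest possible setting, assuming for illustration $\psi(X_t, Y_{t+1}; \theta) = Y_{t+1} - \theta X_t$ for a Gaussian AR(1), so that $\nabla\psi = -X_t$, $E(\psi^2 \mid X_t) = \sigma^2$, the optimal instrument (5.2) is proportional to $x$, and (5.1) has a closed-form solution. We compare it with a GMM estimator built from Lemma 5.2's instruments $a(\tau \Phi(x))$ with $a = \exp$ and $\Phi = \tan^{-1}$, $\tau$ on a small grid around 0, and identity weighting across instruments (the optimal weighting is omitted):

```python
import math
import random

# Illustrative model (an assumption, not the paper's general setting):
# Y_{t+1} = theta0 * X_t + eps, eps ~ N(0, 1), X_t = Y_t.
random.seed(0)
theta0, T = 0.6, 20000
y, data = 0.0, []
for _ in range(T):
    x, y = y, theta0 * y + random.gauss(0.0, 1.0)
    data.append((x, y))   # pairs (X_t, Y_{t+1})

# Optimal-instrument estimator: solving (5.1) with A*(x) proportional
# to x gives theta = sum(X_t * Y_{t+1}) / sum(X_t^2) in closed form.
theta_opt = (sum(x * ynext for x, ynext in data)
             / sum(x * x for x, _ in data))

# Lemma 5.2 instruments exp(tau * atan(x)): since psi is linear in
# theta, the identity-weighted GMM objective sum_tau (a_tau - theta*b_tau)^2
# is minimized at theta = sum(a_tau * b_tau) / sum(b_tau^2).
taus = [0.2 * k for k in range(-5, 6)]   # small hypercube I around 0
a_coef, b_coef = [], []
for tau in taus:
    w = [math.exp(tau * math.atan(x)) for x, _ in data]
    a_coef.append(sum(wi * ynext for wi, (_, ynext) in zip(w, data)) / T)
    b_coef.append(sum(wi * x for wi, (x, _) in zip(w, data)) / T)
theta_inst = (sum(a * b for a, b in zip(a_coef, b_coef))
              / sum(b * b for b in b_coef))
```

Both estimators are consistent for $\theta_0$; with a finer grid of $\tau$'s and the optimal weighting, the instrument-based estimator approaches the bound (5.3), as Proposition 5.1 and Lemma 5.2 assert.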

6. Conclusion

We started this paper by characterizing the asymptotic variance of GMM estimators in terms of an inner product in a RKHS. We then used this characterization to derive two important results on efficiency. The two main contributions of the paper are: a) give a necessary and sufficient condition for the GMM estimator to be as efficient as the MLE; b) derive the semiparametric efficiency bound in a class of regular estimators where the only knowledge of the process is that (i) it is Markov and (ii) it satisfies a conditional moment condition. It turns out that this bound coincides with the GMM bound, as is true for iid data.

This paper sheds new light on how to reach efficiency. One way is to use the true score (in parametric models, Section 4) or the efficient score (in semiparametric models, Section 5). The other way is to use an infinite set (countable or not) of moment restrictions that is sufficiently rich to encompass the score or the efficient score. When the score is unknown or intractable, the only way for GMM to be as efficient as the MLE is to use an infinity of moment conditions, which boils down to nonparametrically estimating the unknown score. The same is true for semiparametric models: econometricians concerned with efficiency usually estimate the optimal instrument nonparametrically (Newey, 1993). A less well-known way to reach the same efficiency is to use an infinity of moment restrictions, for instance a continuum of moments based on the exponential function. While the second approach does not require any nonparametric procedure, it still requires the introduction of a smoothing parameter in order to invert the infinite-dimensional covariance operator.

It should be noted that although the focus of our paper is on GMM, some of our efficiency results apply to other estimators.
Recently, there has been increasing interest in alternatives to GMM based on entropy, such as the continuous updating estimator (Hansen, Heaton, and Yaron, 1996), the empirical likelihood estimator (Imbens, 1997), and the exponential tilting estimator (Kitamura and Stutzer, 1997). These estimators fall into the category of generalized empirical likelihood (GEL) estimators. Newey and Smith (2004) show that GEL estimators have better small-sample properties than the GMM estimator and therefore may be preferable to GMM. Remark that GEL estimators have the same asymptotic variance as the optimal GMM estimator based on the same moment conditions. Thus the characterization of the asymptotic variance

and the efficiency results we gave here also apply to GEL estimators. Throughout the paper, we assumed that there was no nuisance parameter. Allowing for the presence of (possibly infinite-dimensional) nuisance parameters could be done along the lines of Newey (2004). We leave this extension for future research.

A. Useful definitions and results on operators

For a review of operators and reproducing kernel Hilbert spaces (RKHS), see Saitoh (1997), Berlinet and Thomas-Agnan (2004), and Carrasco, Florens, and Renault (2007). Here, we recall only the basic results needed in this paper.

Definition A.1. A function $k : I \times I \to \mathbb{C}$ is called a positive type function if
$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_i \bar{a}_j\, k(\tau_i, \tau_j) \geq 0$$
for any integer $n$, $(a_1, \ldots, a_n) \in \mathbb{C}^n$, and $(\tau_1, \ldots, \tau_n) \in I^n$.

There are various definitions of the RKHS associated with $k$.

Definition A.2. Let $k : I \times I \to \mathbb{C}$ be a positive type function. Then $H(K)$, the RKHS associated with $k$, is defined as the Hilbert space of functions $f$ on $I$ with inner product $(\cdot, \cdot)_K$ that satisfies the properties: (i) $k(\cdot, \tau) \in H(K)$; (ii) (reproducing property) $f(\tau) = (f, k(\cdot, \tau))_K$.

Remark 1. If $k$ is a covariance kernel, then it is necessarily of positive type.

Remark 2. The reproducing property implies that $\{k(\cdot, \tau) : \tau \in I\}$ spans $H(K)$. Indeed, $(f, k(\cdot, \tau))_K = 0$ for all $\tau \in I$ implies $f = 0$.

Remark 3. Definition A.2 has been written without reference to an integral operator and, for that matter, to a measure. It means that $H(K)$ is actually independent of the choice of $\pi$ introduced in Section 2.

In the GMM setting, we use the RKHS machinery in connection with a covariance operator. Recall that the space $L^2(I, \pi)$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$ has been defined in Section 2. Consider an integral operator $K$ defined as
$$K : L^2(I, \pi) \to L^2(I, \pi), \qquad f \mapsto g(\tau_1) = \int_I k(\tau_1, \tau_2)\, f(\tau_2)\, \pi(d\tau_2).$$
The operator $K$ is self-adjoint, i.e., $\langle Kf, g \rangle = \langle f, Kg \rangle$, which is satisfied if $k(\tau_1, \tau_2) = \overline{k(\tau_2, \tau_1)}$. $K$ is said to be a Hilbert-Schmidt operator if
$$\int \int |k(\tau_1, \tau_2)|^2\, \pi(d\tau_1)\, \pi(d\tau_2) < \infty.$$

A consequence is that the operator $K$ has a countable spectrum. Let $\phi_j, \lambda_j$, $j = 1, 2, \ldots$, be the eigenfunctions and eigenvalues of $K$. An alternative characterization of the RKHS, in terms of the spectral decomposition of $K$, is the following.

Definition A.3. The RKHS associated with a self-adjoint Hilbert-Schmidt operator $K$ on $L^2(I, \pi)$ is defined as the Hilbert space
$$H(K) = \left\{ f \in L^2(I, \pi) : \sum_j \frac{\left|\langle f, \phi_j \rangle\right|^2}{\lambda_j} < \infty \right\}$$
with inner product
$$(f, g)_K = \sum_j \frac{\langle f, \phi_j \rangle\, \overline{\langle g, \phi_j \rangle}}{\lambda_j},$$
where the convention $1/0 = 0$ is adopted if some of the $\lambda_j$ are equal to zero.

Remark 4. A covariance operator is necessarily self-adjoint.

Remark 5. $H(K)$ coincides with the range of $K^{1/2}$. Moreover, if 0 is not an eigenvalue of $K$, then $(f, g)_K = \langle K^{-1/2} f, K^{-1/2} g \rangle$.

Remark 6. The eigenvalues $\lambda_j$ and eigenfunctions $\phi_j$ depend on the measure $\pi$. In spite of this, both Definitions A.2 and A.3 define the same space and the same inner product, provided $k$ is square integrable with respect to $\pi$. A representation of the inner product in $H(K)$ that is independent of the arbitrary measure $\pi$ would be preferable. It is given in Proposition 3.1.
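Definitions A.2 and A.3 can be checked concretely in a finite-dimensional special case, assuming for illustration that $I$ is a finite set with counting measure, so that $K$ is simply a positive definite matrix: Remark 5 then reduces the inner product to $(f, g)_K = f' K^{-1} g$, and the reproducing property $(f, k(\cdot, \tau_j))_K = f(\tau_j)$ can be verified directly:

```python
# Finite-dimensional sketch of the RKHS inner product: with I a finite
# set, k(.,tau_j) is the j-th column of the kernel matrix K and
# (f, g)_K = f' K^{-1} g, so (f, k(., j))_K = f' K^{-1} K e_j = f(j).
# K below is an arbitrary positive definite matrix (illustrative).

def solve(K, b):
    """Gaussian elimination with partial pivoting: return K^{-1} b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(K)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= factor * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

K = [[2.0, 1.0, 0.5],
     [1.0, 2.0, 1.0],
     [0.5, 1.0, 2.0]]          # symmetric, positive definite
f = [0.3, -1.2, 0.7]

def rkhs_inner(f, g):
    Kinv_g = solve(K, g)       # K^{-1} g
    return sum(fi * gi for fi, gi in zip(f, Kinv_g))

# reproducing property: pairing f with the j-th kernel column returns f[j]
col = lambda j: [K[i][j] for i in range(3)]
reproduced = [rkhs_inner(f, col(j)) for j in range(3)]
```

In the infinite-dimensional case of Definition A.3 the matrix inverse is replaced by the spectral sum over $(\phi_j, \lambda_j)$, which is what makes a regularized (smoothed) inverse necessary in the GMM application.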

for the space of all elements $g_n$, i.e., the space of all finite linear combinations of elements $k(\cdot, \tau)$, $\tau \in I$. On the other hand, each element of $L^2(h)$ can be written as
$$G_n = \sum_{j=1}^{n} \alpha_j\, h(\tau_j)$$
for some integer $n$, reals $\alpha_1, \ldots, \alpha_n$, and points $\tau_1, \ldots, \tau_n$ in $I$, or as its limit in mean square, denoted $G = \mathrm{l.i.m.}\, G_n$. Again, it is convenient to define $L(h(\tau) : \tau \in I)$ as the space of all finite linear combinations of elements $h(\tau)$, $\tau \in I$. We construct the correspondence $J$ between $L(k(\cdot, \tau) : \tau \in I)$ and $L(h(\tau) : \tau \in I)$ such that $J(k(\cdot, \tau_j)) = h(\tau_j)$, i.e.,
$$J\!\left( \sum_{j=1}^{n} \alpha_j\, k(\cdot, \tau_j) \right) = \sum_{j=1}^{n} \alpha_j\, h(\tau_j)$$
for any integer $n$, reals $\alpha_1, \ldots, \alpha_n$, and points $\tau_1, \ldots, \tau_n$ in $I$. We want to show that this correspondence is one-to-one, linear, and preserves the inner product.

(i) $J$ is one-to-one. Let $g_n = \sum_{j=1}^{n} \alpha_j k(\cdot, \tau_j)$ and $J(g_n) = \sum_{j=1}^{n} \alpha_j h(\tau_j)$. We need to show that two different representations of an element,
$$g_n = \sum_{j=1}^{n} \beta_j\, k(\cdot, \tau_j) = \sum_{j=1}^{n} \beta_j'\, k(\cdot, \tau_j'),$$
lead to the same value,
$$J(g_n) = \sum_{j=1}^{n} \beta_j\, h(\tau_j) = \sum_{j=1}^{n} \beta_j'\, h(\tau_j').$$
Denote
$$\sum_{i=1}^{n} \alpha_i\, k(\cdot, \tau_i) \equiv \sum_{j=1}^{n} \beta_j\, k(\cdot, \tau_j) - \sum_{j=1}^{n} \beta_j'\, k(\cdot, \tau_j').$$
Then it suffices to show that
$$\sum_{i=1}^{n} \alpha_i\, k(\cdot, \tau_i) = 0 \implies \sum_{i=1}^{n} \alpha_i\, h(\tau_i) = 0.$$


More information

Efficiency of Profile/Partial Likelihood in the Cox Model

Efficiency of Profile/Partial Likelihood in the Cox Model Efficiency of Profile/Partial Likelihood in the Cox Model Yuichi Hirose School of Mathematics, Statistics and Operations Research, Victoria University of Wellington, New Zealand Summary. This paper shows

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data

High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data Song Xi CHEN Guanghua School of Management and Center for Statistical Science, Peking University Department

More information

The properties of L p -GMM estimators

The properties of L p -GMM estimators The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion

More information

Microeconomic Theory-I Washington State University Midterm Exam #1 - Answer key. Fall 2016

Microeconomic Theory-I Washington State University Midterm Exam #1 - Answer key. Fall 2016 Microeconomic Theory-I Washington State University Midterm Exam # - Answer key Fall 06. [Checking properties of preference relations]. Consider the following preference relation de ned in the positive

More information

Estimation of Dynamic Nonlinear Random E ects Models with Unbalanced Panels.

Estimation of Dynamic Nonlinear Random E ects Models with Unbalanced Panels. Estimation of Dynamic Nonlinear Random E ects Models with Unbalanced Panels. Pedro Albarran y Raquel Carrasco z Jesus M. Carro x June 2014 Preliminary and Incomplete Abstract This paper presents and evaluates

More information

Approximately Most Powerful Tests for Moment Inequalities

Approximately Most Powerful Tests for Moment Inequalities Approximately Most Powerful Tests for Moment Inequalities Richard C. Chiburis Department of Economics, Princeton University September 26, 2008 Abstract The existing literature on testing moment inequalities

More information

MC3: Econometric Theory and Methods. Course Notes 4

MC3: Econometric Theory and Methods. Course Notes 4 University College London Department of Economics M.Sc. in Economics MC3: Econometric Theory and Methods Course Notes 4 Notes on maximum likelihood methods Andrew Chesher 25/0/2005 Course Notes 4, Andrew

More information

Microeconometrics: Clustering. Ethan Kaplan

Microeconometrics: Clustering. Ethan Kaplan Microeconometrics: Clustering Ethan Kaplan Gauss Markov ssumptions OLS is minimum variance unbiased (MVUE) if Linear Model: Y i = X i + i E ( i jx i ) = V ( i jx i ) = 2 < cov i ; j = Normally distributed

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference

Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Nonparametric Identi cation of a Binary Random Factor in Cross Section Data

Nonparametric Identi cation of a Binary Random Factor in Cross Section Data Nonparametric Identi cation of a Binary Random Factor in Cross Section Data Yingying Dong and Arthur Lewbel California State University Fullerton and Boston College Original January 2009, revised March

More information

Estimation and Inference with Weak, Semi-strong, and Strong Identi cation

Estimation and Inference with Weak, Semi-strong, and Strong Identi cation Estimation and Inference with Weak, Semi-strong, and Strong Identi cation Donald W. K. Andrews Cowles Foundation Yale University Xu Cheng Department of Economics University of Pennsylvania This Version:

More information

Introduction to the Mathematical and Statistical Foundations of Econometrics Herman J. Bierens Pennsylvania State University

Introduction to the Mathematical and Statistical Foundations of Econometrics Herman J. Bierens Pennsylvania State University Introduction to the Mathematical and Statistical Foundations of Econometrics 1 Herman J. Bierens Pennsylvania State University November 13, 2003 Revised: March 15, 2004 2 Contents Preface Chapter 1: Probability

More information

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Statistical Inference with Reproducing Kernel Hilbert Space Kenji Fukumizu Institute of Statistical Mathematics, ROIS Department

More information

CAE Working Paper # A New Asymptotic Theory for Heteroskedasticity-Autocorrelation Robust Tests. Nicholas M. Kiefer and Timothy J.

CAE Working Paper # A New Asymptotic Theory for Heteroskedasticity-Autocorrelation Robust Tests. Nicholas M. Kiefer and Timothy J. CAE Working Paper #05-08 A New Asymptotic Theory for Heteroskedasticity-Autocorrelation Robust Tests by Nicholas M. Kiefer and Timothy J. Vogelsang January 2005. A New Asymptotic Theory for Heteroskedasticity-Autocorrelation

More information

Supplemental Material 1 for On Optimal Inference in the Linear IV Model

Supplemental Material 1 for On Optimal Inference in the Linear IV Model Supplemental Material 1 for On Optimal Inference in the Linear IV Model Donald W. K. Andrews Cowles Foundation for Research in Economics Yale University Vadim Marmer Vancouver School of Economics University

More information

Nonparametric Estimation of Wages and Labor Force Participation

Nonparametric Estimation of Wages and Labor Force Participation Nonparametric Estimation of Wages Labor Force Participation John Pepper University of Virginia Steven Stern University of Virginia May 5, 000 Preliminary Draft - Comments Welcome Abstract. Model Let y

More information

Stochastic Processes (Master degree in Engineering) Franco Flandoli

Stochastic Processes (Master degree in Engineering) Franco Flandoli Stochastic Processes (Master degree in Engineering) Franco Flandoli Contents Preface v Chapter. Preliminaries of Probability. Transformation of densities. About covariance matrices 3 3. Gaussian vectors

More information

CHAPTER VIII HILBERT SPACES

CHAPTER VIII HILBERT SPACES CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)

More information

Economics 620, Lecture 20: Generalized Method of Moment (GMM)

Economics 620, Lecture 20: Generalized Method of Moment (GMM) Economics 620, Lecture 20: Generalized Method of Moment (GMM) Nicholas M. Kiefer Cornell University Professor N. M. Kiefer (Cornell University) Lecture 20: GMM 1 / 16 Key: Set sample moments equal to theoretical

More information

Chapter 2. GMM: Estimating Rational Expectations Models

Chapter 2. GMM: Estimating Rational Expectations Models Chapter 2. GMM: Estimating Rational Expectations Models Contents 1 Introduction 1 2 Step 1: Solve the model and obtain Euler equations 2 3 Step 2: Formulate moment restrictions 3 4 Step 3: Estimation and

More information

ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM

ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM c 2007-2016 by Armand M. Makowski 1 ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM 1 The basic setting Throughout, p, q and k are positive integers. The setup With

More information

THE FOURTH-ORDER BESSEL-TYPE DIFFERENTIAL EQUATION

THE FOURTH-ORDER BESSEL-TYPE DIFFERENTIAL EQUATION THE FOURTH-ORDER BESSEL-TYPE DIFFERENTIAL EQUATION JYOTI DAS, W.N. EVERITT, D.B. HINTON, L.L. LITTLEJOHN, AND C. MARKETT Abstract. The Bessel-type functions, structured as extensions of the classical Bessel

More information

SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER. Donald W. K. Andrews. August 2011

SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER. Donald W. K. Andrews. August 2011 SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER By Donald W. K. Andrews August 2011 COWLES FOUNDATION DISCUSSION PAPER NO. 1815 COWLES FOUNDATION FOR RESEARCH IN ECONOMICS

More information

CCP Estimation. Robert A. Miller. March Dynamic Discrete Choice. Miller (Dynamic Discrete Choice) cemmap 6 March / 27

CCP Estimation. Robert A. Miller. March Dynamic Discrete Choice. Miller (Dynamic Discrete Choice) cemmap 6 March / 27 CCP Estimation Robert A. Miller Dynamic Discrete Choice March 2018 Miller Dynamic Discrete Choice) cemmap 6 March 2018 1 / 27 Criteria for Evaluating Estimators General principles to apply when assessing

More information

ESTIMATION AND INFERENCE WITH WEAK, SEMI-STRONG, AND STRONG IDENTIFICATION. Donald W. K. Andrews and Xu Cheng. October 2010 Revised July 2011

ESTIMATION AND INFERENCE WITH WEAK, SEMI-STRONG, AND STRONG IDENTIFICATION. Donald W. K. Andrews and Xu Cheng. October 2010 Revised July 2011 ESTIMATION AND INFERENCE WITH WEAK, SEMI-STRONG, AND STRONG IDENTIFICATION By Donald W. K. Andrews and Xu Cheng October 1 Revised July 11 COWLES FOUNDATION DISCUSSION PAPER NO. 1773R COWLES FOUNDATION

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Winter 2016 Instructor: Victor Aguirregabiria

ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Winter 2016 Instructor: Victor Aguirregabiria ECOOMETRICS II (ECO 24S) University of Toronto. Department of Economics. Winter 26 Instructor: Victor Aguirregabiria FIAL EAM. Thursday, April 4, 26. From 9:am-2:pm (3 hours) ISTRUCTIOS: - This is a closed-book

More information

Robust Solutions to Multi-Objective Linear Programs with Uncertain Data

Robust Solutions to Multi-Objective Linear Programs with Uncertain Data Robust Solutions to Multi-Objective Linear Programs with Uncertain Data M.A. Goberna yz V. Jeyakumar x G. Li x J. Vicente-Pérez x Revised Version: October 1, 2014 Abstract In this paper we examine multi-objective

More information

A Bayesian perspective on GMM and IV

A Bayesian perspective on GMM and IV A Bayesian perspective on GMM and IV Christopher A. Sims Princeton University sims@princeton.edu November 26, 2013 What is a Bayesian perspective? A Bayesian perspective on scientific reporting views all

More information

Statistical Convergence of Kernel CCA

Statistical Convergence of Kernel CCA Statistical Convergence of Kernel CCA Kenji Fukumizu Institute of Statistical Mathematics Tokyo 106-8569 Japan fukumizu@ism.ac.jp Francis R. Bach Centre de Morphologie Mathematique Ecole des Mines de Paris,

More information

2 Statistical Estimation: Basic Concepts

2 Statistical Estimation: Basic Concepts Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 2 Statistical Estimation:

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

On Standard Inference for GMM with Seeming Local Identi cation Failure

On Standard Inference for GMM with Seeming Local Identi cation Failure On Standard Inference for GMM with Seeming Local Identi cation Failure Ji Hyung Lee y Zhipeng Liao z First Version: April 4; This Version: December, 4 Abstract This paper studies the GMM estimation and

More information

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy Computation Of Asymptotic Distribution For Semiparametric GMM Estimators Hidehiko Ichimura Graduate School of Public Policy and Graduate School of Economics University of Tokyo A Conference in honor of

More information

GMM and SMM. 1. Hansen, L Large Sample Properties of Generalized Method of Moments Estimators, Econometrica, 50, p

GMM and SMM. 1. Hansen, L Large Sample Properties of Generalized Method of Moments Estimators, Econometrica, 50, p GMM and SMM Some useful references: 1. Hansen, L. 1982. Large Sample Properties of Generalized Method of Moments Estimators, Econometrica, 50, p. 1029-54. 2. Lee, B.S. and B. Ingram. 1991 Simulation estimation

More information

Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation

Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation Xu Cheng y Department of Economics Yale University First Draft: August, 27 This Version: December 28 Abstract In this paper,

More information

ON THE ASYMPTOTIC BEHAVIOR OF THE PRINCIPAL EIGENVALUES OF SOME ELLIPTIC PROBLEMS

ON THE ASYMPTOTIC BEHAVIOR OF THE PRINCIPAL EIGENVALUES OF SOME ELLIPTIC PROBLEMS ON THE ASYMPTOTIC BEHAVIOR OF THE PRINCIPAL EIGENVALUES OF SOME ELLIPTIC PROBLEMS TOMAS GODOY, JEAN-PIERRE GOSSE, AND SOFIA PACKA Abstract. This paper is concerned with nonselfadjoint elliptic problems

More information

Stochastic solutions of nonlinear pde s: McKean versus superprocesses

Stochastic solutions of nonlinear pde s: McKean versus superprocesses Stochastic solutions of nonlinear pde s: McKean versus superprocesses R. Vilela Mendes CMAF - Complexo Interdisciplinar, Universidade de Lisboa (Av. Gama Pinto 2, 1649-3, Lisbon) Instituto de Plasmas e

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2 1 A Good Spectral Theorem c1996, Paul Garrett, garrett@math.umn.edu version February 12, 1996 1 Measurable Hilbert bundles Measurable Banach bundles Direct integrals of Hilbert spaces Trivializing Hilbert

More information

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs

More information

Estimation and Inference with Weak Identi cation

Estimation and Inference with Weak Identi cation Estimation and Inference with Weak Identi cation Donald W. K. Andrews Cowles Foundation Yale University Xu Cheng Department of Economics University of Pennsylvania First Draft: August, 2007 Revised: March

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Generalized Method of Moments Estimation

Generalized Method of Moments Estimation Generalized Method of Moments Estimation Lars Peter Hansen March 0, 2007 Introduction Generalized methods of moments (GMM) refers to a class of estimators which are constructed from exploiting the sample

More information

Representations of moderate growth Paul Garrett 1. Constructing norms on groups

Representations of moderate growth Paul Garrett 1. Constructing norms on groups (December 31, 2004) Representations of moderate growth Paul Garrett Representations of reductive real Lie groups on Banach spaces, and on the smooth vectors in Banach space representations,

More information

Nonlinear Programming (NLP)

Nonlinear Programming (NLP) Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume

More information

Minimax Estimation of a nonlinear functional on a structured high-dimensional model

Minimax Estimation of a nonlinear functional on a structured high-dimensional model Minimax Estimation of a nonlinear functional on a structured high-dimensional model Eric Tchetgen Tchetgen Professor of Biostatistics and Epidemiologic Methods, Harvard U. (Minimax ) 1 / 38 Outline Heuristics

More information

Research Article Optimal Portfolio Estimation for Dependent Financial Returns with Generalized Empirical Likelihood

Research Article Optimal Portfolio Estimation for Dependent Financial Returns with Generalized Empirical Likelihood Advances in Decision Sciences Volume 2012, Article ID 973173, 8 pages doi:10.1155/2012/973173 Research Article Optimal Portfolio Estimation for Dependent Financial Returns with Generalized Empirical Likelihood

More information

Problem set 1 - Solutions

Problem set 1 - Solutions EMPIRICAL FINANCE AND FINANCIAL ECONOMETRICS - MODULE (8448) Problem set 1 - Solutions Exercise 1 -Solutions 1. The correct answer is (a). In fact, the process generating daily prices is usually assumed

More information

Chapter 2 Wick Polynomials

Chapter 2 Wick Polynomials Chapter 2 Wick Polynomials In this chapter we consider the so-called Wick polynomials, a multi-dimensional generalization of Hermite polynomials. They are closely related to multiple Wiener Itô integrals.

More information

Markov-Switching Models with Endogenous Explanatory Variables. Chang-Jin Kim 1

Markov-Switching Models with Endogenous Explanatory Variables. Chang-Jin Kim 1 Markov-Switching Models with Endogenous Explanatory Variables by Chang-Jin Kim 1 Dept. of Economics, Korea University and Dept. of Economics, University of Washington First draft: August, 2002 This version:

More information

An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic

An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic Chapter 6 ESTIMATION OF THE LONG-RUN COVARIANCE MATRIX An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic standard errors for the OLS and linear IV estimators presented

More information

On the Power of Tests for Regime Switching

On the Power of Tests for Regime Switching On the Power of Tests for Regime Switching joint work with Drew Carter and Ben Hansen Douglas G. Steigerwald UC Santa Barbara May 2015 D. Steigerwald (UCSB) Regime Switching May 2015 1 / 42 Motivating

More information