NUCLEAR NORM PENALIZED ESTIMATION OF INTERACTIVE FIXED EFFECT MODELS
HYUNGSIK ROGER MOON AND MARTIN WEIDNER

Incomplete and Work in Progress.

1. Introduction

Interactive fixed effects panel regression models have been widely studied in the recent panel literature. The interactive fixed effect model parsimoniously represents heterogeneity in both dimensions of the panel and includes the conventional additive error component model as a special case. One widely used estimation method for interactive fixed effects panel regressions is the fixed effect approach (or principal component approach in a linear model), which treats the interactive fixed effects in a factor form as parameters to estimate. For example, Bai (2009), Moon and Weidner (2015a), and Moon and Weidner (2015b) investigated asymptotic properties of the fixed effect least squares estimator for linear panel regression models with interactive fixed effects, and Fernández-Val and Weidner (2016) studied the fixed effect maximum likelihood estimator for nonlinear panel regressions with interactive fixed effects. The main advantage of the fixed effect approach is that it does not restrict the relationship between the unobserved heterogeneity and the observed explanatory variables. On the other hand, computing the fixed effects estimator requires solving a non-convex optimization problem with respect to high dimensional fixed effects parameters. Also, establishing consistency of the fixed effect estimator as an M-estimator is quite challenging due to the presence of incidental parameters in both dimensions of the panel.

In this paper, we investigate a nuclear (trace) norm penalized estimator of interactive fixed effects models that is motivated by these two difficulties of the fixed effects estimator of interactive fixed effects panel regressions. This paper is incomplete and very preliminary.
We provide an asymptotic analysis of the nuclear norm penalized estimator in a baseline model where the relationship between the dependent variable and the regressors is linear, the panel is balanced, and the rank of the interactive fixed effects is finite and known.

Date: January 28, 2016.
Other estimation methods in the interactive fixed effects literature include the quasi-difference approach in Holtz-Eakin, Newey, and Rosen (1988) and the common correlated random effect method (e.g., Pesaran (2006)).
Extensions are now in progress. In the extensions, we consider more general models that allow heterogeneous regression coefficients, nonlinearity, and unbalanced panels. We also work on estimators whose penalty threshold is small, which is closely related to an unknown dimension of the interactive fixed effects. Potential empirical applications we consider include estimation of students' education performance with teacher and student specific effects (generalized value added models), or modeling college admission decisions as binary choice (with college and student specific effects).

2. General Model

Let m(w_it, z_it) be the objective function for i = 1, ..., N and t = 1, ..., T, where w_it is observed, and z_it = g_it(x_it) + λ_i' f_t is a scalar single index that depends on the observed covariates x_it and the unknown g_it(·), with λ ∈ R^{N×R} and f ∈ R^{T×R}. The function m(·,·) is known. We define m_it(z) = m(w_it, z), which is a random function of the single index. We require that m_it(z) is convex in z, and we assume that the true parameters solve the following population first order condition for all i, t:

    E[ ∂_z m_it(z) ]|_{z = z_it^0} = 0,    z_it^0 = g_it^0(x_it) + λ_i^0' f_t^0.

The functions g_it(x_it) can, for example, be linearly parameterized in terms of parameters β, as in a linear single index. Examples for m_it(z) include:

(i) (Weighted) Least Squares Estimation with Interactive Effects: m_it(z) = s_it (y_it − z)^2, with w_it = (y_it, s_it). The weights s_it could be population weights, or could be binary variables that indicate missing data.

(ii) MLE: m_it(z) = −log p(y_it | z), such that log p(y_it | z) is concave in z.

(iii) (Smoothed) Quantile Regression. Without smoothing: m_it(z) = ρ_τ(y_it − z), where ρ_τ(u) = u (τ − 1{u < 0}).

Examples for g_it(x_it) are:

(i) Homogeneous Coefficients: g_it(x_it) = g(x_it) = β' x_it.

(ii) Heterogeneous Coefficients: g_it(x_it) = (α_i + γ_t)' x_it, where β_it = α_i + γ_t and β = [β_it]_it.
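As a concrete illustration of the example objective functions above (a minimal sketch; the function names, and the logistic link used for the MLE case, are our own choices, not specified in the text):

```python
import numpy as np

def m_ls(w, z):
    """(i) Weighted least squares: w = (y, s); s can be a population weight
    or a 0/1 indicator for non-missing data."""
    y, s = w
    return s * (y - z) ** 2

def m_mle(y, z):
    """(ii) MLE example for binary y with a logistic link: -log p(y | z).
    Convex in z because log p(y | z) is concave in z."""
    return np.logaddexp(0.0, z) - y * z   # = -[y*z - log(1 + exp(z))], computed stably

def rho_tau(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def m_qr(y, z, tau):
    """(iii) Quantile regression, without smoothing: rho_tau(y - z)."""
    return rho_tau(y - z, tau)
```

Each of these is convex in the single index z, which is the property the penalized approach below relies on.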
The fixed effects estimator studied in the existing literature is an M-estimator that solves the following minimization problem:

    (β̂, λ̂, f̂) = argmin_{β ∈ B, λ ∈ R^{N×R}, f ∈ R^{T×R}} Q_NT(β, λf'),

    Q_NT(β, λf') = (1/NT) Σ_{i=1}^N Σ_{t=1}^T m_it((β·X)_it + λ_i' f_t).

There are two difficulties with the M-estimator (β̂, λ̂, f̂): one is computational and the other is theoretical.

(i) Computational Issue: Calculating (β̂, λ̂, f̂) is complicated by the fact that the objective function Q_NT(β, λf') is non-convex in the parameters and can have multiple local minima. For fixed β, the optimization over λ and f is only easy for non-weighted balanced least squares (principal components); for weighted least squares (including missing data), MLE, and quantile regression it becomes challenging. In any case, after profiling out λ and f, the resulting profile objective is in general still non-convex in β, so that high-dimensionality of β also causes potentially serious computational problems.

(ii) Theoretical Issue: Consistency of (β̂, λ̂, f̂) has only been shown in some cases; for many of the above examples the consistency problem is unresolved. For the penalized estimator that we consider, we can show consistency for a much larger class of models. The consistency then carries over to the improved estimator that we consider in the second step. That improved estimator may often be (asymptotically) equivalent to (β̂, λ̂, f̂), but we do not show this, and it may not always be the case.

3. Penalized Estimation

In the general model in Section 2, notice that only the N×T matrix Γ = λf' enters into the objective function Q_NT(β, Γ). When the function m_it(z) is convex in z and z is linear in β and Γ, the objective function Q_NT(β, Γ) is a convex function of the parameters (β, Γ).

Assumption 1 (Convexity of Q_NT(β, Γ)). We assume that the objective function Q_NT(β, Γ) is convex in β and Γ.
Even though the objective function Q_NT(β, Γ) is convex in all parameters, the optimization problem with the rank restriction,

    min_β min_Γ Q_NT(β, Γ)   s.t.   rank(Γ) = (or ≤) R,        (3.1)

is non-convex, because the constraint rank(Γ) = (or ≤) R is non-convex. This implies that the objective function Q_NT(β, λf') is non-convex in (β, λ, f).
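The non-convexity of the rank constraint can be seen in a two-by-two example (our own illustration, not from the text): a convex combination of two rank-1 matrices can have rank 2, while the nuclear norm of a convex combination never exceeds the convex combination of the nuclear norms.

```python
import numpy as np

# The set {G : rank(G) <= 1} is not convex: the average of these two
# rank-1 matrices has rank 2.
G1 = np.outer([1.0, 0.0], [1.0, 0.0])   # [[1, 0], [0, 0]]
G2 = np.outer([0.0, 1.0], [0.0, 1.0])   # [[0, 0], [0, 1]]
avg = 0.5 * G1 + 0.5 * G2               # [[0.5, 0], [0, 0.5]], rank 2

nuclear_norm = lambda A: np.linalg.svd(A, compute_uv=False).sum()
rank_avg = np.linalg.matrix_rank(avg)   # 2 > max(rank(G1), rank(G2)) = 1
```

In contrast, nuclear_norm(avg) equals 1, which does not exceed 0.5·nuclear_norm(G1) + 0.5·nuclear_norm(G2); this is the convexity that the penalized estimator in the next section exploits.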
Notice that the rank constraint in (3.1) can be viewed as an L0 penalty on the singular values of Γ. Replacing the L0 penalty with the convex L1 penalty, we obtain the convex objective function

    Q_ψ(β, Γ) = Q_NT(β, Γ) + (2ψ/√NT) Σ_r s_r(Γ),

where ψ = ψ_NT > 0 is a penalty parameter that needs to be chosen and s_r(Γ) denotes the r-th largest singular value of Γ. The penalty function ||Γ||_* := Σ_r s_r(Γ) is a matrix norm called the trace norm, nuclear norm, Schatten 1-norm, or Ky Fan n-norm in the literature. Notice that the nuclear norm can equivalently be defined as ||Γ||_* = Tr[(ΓΓ')^{1/2}], or ||Γ||_* = sup_{||B||_2 ≤ 1} Tr(BΓ). It satisfies ||A + B||_* ≤ ||A||_* + ||B||_* (triangle inequality) and ||AB||_* ≤ ||A||_* ||B||_* (submultiplicativity).[2] By the definition of a norm we have, for c ∈ [0, 1],

    ||c Γ_1 + (1 − c) Γ_2||_* ≤ ||c Γ_1||_* + ||(1 − c) Γ_2||_* = c ||Γ_1||_* + (1 − c) ||Γ_2||_*.

This implies that the penalty function ||Γ||_* is convex in Γ. The estimator we consider in this paper is the following nuclear norm penalized estimator:

    (β̂_ψ, Γ̂_ψ) = argmin_{β ∈ B, Γ ∈ R^{N×T}} Q_ψ(β, Γ).

4. Asymptotics of the Nuclear Norm Penalized Estimator in a Baseline Model

4.1. Baseline Model. Before we discuss the general model, we consider the following baseline model:

    Y_it = β' X_it + λ_i' f_t + e_it,

which we write in N×T matrix notation as Y = β·X + λf' + e. The baseline model assumes the following:

(i) The panel is balanced, that is, Y_it and X_it are observed for all pairs (i, t).

(ii) The model is linear, as given above.

(iii) The penalization parameter ψ = ψ_NT is chosen such that as N, T → ∞, the probability of rank[Γ̂(b, ψ_NT)] = R_0 goes to one, for all b in an appropriate shrinking neighborhood of β. This means that asymptotically we estimate the correct number of factors. In practice, this essentially means that R_0 should be known.

[2] The last inequality can be strengthened to ||AB||_* ≤ ||A||_* ||B||_2.
Defining Γ := λf', the model can equivalently be written as

    Y = β·X + Γ + e,    rank(Γ) ≤ R_0.        (4.1)

Here, β and Γ are the true unknown parameters, and we usually use b and G for generic parameter values. R_0 denotes the true number of factors.

Notation: ||·||_2 denotes the spectral norm and ||·||_F the Frobenius norm of a matrix; s_r(·) is the r-th largest singular value of a matrix; A† is the Moore-Penrose pseudo-inverse of A.

Least Squares (or Fixed Effects) Estimator. For given R ∈ {0, 1, 2, ...} the least squares estimator is given by

    [β̂_LS(R), Γ̂_LS(R)] := argmin_{b ∈ R^K, G ∈ R^{N×T}: rank(G) ≤ R} (1/NT) ||Y − b·X − G||_F².

Alternatively, we can obtain β̂_LS(R) as

    β̂_LS(R) = argmin_{b ∈ R^K} Q_NT(b, R),
    Q_NT(b, R) := min_{G ∈ R^{N×T}: rank(G) ≤ R} (1/NT) ||Y − b·X − G||_F².

Here, Q_NT(b, R) is the profile least squares objective function. We also define

    Γ̂_LS(b, R) := argmin_{G ∈ R^{N×T}: rank(G) ≤ R} (1/NT) ||Y − b·X − G||_F².

Nuclear Norm Penalized Estimator. For given ψ > 0 we define

    [β̂(ψ), Γ̂(ψ)] := argmin_{b ∈ R^K, G ∈ R^{N×T}} [ (1/NT) ||Y − b·X − G||_F² + (2ψ/√NT) ||G||_* ].

The additional factor 2 in the penalty term turns out to be convenient below. Alternatively, we can obtain β̂(ψ) as

    β̂(ψ) = argmin_{b ∈ R^K} S_NT(b, ψ),
    S_NT(b, ψ) := min_{G ∈ R^{N×T}} [ (1/NT) ||Y − b·X − G||_F² + (2ψ/√NT) ||G||_* ].

Here, S_NT(b, ψ) is the profile penalized objective function. We also define

    Γ̂(b, ψ) := argmin_{G ∈ R^{N×T}} [ (1/NT) ||Y − b·X − G||_F² + (2ψ/√NT) ||G||_* ].

4.2. Asymptotic Results for the Baseline Case.

Least Squares Estimator. Here, we briefly summarize some known results on the least squares profile objective Q_NT(b, R), and on the corresponding estimators β̂_LS(R) and Γ̂_LS(R).
Those results will be used afterwards to derive the asymptotic properties of the penalized estimator. For the profile least squares objective it is well known that

    Q_NT(b, R) := Σ_{r=R+1}^{min(N,T)} [s_r((Y − b·X)/√NT)]².

This representation of the profile objective in terms of singular values, or eigenvalues,[3] is useful both for the numerical evaluation of Q_NT(b, R) and for the derivation of the asymptotic properties of the least squares estimator.

Assumption 2. As N, T → ∞ we have

(i) s_{R_0}(Γ/√NT) →_P c > 0,
(ii) ||e||_2 = O_P(√max(N, T)),
(iii) ||X_k||_2 = O_P(√NT), for all k = 1, ..., K.

For k, l ∈ {1, ..., K} we define

    W_{NT,kl} := (1/NT) Tr(M_λ X_k M_f X_l'),
    C^{(1)}_{NT,k} := (1/√NT) Tr(M_λ X_k M_f e'),
    C^{(2)}_{NT,k} := (1/√NT) [ Tr(e M_f e' M_λ X_k Γ†) + Tr(e' M_λ e M_f X_k' (Γ†)') + Tr(e' M_λ X_k M_f e' (Γ†)') ].

Let W_NT be the K×K matrix with elements W_{NT,kl}, and let C^{(1)}_NT and C^{(2)}_NT be the K-vectors with elements C^{(1)}_{NT,k} and C^{(2)}_{NT,k}, respectively. Remember that Γ = λf'. We therefore have M_λ = M_Γ and M_f = M_{Γ'}; we prefer to write M_λ and M_f, because we find it slightly more transparent. Notice also that Γ† = f(f'f)^{-1}(λ'λ)^{-1}λ', and (Γ')† = (Γ†)'. The results in the following are known from Moon and Weidner (2015b).[4]

Lemma 3. Let Assumption 2 hold. Consider N, T → ∞ with N/T → a > 0. Then,

    Q_NT(b, R_0) = Q_NT(β, R_0) − (2/√NT) (b − β)' (C^{(1)}_NT + C^{(2)}_NT) + (b − β)' W_NT (b − β) + Q^{rem}_NT(b),

[3] [s_r(Y − β·X)]² is the r-th largest eigenvalue of (Y − β·X)(Y − β·X)'.
[4] The result for Γ̂_LS(b, R_0) is not explicit in Moon and Weidner (2015b), but can easily be derived, because Γ̂_LS(b, R_0) = Γ − (b − β)·X + e − ê_{R_0}(b), and results for ê_{R_0}(b) are given there.
and

    Γ̂_LS(b, R_0) = Γ − Σ_{k=1}^K (b_k − β_k)(X_k − M_λ X_k M_f) + e − M_λ e M_f + Γ^{rem}_NT(b),

where the remainder terms Q^{rem}_NT(b) and Γ^{rem}_NT(b) satisfy, for any sequence c_NT → 0,

    sup_{b: ||b−β|| ≤ c_NT} |Q^{rem}_NT(b)| / (||b − β|| + 1/√NT)² = o_P(1),
    sup_{b: ||b−β|| ≤ c_NT} ||Γ^{rem}_NT(b)||_2 / [√NT (||b − β|| + 1/√N)²] = O_P(1).

Notice that Assumption 2(i) is slightly weaker than the strong factor assumption in Moon and Weidner (2015b), but sufficient to get all the results. We can give a more precise expansion of Γ̂_LS(b, R_0), but it is not required for the following. If β̂_LS(R_0) →_P β and W_NT →_P W > 0 and C^{(1)}_NT = O_P(1), then using the expansion of Q_NT(b, R_0) in Lemma 3 one can show that

    √NT [β̂_LS(R_0) − β] − W_NT^{-1} (C^{(1)}_NT + C^{(2)}_NT) →_P 0.

Using this one can derive the asymptotic distribution of β̂_LS(R_0).

Nuclear Norm Penalized Estimator. For ψ > 0 we define

    g_ψ : [0, ∞) → [0, ∞),    g_ψ(s) = min[ψ², s²] + 2ψ max[0, s − ψ].

Thus, for s ∈ [0, ψ] we have g_ψ(s) = s², and for s > ψ we have g_ψ(s) = 2ψs − ψ². Notice that g_ψ(s) is continuous and convex. We then have

    S_NT(b, ψ) := Σ_r g_ψ[s_r((Y − b·X)/√NT)].        (4.2)

The derivation of equation (4.2) is given in the appendix. Define

    R_NT(b, ψ) = Σ_r 1{s_r((Y − b·X)/√NT) > ψ},

the number of singular values of (Y − b·X)/√NT that are larger than ψ. Notice that R_NT(b, ψ) = rank[Γ̂(b, ψ)]. Comparing these formulas for S_NT(b, ψ) and Q_NT(b, R), we find that for small singular values, namely for s_r((Y − b·X)/√NT) ≤ ψ and r > R, respectively, the contribution of the singular value s_r((Y − b·X)/√NT) to both profile objective functions is given by the square of the singular value.
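Both profiled-out estimators of Γ have closed forms in terms of the SVD of Y − b·X: Γ̂_LS(b, R) keeps the top R singular values (principal components), while Γ̂(b, ψ) soft-thresholds all singular values. The sketch below (our own illustration, assuming the 1/NT and √NT scalings as reconstructed here) verifies numerically both the singular-value representation of Q_NT(b, R) and equation (4.2):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((25, 18))        # stands in for Y - b.X
N, T = A.shape
U, s, Vt = np.linalg.svd(A, full_matrices=False)
sigma = s / np.sqrt(N * T)               # s_r((Y - b.X)/sqrt(NT))

# Profile least squares objective: Q(b, R) = tail sum of squared singular values
R = 3
G_R = U[:, :R] @ np.diag(s[:R]) @ Vt[:R, :]       # best rank-R approximation
Q_profile = np.linalg.norm(A - G_R, "fro") ** 2 / (N * T)
assert abs(Q_profile - np.sum(sigma[R:] ** 2)) < 1e-8

# Profile penalized objective: S(b, psi) = sum_r g_psi(sigma_r), equation (4.2)
psi = 0.2
def g_psi(x):
    return np.minimum(psi ** 2, x ** 2) + 2 * psi * np.maximum(0.0, x - psi)

# Soft-thresholding the singular values solves the inner minimization over G
G_psi = U @ np.diag(np.maximum(s - psi * np.sqrt(N * T), 0.0)) @ Vt
S_profile = np.linalg.norm(A - G_psi, "fro") ** 2 / (N * T) \
    + 2 * psi * np.linalg.svd(G_psi, compute_uv=False).sum() / np.sqrt(N * T)
assert abs(S_profile - g_psi(sigma).sum()) < 1e-8
```

For singular values below the threshold, G_psi sets them to zero and the contribution to S_profile is the squared singular value, exactly as in the comparison with Q_NT(b, R) above.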
We therefore have[5]

    S_NT(b, ψ) = Q_NT(b, R_NT(b, ψ)) + 2ψ Σ_{r=1}^{R_NT(b,ψ)} s_r((Y − b·X)/√NT) − R_NT(b, ψ) ψ²
               = Q_NT(b, R_NT(b, ψ)) + (2ψ/√NT) ||Γ̂_LS(b, R_NT(b, ψ))||_* − R_NT(b, ψ) ψ².

In the last step we used that the non-zero singular values of Γ̂_LS(b, R) are equal to s_r(Y − b·X), r = 1, ..., R, which is also well known from principal components analysis. Asymptotic expansions of Q_NT(b, R_0) and Γ̂_LS(b, R_0) are given in Lemma 3. Using the expansion of Γ̂_LS(b, R_0) one can show that under the assumptions of Lemma 3, and also assuming that ||P_λ e P_f||_2 = O_P(1), we have, uniformly in any 1/√N-shrinking neighborhood of β, that

    ||Γ̂_LS(b, R_0)||_* = ||Γ||_* − √NT (b − β)' D_NT + O_P(1),

where D_NT is the K-vector with components

    D_{NT,k} := (1/√NT) Tr[(λ'λ)^{-1/2} λ' X_k f (f'f)^{-1/2}].

Notice that Tr[(λ'λ)^{-1/2} λ' X_k f (f'f)^{-1/2}] can also be written as Tr[X_k B], where B := f(f'f)^{-1/2}(λ'λ)^{-1/2}λ' satisfies ||B||_2 ≤ 1 and ||Γ||_* = Tr(ΓB). This explains why ||Γ + Δ||_* = ||Γ||_* + Tr(ΔB) + O(||Δ||_2²); see also Watson (1992). From these results it is possible to deduce the following theorem.

Theorem 4. Let Assumption 2 hold. Furthermore, assume that ||P_λ e P_f||_2 = O_P(1) and Tr(e X_k') = O_P((NT)^{1/2}) for all k. Consider N, T → ∞ with N/T → a > 0. Consider a sequence ψ_NT > 0 such that R_NT(β, ψ_NT) = R_0, wpa1. Then,

    S_NT(b, ψ_NT) = S_NT(β, ψ_NT) − 2 ψ_NT (b − β)' D_NT + (b − β)' W_NT (b − β) + S^{rem}_NT(b),

where the remainder term S^{rem}_NT(b) satisfies, for any sequence c_NT → 0,

    sup_{b: ||b−β|| ≤ c_NT and R_NT(b,ψ_NT) = R_0} |S^{rem}_NT(b)| / [ (log N)/N + ψ_NT/√N + ψ_NT ||b − β|| + ||b − β||² ] = o_P(1).

[5] One can also show that the non-zero singular values of Γ̂(b, ψ)/√NT are equal to s_r((Y − b·X)/√NT) − ψ, for r = 1, ..., R_NT(b, ψ). We therefore have S_NT(b, ψ) = Q_NT(b, R_NT(b, ψ)) + (2ψ/√NT) ||Γ̂(b, ψ)||_* + R_NT(b, ψ) ψ².
From the expansion in Theorem 4, we want to conclude that

    β̂(ψ_NT) − β = ψ_NT W_NT^{-1} D_NT + o_P(ψ_NT).        (4.3)

However, the bound on the remainder term of the expansion is only applicable if R_NT(b, ψ_NT) = R_0. Therefore, we can only conclude (4.3) if R_NT(b, ψ_NT) = R_0 holds within a sufficiently large neighborhood of b = β + ψ_NT W_NT^{-1} D_NT. This can be achieved by choosing ψ_NT appropriately, which we now discuss.

The strong factor Assumption 2(i) guarantees that for ψ_NT = o_P(1) we have, in any shrinking neighborhood of β, that s_{R_0}((Y − b·X)/√NT) > ψ_NT, implying that R_NT(b, ψ_NT) ≥ R_0. We furthermore have

    s_{R_0+1}((Y − b·X)/√NT) = s_1[ê_LS(b, R_0)/√NT]
        = s_1[M_λ (Y − b·X) M_f]/√NT + lower order terms
        ≤ ||e||_2/√NT + ||M_λ [(b − β)·X] M_f||_2/√NT + lower order terms.

The last bound is crude and needs to be improved if we want to consider ψ_NT = O_P(1/√N). However, if ψ_NT √N → ∞, then the term ||e||_2/√NT in the last expression can be neglected for the question whether s_{R_0+1}((Y − b·X)/√NT) < ψ_NT. Thus, if ψ_NT = o_P(1) and ψ_NT √N → ∞ and ||M_λ [(b − β)·X] M_f||_2 / (√NT ψ_NT) < 1 − ε, wpa1, for some ε > 0, then R_NT(b, ψ_NT) = R_0, wpa1. We thus obtain the following.

Theorem 5. Let Assumption 2 hold. Furthermore, assume that ||P_λ e P_f||_2 = O_P(1) and Tr(e X_k') = O_P((NT)^{1/2}) for all k. Consider N, T → ∞ with N/T → a > 0. Consider a sequence ψ_NT > 0 such that ψ_NT = o_P(1) and ψ_NT √N → ∞. Assume furthermore that W_NT →_P W > 0 and that there exists ε > 0 such that

    (1/√NT) || Σ_{k=1}^K (W_NT^{-1} D_NT)_k M_λ X_k M_f ||_2 < 1 − ε, wpa1.        (4.4)

Then we have

    ψ_NT^{-1} [β̂(ψ_NT) − β] − W_NT^{-1} D_NT →_P 0.

4.3. Bias Corrected Estimator. For the baseline case (linear model, balanced panel, known R_0) a good way to calculate an improved estimator is to start from β̂^{(0)} = β̂(ψ_NT) and then iterate the following:

(1) Given β̂^{(j)}, calculate λ̂^{(j)} and f̂^{(j)} as the R_0 principal components of Y − β̂^{(j)}·X.

(2) Given λ̂^{(j)} and f̂^{(j)}, calculate the K×K matrix Ŵ^{(j)}_{kl} = Tr(M_{λ̂^{(j)}} X_k M_{f̂^{(j)}} X_l') and the K-vector Â^{(j)}_k = Tr(M_{λ̂^{(j)}} X_k M_{f̂^{(j)}} Y'), and update the estimator for β as β̂^{(j+1)} = (Ŵ^{(j)})^{-1} Â^{(j)}.
Starting with ψ_NT = a √(log N / N), for some constant a, and under the above assumptions, as N and T grow at the same rate, this will give an estimator that is asymptotically equivalent to β̂_LS after very few iterations. Alternatively, we could bias correct β̂(ψ_NT) directly, e.g. by modifying the objective function:

    S^{BC}_NT(b, ψ_NT) = S_NT(b, ψ_NT) + 2 ψ_NT (b − β̂(ψ_NT))' D̂_NT,

where the estimator D̂_NT uses the first stage estimators for λ and f. While this method seems overly complicated in the baseline case, it might be a very convenient method for non-linear models or unbalanced panels.

5. Extension to General Model

Extensions in progress include models that allow heterogeneous regression coefficients, some nonlinear regression models, and unbalanced panels. We also work on estimators whose penalty threshold is small, which is closely related to an unknown dimension of the interactive fixed effects.
Appendix A. Derivation of Equation (4.2)

    S_NT(b, ψ) = min_{Γ ∈ R^{N×T}} [ (1/NT) ||Y − b·X − Γ||_F² + (2ψ/√NT) ||Γ||_* ]
      = min_{Γ ∈ R^{N×T}} Σ_r { [s_r((Y − b·X − Γ)/√NT)]² + 2ψ s_r(Γ)/√NT }
      = min_{γ ∈ R^{min(N,T)}, γ ≥ 0} min_{U: U'U = I} min_{V: V'V = I} Σ_r { [s_r((Y − b·X − U diag(γ) V')/√NT)]² + 2ψ γ_r/√NT }
      = min_{γ ≥ 0} Σ_r { [s_r((Y − b·X − U_{Y−b·X} diag(γ) V'_{Y−b·X})/√NT)]² + 2ψ γ_r/√NT }
      = min_{γ ≥ 0} Σ_r { [s_r((Y − b·X)/√NT) − γ_r/√NT]² + 2ψ γ_r/√NT }
      = Σ_r min_{α ≥ 0} { [s_r((Y − b·X)/√NT) − α]² + 2ψα }
      = Σ_r { min[ψ², s_r((Y − b·X)/√NT)²] + 2ψ max[0, s_r((Y − b·X)/√NT) − ψ] }.

Here, in the first step we rewrote the sum of squared residuals as the sum of squared singular values. In the second step we introduced the singular value decomposition Γ = U diag(γ) V', thus replacing the minimization over Γ by a minimization over the singular vector matrices U and V and the singular value vector γ. In the third step we used that for any given γ the minimizing U and V are equal to the corresponding singular vector matrices of Y − b·X, denoted U_{Y−b·X} and V_{Y−b·X}. (This is the only step in the derivation that is not straightforward; we might want to provide a lemma in the appendix for this. Notice also that the minimizing singular vectors might not be unique; in particular, they are not unique for γ_r = 0.) In the fourth step we used that the singular values of Y − b·X − U_{Y−b·X} diag(γ) V'_{Y−b·X} are equal to s_r(Y − b·X) − γ_r. (This assumes s_r(Y − b·X) − γ_r ≥ 0, which is always satisfied for the optimal γ.) The fifth step interchanges the sum and the minimization. The final step solves the minimization problem over α; the optimal α is α = max[0, s_r((Y − b·X)/√NT) − ψ].

References

Bai, J. (2009): "Panel data models with interactive fixed effects," Econometrica, 77(4).
Fernández-Val, I., and M. Weidner (2016): "Individual and time effects in nonlinear panel data models with large N, T," forthcoming in Journal of Econometrics.
Holtz-Eakin, D., W. Newey, and H. S. Rosen (1988): "Estimating Vector Autoregressions with Panel Data," Econometrica, 56(6).
Moon, H., and M. Weidner (2015a): "Dynamic Linear Panel Regression Models with Interactive Fixed Effects," forthcoming in Econometric Theory.
Moon, H. R., and M. Weidner (2015b): "Linear Regression for Panel With Unknown Number of Factors as Interactive Fixed Effects," Econometrica, 83(4).
Pesaran, M. H. (2006): "Estimation and Inference in Large Heterogeneous Panels with a Multifactor Error Structure," Econometrica, 74(4).
Watson, G. A. (1992): "Characterization of the subdifferential of some matrix norms," Linear Algebra and its Applications, 170.

Department of Economics and USC Dornsife INET, University of Southern California, Los Angeles, CA 90089, U.S.A.
Department of Economics, University College London, Gower Street, London WC1E 6BT, U.K., and CeMMaP.
More information11. Bootstrap Methods
11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods
More information11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.
C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a
More informationThe properties of L p -GMM estimators
The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion
More informationDelta Theorem in the Age of High Dimensions
Delta Theorem in the Age of High Dimensions Mehmet Caner Department of Economics Ohio State University December 15, 2016 Abstract We provide a new version of delta theorem, that takes into account of high
More informationChristopher Dougherty London School of Economics and Political Science
Introduction to Econometrics FIFTH EDITION Christopher Dougherty London School of Economics and Political Science OXFORD UNIVERSITY PRESS Contents INTRODU CTION 1 Why study econometrics? 1 Aim of this
More informationMachine learning, shrinkage estimation, and economic theory
Machine learning, shrinkage estimation, and economic theory Maximilian Kasy December 14, 2018 1 / 43 Introduction Recent years saw a boom of machine learning methods. Impressive advances in domains such
More informationsparse and low-rank tensor recovery Cubic-Sketching
Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru
More informationQuantile Regression for Panel/Longitudinal Data
Quantile Regression for Panel/Longitudinal Data Roger Koenker University of Illinois, Urbana-Champaign University of Minho 12-14 June 2017 y it 0 5 10 15 20 25 i = 1 i = 2 i = 3 0 2 4 6 8 Roger Koenker
More informationEstimation of Random Coefficients Logit Demand Models with Interactive Fixed Effects
Estimation of Random Coefficients Logit Demand Models with Interactive Fixed Effects Hyungsik Roger Moon Matthew Shum Martin Weidner First draft: October 2009; This draft: September 5, 204 Abstract We
More informationLeast Squares Estimation of a Panel Data Model with Multifactor Error Structure and Endogenous Covariates
Least Squares Estimation of a Panel Data Model with Multifactor Error Structure and Endogenous Covariates Matthew Harding and Carlos Lamarche January 12, 2011 Abstract We propose a method for estimating
More informationLecture 2 Part 1 Optimization
Lecture 2 Part 1 Optimization (January 16, 2015) Mu Zhu University of Waterloo Need for Optimization E(y x), P(y x) want to go after them first, model some examples last week then, estimate didn t discuss
More informationSection 9: Generalized method of moments
1 Section 9: Generalized method of moments In this section, we revisit unbiased estimating functions to study a more general framework for estimating parameters. Let X n =(X 1,...,X n ), where the X i
More informationGMM ESTIMATION OF SHORT DYNAMIC PANEL DATA MODELS WITH INTERACTIVE FIXED EFFECTS
J. Japan Statist. Soc. Vol. 42 No. 2 2012 109 123 GMM ESTIMATION OF SHORT DYNAMIC PANEL DATA MODELS WITH INTERACTIVE FIXED EFFECTS Kazuhiko Hayakawa* In this paper, we propose GMM estimators for short
More informationCS540 Machine learning Lecture 5
CS540 Machine learning Lecture 5 1 Last time Basis functions for linear regression Normal equations QR SVD - briefly 2 This time Geometry of least squares (again) SVD more slowly LMS Ridge regression 3
More informationAsymptotic Distributions of Instrumental Variables Statistics with Many Instruments
CHAPTER 6 Asymptotic Distributions of Instrumental Variables Statistics with Many Instruments James H. Stock and Motohiro Yogo ABSTRACT This paper extends Staiger and Stock s (1997) weak instrument asymptotic
More informationTotal Least Squares Approach in Regression Methods
WDS'08 Proceedings of Contributed Papers, Part I, 88 93, 2008. ISBN 978-80-7378-065-4 MATFYZPRESS Total Least Squares Approach in Regression Methods M. Pešta Charles University, Faculty of Mathematics
More informationTIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.
TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION
More informationDeep Linear Networks with Arbitrary Loss: All Local Minima Are Global
homas Laurent * 1 James H. von Brecht * 2 Abstract We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima
More informationTesting Overidentifying Restrictions with Many Instruments and Heteroskedasticity
Testing Overidentifying Restrictions with Many Instruments and Heteroskedasticity John C. Chao, Department of Economics, University of Maryland, chao@econ.umd.edu. Jerry A. Hausman, Department of Economics,
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationQuantile methods. Class Notes Manuel Arellano December 1, Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be
Quantile methods Class Notes Manuel Arellano December 1, 2009 1 Unconditional quantiles Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be Q τ (Y ) q τ F 1 (τ) =inf{r : F
More informationPreface to Second Edition... vii. Preface to First Edition...
Contents Preface to Second Edition..................................... vii Preface to First Edition....................................... ix Part I Linear Algebra 1 Basic Vector/Matrix Structure and
More informationMAXIMUM LIKELIHOOD IN GENERALIZED FIXED SCORE FACTOR ANALYSIS 1. INTRODUCTION
MAXIMUM LIKELIHOOD IN GENERALIZED FIXED SCORE FACTOR ANALYSIS JAN DE LEEUW ABSTRACT. We study the weighted least squares fixed rank approximation problem in which the weight matrices depend on unknown
More informationEconometric Analysis of Cross Section and Panel Data
Econometric Analysis of Cross Section and Panel Data Jeffrey M. Wooldridge / The MIT Press Cambridge, Massachusetts London, England Contents Preface Acknowledgments xvii xxiii I INTRODUCTION AND BACKGROUND
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationHigh Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data
High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data Song Xi CHEN Guanghua School of Management and Center for Statistical Science, Peking University Department
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationLessons in Estimation Theory for Signal Processing, Communications, and Control
Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL
More informationSemiparametric Identification in Panel Data Discrete Response Models
Semiparametric Identification in Panel Data Discrete Response Models Eleni Aristodemou UCL March 8, 2016 Please click here for the latest version. Abstract This paper studies partial identification in
More informationOptimization Problems
Optimization Problems The goal in an optimization problem is to find the point at which the minimum (or maximum) of a real, scalar function f occurs and, usually, to find the value of the function at that
More informationBayesian Interpretations of Heteroskedastic Consistent Covariance Estimators Using the Informed Bayesian Bootstrap
Bayesian Interpretations of Heteroskedastic Consistent Covariance Estimators Using the Informed Bayesian Bootstrap Dale J. Poirier University of California, Irvine September 1, 2008 Abstract This paper
More informationProblem Set #6: OLS. Economics 835: Econometrics. Fall 2012
Problem Set #6: OLS Economics 835: Econometrics Fall 202 A preliminary result Suppose we have a random sample of size n on the scalar random variables (x, y) with finite means, variances, and covariance.
More informationHigh Dimensional Covariance and Precision Matrix Estimation
High Dimensional Covariance and Precision Matrix Estimation Wei Wang Washington University in St. Louis Thursday 23 rd February, 2017 Wei Wang (Washington University in St. Louis) High Dimensional Covariance
More informationSparse orthogonal factor analysis
Sparse orthogonal factor analysis Kohei Adachi and Nickolay T. Trendafilov Abstract A sparse orthogonal factor analysis procedure is proposed for estimating the optimal solution with sparse loadings. In
More informationNonlinear equations. Norms for R n. Convergence orders for iterative methods
Nonlinear equations Norms for R n Assume that X is a vector space. A norm is a mapping X R with x such that for all x, y X, α R x = = x = αx = α x x + y x + y We define the following norms on the vector
More informationIntroduction to Econometrics
Introduction to Econometrics T H I R D E D I T I O N Global Edition James H. Stock Harvard University Mark W. Watson Princeton University Boston Columbus Indianapolis New York San Francisco Upper Saddle
More informationEstimation of random coefficients logit demand models with interactive fixed effects
Estimation of random coefficients logit demand models with interactive fixed effects Hyungsik Roger Moon Matthew Shum Martin Weidner The Institute for Fiscal Studies Department of Economics, UCL cemmap
More informationProximal Gradient Descent and Acceleration. Ryan Tibshirani Convex Optimization /36-725
Proximal Gradient Descent and Acceleration Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: subgradient method Consider the problem min f(x) with f convex, and dom(f) = R n. Subgradient method:
More informationALGORITHM CONSTRUCTION BY DECOMPOSITION 1. INTRODUCTION. The following theorem is so simple it s almost embarassing. Nevertheless
ALGORITHM CONSTRUCTION BY DECOMPOSITION JAN DE LEEUW ABSTRACT. We discuss and illustrate a general technique, related to augmentation, in which a complicated optimization problem is replaced by a simpler
More informationA Course in Applied Econometrics Lecture 18: Missing Data. Jeff Wooldridge IRP Lectures, UW Madison, August Linear model with IVs: y i x i u i,
A Course in Applied Econometrics Lecture 18: Missing Data Jeff Wooldridge IRP Lectures, UW Madison, August 2008 1. When Can Missing Data be Ignored? 2. Inverse Probability Weighting 3. Imputation 4. Heckman-Type
More informationAsymptotic distributions of the quadratic GMM estimator in linear dynamic panel data models
Asymptotic distributions of the quadratic GMM estimator in linear dynamic panel data models By Tue Gørgens Chirok Han Sen Xue ANU Working Papers in Economics and Econometrics # 635 May 2016 JEL: C230 ISBN:
More informationAn Introduction to Linear Matrix Inequalities. Raktim Bhattacharya Aerospace Engineering, Texas A&M University
An Introduction to Linear Matrix Inequalities Raktim Bhattacharya Aerospace Engineering, Texas A&M University Linear Matrix Inequalities What are they? Inequalities involving matrix variables Matrix variables
More information2.3. Clustering or vector quantization 57
Multivariate Statistics non-negative matrix factorisation and sparse dictionary learning The PCA decomposition is by construction optimal solution to argmin A R n q,h R q p X AH 2 2 under constraint :
More informationInference For High Dimensional M-estimates. Fixed Design Results
: Fixed Design Results Lihua Lei Advisors: Peter J. Bickel, Michael I. Jordan joint work with Peter J. Bickel and Noureddine El Karoui Dec. 8, 2016 1/57 Table of Contents 1 Background 2 Main Results and
More informationFlexible Estimation of Treatment Effect Parameters
Flexible Estimation of Treatment Effect Parameters Thomas MaCurdy a and Xiaohong Chen b and Han Hong c Introduction Many empirical studies of program evaluations are complicated by the presence of both
More informationVariable Selection in Predictive Regressions
Variable Selection in Predictive Regressions Alessandro Stringhi Advanced Financial Econometrics III Winter/Spring 2018 Overview This chapter considers linear models for explaining a scalar variable when
More informationPanel Threshold Regression Models with Endogenous Threshold Variables
Panel Threshold Regression Models with Endogenous Threshold Variables Chien-Ho Wang National Taipei University Eric S. Lin National Tsing Hua University This Version: June 29, 2010 Abstract This paper
More informationAsymptotic Properties of Empirical Likelihood Estimator in Dynamic Panel Data Models
Asymptotic Properties of Empirical Likelihood Estimator in Dynamic Panel Data Models Günce Eryürük North Carolina State University February, 2009 Department of Economics, Box 80, North Carolina State University,
More informationCURRENT STATUS LINEAR REGRESSION. By Piet Groeneboom and Kim Hendrickx Delft University of Technology and Hasselt University
CURRENT STATUS LINEAR REGRESSION By Piet Groeneboom and Kim Hendrickx Delft University of Technology and Hasselt University We construct n-consistent and asymptotically normal estimates for the finite
More informationSolving Corrupted Quadratic Equations, Provably
Solving Corrupted Quadratic Equations, Provably Yuejie Chi London Workshop on Sparse Signal Processing September 206 Acknowledgement Joint work with Yuanxin Li (OSU), Huishuai Zhuang (Syracuse) and Yingbin
More informationReview of Some Concepts from Linear Algebra: Part 2
Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set
More informationQuantile Processes for Semi and Nonparametric Regression
Quantile Processes for Semi and Nonparametric Regression Shih-Kang Chao Department of Statistics Purdue University IMS-APRM 2016 A joint work with Stanislav Volgushev and Guang Cheng Quantile Response
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More information