Proceedings of the 2014 Winter Simulation Conference A. Tolk, S. D. Diallo, I. O. Ryzhov, L. Yilmaz, S. Buckley, and J. A. Miller, eds.
ROBUST RARE-EVENT PERFORMANCE ANALYSIS WITH NATURAL NON-CONVEX CONSTRAINTS

Jose Blanchet, IEOR Department, Columbia University, 340 W. Mudd Building, 500 W 120th Street, New York, NY 10027, USA

Christopher Dolan, Department of Statistics, Columbia University, 02 School of Social Work, 1255 Amsterdam Avenue, New York, NY 10027, USA

Henry Lam, Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston, MA 02215, USA

ABSTRACT

We consider a common type of robust performance analysis that is formulated as maximizing an expectation among all probability models that are within some tolerance of a baseline model in the Kullback-Leibler sense. The solution of such a concave program is tractable and provides an upper bound which is robust to model misspecification. However, this robust formulation fails to preserve some natural stochastic structures, such as i.i.d. model assumptions, and as a consequence the upper bounds might be pessimistic. Unfortunately, the introduction of i.i.d. assumptions as constraints renders the underlying optimization problem very challenging to solve. We illustrate these phenomena in the rare-event setting, and propose a large-deviations-based approach for solving this challenging problem in an asymptotic sense for a natural class of random walk problems.

1 INTRODUCTION

Robust performance analysis is concerned with the problem of evaluating the worst-case performance measure of interest (typically described as an expectation) among all plausible probability models, such as those within a certain tolerance of a baseline model which is believed to be reflective of reality. Taken literally, this problem formulation can be challenging because it gives rise to an infinite-dimensional optimization problem (note that we mentioned all models within a certain tolerance).
When the tolerance region is described in terms of the Kullback-Leibler divergence (and other related notions), this apparently daunting optimization problem is often tractable, and this tractability has been exploited in a range of literatures in recent years, for example in control theory (Iyengar (2005), Nilim and El Ghaoui (2005), Petersen et al. (2000)), distributionally robust optimization (Ben-Tal et al. 2013), finance (Glasserman and Xu 2014), economics (Hansen and Sargent 2008), and queueing (Jain et al. 2010). Tolerance regions based on the Kullback-Leibler divergence, however, fail to incorporate information that is often quite natural to assume in common stochastic settings, and that should be added in terms of
constraints in the underlying robust performance analysis formulation. One such natural and important constraint is the i.i.d. property, often arising in models involving random walk input. Failing to enforce the i.i.d. property even in simple situations involving random walk models can have important consequences for the accurate assessment of worst-case performance measures of interest. Unfortunately, however, the i.i.d. property as an additional constraint on top of the Kullback-Leibler-imposed tolerance gives rise to an optimization problem that is no longer easy to handle.

The main contribution of this paper is to show that, in the context of performance analysis for performance measures associated to large deviations events, such a robust formulation yields a problem for which asymptotically optimal solutions can be constructed. We illustrate our ideas in the setting of random walks.

The rest of the paper is organized as follows. In Section 2 we provide a precise mathematical formulation of the robust performance analysis problem with i.i.d. constraints and explain why the problem is very challenging. In Section 3 we provide a strategy that allows us to solve this challenging problem asymptotically in a large deviations regime. Finally, we provide numerical examples which illustrate the performance of our proposed solution and the impact of adding i.i.d. constraints in the robust formulation.

2 PROBLEM FORMULATION

Let $\{X_k : k \ge 0\}$ be a sequence of zero-mean i.i.d. random variables. Define $S_0 = 0$ and put $S_n = X_1 + \cdots + X_n$. Let us use $F(\cdot)$ to denote the CDF (cumulative distribution function) of the $X_i$, that is, $P(X_i \le x) = F(x)$, and let $P_F(\cdot)$ denote the product measure generated by $F(\cdot)$. We use $P_F^n(\cdot)$ to denote the projection of $P_F(\cdot)$ onto its first $n$ coordinates; simply put, $P_F^n$ describes the joint distribution of the random variables $(X_1, \ldots, X_n)$. The expectation operators associated to $P_F(\cdot)$ and $P_F^n(\cdot)$ are denoted by $E_F(\cdot)$ and $E_F^n(\cdot)$, respectively.
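To fix ideas, the quantities just defined can be computed exactly in a toy case (our own illustration, not part of the paper's development): i.i.d. symmetric $\pm 1$ steps, with the event that the sample mean $S_n/n$ exceeds $a = 1/2$.

```python
import math

def tail_prob(n, a):
    # P(S_n / n >= a) for S_n a sum of n i.i.d. symmetric +/-1 steps, computed
    # exactly via the representation S_n = 2*B - n with B ~ Binomial(n, 1/2).
    k_min = math.ceil(n * (1.0 + a) / 2.0)  # need B >= n(1+a)/2
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2.0 ** n

for n in (10, 20, 40, 80):
    p = tail_prob(n, 0.5)
    print(n, p)  # the probability decays exponentially in n
```

The exponential decay of these exact tail probabilities is the rare-event phenomenon quantified below.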
We define $\psi_F(\theta) = \log E_F \exp(\theta X_1)$ and assume that $\psi_F(\theta) < \infty$ for $\theta$ in a neighborhood of the origin. Now, define $A_n = \{S_n/n \in A\}$ for a closed set $A$ which does not contain the origin. We are concerned with the problem of estimating $P_F(A_n)$. Observe that $P_F(A_n) \to 0$ as $n \to \infty$ because of the law of large numbers. Moreover, because $\psi_F(\cdot)$ is finite in a neighborhood of the origin, we have that $P_F(A_n) \le \exp(-\delta n)$ for some $\delta > 0$ for all $n$ sufficiently large. In contrast to standard rare-event estimation problems, however, here we assume that $F(\cdot)$ is unknown. Nevertheless, based on some evidence (for example, data or expert knowledge), let us assume that we have obtained a CDF $G(\cdot)$ which approximates $F(\cdot)$ in a suitable sense, for example in the Kullback-Leibler divergence sense, which we shall review momentarily. Let us write $P_G(\cdot)$ to denote the product measure associated to $G(\cdot)$ and $E_G(\cdot)$ for the expectation operator corresponding to $P_G(\cdot)$. Similarly as before, $P_G^n(\cdot)$ is the projection of $P_G(\cdot)$ onto its first $n$ coordinates, and we use $E_G^n(\cdot)$ to denote the expectation operator associated to $P_G^n(\cdot)$. We assume that the likelihood ratio $dP_F^n/dP_G^n$ is well defined, and therefore the Kullback-Leibler divergence of $P_F^n$ with respect to $P_G^n$ is defined via
\[
R(P_F^n \,\|\, P_G^n) = E_F^n\!\left[\log \frac{dP_F^n}{dP_G^n}\right] = n\, E_F\!\left[\log \frac{dF}{dG}(X_1)\right] = n \int \log \frac{dF}{dG}(x)\, dF(x).
\]
If $dP_F^n/dP_G^n$ fails to exist (i.e., $P_F^n$ is not absolutely continuous with respect to $P_G^n$), then the Kullback-Leibler divergence is defined as infinity. It is elementary to verify that $R(\cdot \,\|\, P_G^n)$ is convex (actually $R(\cdot \,\|\, \cdot)$ is convex in both of its arguments (Dupuis and Ellis 1997)). The associated robust performance analysis problem with Kullback-Leibler constraint consists in solving
\[
\max_{Q^n} \{ Q^n(A_n) : R(Q^n \,\|\, P_G^n) \le \eta_n \}, \tag{1}
\]
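The divergence above tensorizes over product measures: $R(P_F^n \,\|\, P_G^n) = n \cdot \mathrm{KL}(F \,\|\, G)$. A minimal Python sketch for discrete distributions (the supports, weights, and $n$ below are hypothetical illustrations):

```python
import math

def kl(f, g):
    # Kullback-Leibler divergence KL(F || G) for discrete distributions given as
    # probability vectors on a common support; infinite if F is not absolutely
    # continuous with respect to G.
    d = 0.0
    for pf, pg in zip(f, g):
        if pf > 0.0:
            if pg == 0.0:
                return math.inf
            d += pf * math.log(pf / pg)
    return d

# Relative entropy of product measures: R(P_F^n || P_G^n) = n * KL(F || G).
f, g, n = [0.5, 0.5], [0.6, 0.4], 100
print(n * kl(f, g))
```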
where $\eta_n$ should be chosen to satisfy $\eta_n \approx n \int \log \frac{dF}{dG}(x)\, dF(x)$. One might select $\eta_n$ by estimating $\int \log(dF/dG)(x)\, dF(x)$ using available data. The optimization problem (1) is a concave mathematical program; the objective function to maximize is linear (in particular concave) in the variable $Q^n$ and, as mentioned earlier, the constraint is convex. Moreover, as we shall see in the body of the paper, the optimal solution to (1), $Q_n^*(\cdot)$, can be characterized as a suitable mixture between $P_G^n(\cdot \mid A_n)$ and $P_G^n(\cdot \mid A_n^c)$. As the next result shows, it turns out that $Q_n^*(A_n)$ might differ substantially from $P_F^n(A_n)$ even if $F$ is close to $G$ in the Kullback-Leibler sense, that is, even in cases in which $\eta_n = o(n)$ as $n \to \infty$. In more detail, typically we will have $P_F^n(A_n) = \exp(-n \bar I_F(A) + o(n))$ for some positive constant $\bar I_F(A)$, whereas the next result indicates that typically $Q_n^*(A_n) \ge \delta \eta_n / n$ for some $\delta > 0$ and large enough $n$. So, for example, if one builds an approximation $G$ to $F$ from data, one would need an exponentially large sample size (in $n$) in order to obtain an accurate estimate of the probability of interest using only the relative entropy constraint, without recognizing that the data might have come from an i.i.d. model.

Theorem 1. Suppose that $\eta_n = o(n)$ and that $\eta_n > \delta > 0$ for some $\delta > 0$ uniformly over $n$. Assume also that $P_G(A_n) \in (\exp(-n/\delta), \exp(-\delta n))$ for some $\delta > 0$ and all $n$ sufficiently large. Then the optimal value of (1), $Q_n^*(A_n)$, satisfies
\[
Q_n^*(A_n) = \frac{\eta_n}{-\log P_G^n(A_n)}\, (1 + o(1))
\]
as $n \to \infty$.

One of the main reasons for such a disparity, as we shall establish in the next section, is that the feasible region (i.e., $\{Q^n : R(Q^n \,\|\, P_G^n) \le \eta_n\}$) fails to recognize that we are interested only in models for which the i.i.d. property of the $X_i$'s is preserved. So, introducing the i.i.d. constraint transforms problem (1) into the alternative form
\[
\max_H \left\{ P_H^n(A_n) = \int \cdots \int I_A\!\left(\frac{x_1 + \cdots + x_n}{n}\right) dH(x_1) \cdots dH(x_n) \;:\; n \int \log \frac{dH}{dG}(x)\, dH(x) \le \eta_n \right\}.
\]
(2)

Observe that the previous problem is not a concave program because the objective function to maximize is no longer concave. Unfortunately, in general (2) is very challenging to solve. In the next section we explain how to use large deviations theory to solve problem (2) in an asymptotic sense. We finish this section with a proof of our first theorem.

2.1 Proof of Theorem 1

Proof. To solve (1), we rewrite it in terms of the likelihood ratio between $Q^n$ and $P_G^n$, namely $L = dQ^n/dP_G^n$, as
\[
\max E_G^n[L; A_n] \quad \text{subject to} \quad E_G^n[L \log L] \le \eta_n, \tag{3}
\]
where the maximization is over $L \in \mathcal{L} = \{L \ge 0 : E_G^n L = 1\}$, and consider the Lagrangian relaxation
\[
\max_{L \in \mathcal{L}} E_G^n[L; A_n] - \alpha \left( E_G^n[L \log L] - \eta_n \right). \tag{4}
\]
Our goal is to find $\alpha \ge 0$ such that there is an $L^*$ that solves (4) and moreover $E_G^n[L^* \log L^*] = \eta_n$. Then this $L^*$ will be optimal for (1) (cf. p. 220, Theorem 1 in Luenberger (1997)). Note that when $\alpha = 0$, the optimal value of the maximization in (4) is 1, by choosing $L$ to concentrate only on the event $A_n$. So the
case $\alpha = 0$ is easily ruled out. Now, given any fixed $\alpha > 0$, it can be verified by a convexity argument that the solution to the maximization (4) is given by
\[
L^* = \frac{e^{I(A_n)/\alpha}}{E_G^n\left[e^{I(A_n)/\alpha}\right]}, \tag{5}
\]
where $I(A_n)$ denotes the indicator function of the set $A_n$; see, for example, Hansen and Sargent (2008). Now we write $\alpha_n$ for $\alpha$ to highlight the role of $n$, and introduce $\beta_n = 1/\alpha_n$ for convenience. We also write $p_n = P_G(A_n)$ and put $q_n = P_G(\bar A_n) = 1 - p_n$. Then (5) can be written as
\[
L^* = \begin{cases} \dfrac{e^{\beta_n}}{p_n e^{\beta_n} + q_n} & \text{on } A_n, \\[2mm] \dfrac{1}{p_n e^{\beta_n} + q_n} & \text{on } A_n^c. \end{cases} \tag{6}
\]
We now proceed to find $\alpha_n > 0$, or $\beta_n = 1/\alpha_n$, such that
\[
E_G^n[L^* \log L^*] = \eta_n. \tag{7}
\]
Using the form of (6), (7) becomes
\[
\frac{\beta_n p_n e^{\beta_n}}{p_n e^{\beta_n} + q_n} - \log\left(p_n e^{\beta_n} + q_n\right) = \eta_n. \tag{8}
\]
Since $\eta_n > \delta > 0$ and $p_n \to 0$ as $n \to \infty$, any $\beta_n$ satisfying (8) must also satisfy $\beta_n \to \infty$ as $n \to \infty$: otherwise the left-hand side converges to zero on some subsequence while the right-hand side stays bounded away from zero. Now, we claim that $\limsup_n p_n e^{\beta_n} = 0$. Let us proceed assuming this claim for the moment and come back to this issue at the end of the proof. Then, by a Taylor series expansion applied to the left-hand side of (8), we have that
\[
\beta_n p_n e^{\beta_n} (1 + o(1)) = \eta_n, \tag{9}
\]
which gives
\[
\log \beta_n + \log p_n + \beta_n + o(1) = \log \eta_n.
\]
Heuristically, we must have
\[
\beta_n = \log \eta_n - \log p_n - \log \beta_n + o(1) = (\log \eta_n - \log p_n)(1 + o(1)). \tag{10}
\]
To verify (10) rigorously, note that when we choose $\beta_n = \log(\eta_n/p_n)$, the left-hand side of (9) becomes $\eta_n \log(\eta_n/p_n)(1 + o(1))$, which is much larger than $\eta_n$ for $n$ large enough. On the other hand, setting $\beta_n = 0$ makes the left-hand side $o(1)$. Therefore, by continuity there must be a solution to (9) in the range $[0, \log(\eta_n/p_n)]$. Consequently, we have that
\[
\beta_n = \log \eta_n - \log p_n - \log \beta_n + o(1) = \log \eta_n - \log p_n + r_n, \tag{11}
\]
where the remainder term $r_n$ satisfies $|r_n| \le \log(\log(\eta_n/p_n)) + o(1)$, or equivalently $r_n = o(\log(\eta_n/p_n))$, and hence (10).
Iterating the first equality in (10) using (11), we get further that
\[
\beta_n = \log \eta_n - \log p_n - \log\left(\log \eta_n - \log p_n + r_n\right) + o(1) = \log \eta_n - \log p_n - \log\left(\log \eta_n - \log p_n\right) + o(1).
\]
Finally, the optimal value is
\[
E_G^n[L^*; A_n] = \frac{p_n e^{\beta_n}}{p_n e^{\beta_n} + q_n} \approx p_n e^{\beta_n} = \frac{\eta_n}{\log \eta_n - \log p_n}(1 + o(1)) \approx \frac{\eta_n}{-\log p_n}.
\]
Now we must verify that indeed $\limsup_n p_n e^{\beta_n} = 0$. Suppose that $\limsup_n p_n e^{\beta_n} > 0$; this implies that $\beta_{n_k} \ge \delta n_k$ along a subsequence $n_k$ for some $\delta > 0$. But then we must have from (8) that $\eta_{n_k} \ge \delta' n_k$ for some $\delta' > 0$, contradicting our assumption that $\eta_n = o(n)$. We therefore conclude the statement of our theorem.
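Equation (8) is a one-dimensional root-finding problem in $\beta_n$, and the argument above locates a root in $[0, \log(\eta_n/p_n)]$. A small Python sketch, with hypothetical magnitudes for $p_n$ and $\eta_n$, solves (8) by bisection and compares the resulting optimal value of (1) with the asymptotic $\eta_n/(-\log p_n)$ of Theorem 1; at a fixed $n$ the two agree only roughly, as the $\log \beta_n$ correction terms suggest.

```python
import math

# Hypothetical magnitudes: baseline rare-event probability p_n and budget eta_n.
p_n, eta_n = 1e-8, 2.0
q_n = 1.0 - p_n

def lhs(beta):
    # Left-hand side of (8): E_G^n[L* log L*] as a function of beta_n.
    d = p_n * math.exp(beta) + q_n
    return beta * p_n * math.exp(beta) / d - math.log(d)

# lhs(0) = 0 < eta_n, while at beta = log(eta_n/p_n) the left-hand side exceeds
# eta_n, so a root of (8) lies in [0, log(eta_n/p_n)]; locate it by bisection.
lo, hi = 0.0, math.log(eta_n / p_n)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lhs(mid) < eta_n:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

worst = p_n * math.exp(beta) / (p_n * math.exp(beta) + q_n)  # optimal value of (1)
approx = eta_n / (-math.log(p_n))                            # Theorem 1 asymptotic
print(worst, approx)
```

Both numbers are orders of magnitude larger than $p_n$ itself, illustrating how pessimistic the unconstrained KL bound can be.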
3 OUR MAIN RESULT

3.1 A Large Deviations Rate Characterization

In order to prove our main result we shall impose additional technical conditions. We assume that $\psi_G(\cdot)$ is steep in the sense that for all $a \in (-\infty, \infty)$ there is $\theta_a$ such that $\psi_G'(\theta_a) = a$. Under this assumption we have that $P_G^n(A_n) = \exp(-n \bar I_G(A) + o(n))$, where
\[
\bar I_G(A) = \inf_{x \in A} I_G(x) = \inf_{x \in A} \sup_{\theta} \left( \theta x - \psi_G(\theta) \right).
\]
Similarly, for each $H$ we define $\psi_H(\theta) := \log E_H \exp(\theta X)$ and $I_H(x) = \sup_{\theta} (\theta x - \psi_H(\theta))$, and, as in the definition of $\bar I_G(A)$, we write $\bar I_H(A) = \inf_{x \in A} I_H(x)$. Now we are ready to state our main result.

Theorem 2. Let $\mathrm{int}(A)$ denote the interior of $A$. Suppose that $\bar I_H(A) = \bar I_H(\mathrm{int}(A)) \in (0, \infty)$ for any $P_H \in \mathcal{P}$, for some feasible set $\mathcal{P}$. We have
\[
\lim_{n \to \infty} -\frac{1}{n} \log \max_{P_H \in \mathcal{P},\, X_i \,\text{i.i.d.}\, H} P_H(A_n) = \min_{P_H \in \mathcal{P}} \bar I_H(A).
\]

Proof. The proof follows from a large deviations argument. First,
\[
\liminf_{n \to \infty} \frac{1}{n} \log \max_{P_H \in \mathcal{P}} P_H(A_n) \ge \max_{P_H \in \mathcal{P}} \liminf_{n \to \infty} \frac{1}{n} \log P_H(A_n) \ge -\min_{P_H \in \mathcal{P}} \bar I_H(\mathrm{int}(A)) = -\min_{P_H \in \mathcal{P}} \bar I_H(A).
\]
Next, since $A$ is closed, using Chebycheff's inequality (as in the proof of Cramér's theorem; see p. 27, Remark (c) in Dembo and Zeitouni (1998)) gives
\[
P_H(A_n) \le 2 \exp(-n \bar I_H(A)),
\]
and hence
\[
\limsup_{n \to \infty} \frac{1}{n} \log \max_{P_H \in \mathcal{P}} P_H(A_n) \le -\min_{P_H \in \mathcal{P}} \bar I_H(A).
\]
Combining the upper and lower bounds we get our conclusion.

The significance of the previous result is that the optimization problem that must be solved can now be cast as a convex program and thus is more tractable. In order to have a concrete class of examples, let us focus on the case in which $A = [a, \infty)$. The key property of this specific selection is that we can identify a specific element $a \in A$ such that $I_H(a) = \bar I_H(A)$. We consider the general problem of finding the minimum rate function over a class of distributions, namely
\[
\inf_{P_H \in \mathcal{P}} I_H(a) = \inf_{P_H \in \mathcal{P}} \sup_{\theta} \left\{ \theta a - \psi_H(\theta) \right\}. \tag{12}
\]
Lemma 1. If $\mathcal{P}$ is a convex set, then the optimization program (12) is convex.

Proof.
The inner objective function in (12) is concave as a function of $\theta$, since $\psi_H(\cdot)$ is convex. For the outer objective function, note that $\psi_H(\theta)$ is concave in $P_H$, and so $\theta a - \psi_H(\theta)$ is convex in $P_H$. As the maximum over a set of convex functions (indexed by $\theta$), the outer objective function $\sup_{\theta} \{\theta a - \psi_H(\theta)\}$ is also convex as a function of $P_H$. Therefore both the inner and outer optimizations in (12) are convex programs.
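Since the inner problem $\sup_{\theta}\{\theta a - \psi_H(\theta)\}$ is a concave one-dimensional maximization, it can be solved by, e.g., ternary search. A Python sketch (the $\pm 1$ coin example, search bounds, and iteration count are our own illustrative choices, checked against the well-known closed form of the rate function of a fair coin):

```python
import math

def psi(theta, xs, ps):
    # Log moment generating function psi_H(theta) = log E_H exp(theta X).
    return math.log(sum(p * math.exp(theta * x) for p, x in zip(ps, xs)))

def rate(a, xs, ps, lo=-20.0, hi=20.0):
    # I_H(a) = sup_theta {theta*a - psi_H(theta)} by ternary search;
    # the objective is concave in theta.
    f = lambda th: th * a - psi(th, xs, ps)
    for _ in range(300):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return f(0.5 * (lo + hi))

# Check against the closed form for a fair +/-1 coin:
# I(a) = ((1+a)/2) log(1+a) + ((1-a)/2) log(1-a).
a = 0.5
closed = ((1 + a) / 2) * math.log(1 + a) + ((1 - a) / 2) * math.log(1 - a)
print(rate(a, [-1.0, 1.0], [0.5, 0.5]), closed)
```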
3.2 Numerical Procedure

To avoid the issue of optimization over infinite-dimensional variables, we concentrate on the case of discrete $P_H$. For convenience, we write $\mathbf{p} = (p_1, \ldots, p_m)$ for the weights over the support points $\{x_1, \ldots, x_m\}$. Moreover, we write
\[
Z(\mathbf{p}) = \max_{\theta} \left\{ \theta a - \log \sum_{i=1}^m p_i e^{\theta x_i} \right\} \tag{13}
\]
for the outer objective in (12). Note that for a discrete distribution $\mathbf{p}$, the optimal $\theta$ in (13) can be found simply by solving
\[
\frac{\sum_{i=1}^m p_i x_i e^{\theta x_i}}{\sum_{i=1}^m p_i e^{\theta x_i}} = a.
\]
We now focus on $\min_{\mathbf{p} \in \mathcal{P}} Z(\mathbf{p})$. Suppose that $\mathcal{P}$ is a feasible region dictated by a KL-divergence constraint, i.e., $\mathcal{P} = \{\mathbf{p} \ge 0 : \sum_{i=1}^m p_i \log(p_i/p_i^0) \le \eta,\ \sum_{i=1}^m p_i = 1\}$ for some baseline distribution $\mathbf{p}^0 = (p_1^0, \ldots, p_m^0)$. A particularly convenient procedure to approximate the optimal solution is the conditional gradient (or Frank-Wolfe) method (Frank and Wolfe 1956). It relies on the stepwise optimization, given the current solution $\mathbf{p}^k = (p_1^k, \ldots, p_m^k)$ at step $k$,
\[
\min_{\mathbf{p} \in \mathcal{P}} \nabla Z(\mathbf{p}^k)'(\mathbf{p} - \mathbf{p}^k). \tag{14}
\]
This subroutine can be easily solved. In fact, we have
\[
\nabla Z(\mathbf{p}^k)_i = \frac{d}{d p_i}\left( \theta_k a - \log \sum_{j=1}^m p_j e^{\theta_k x_j} \right)\Bigg|_{\mathbf{p} = \mathbf{p}^k} = -\frac{e^{\theta_k x_i}}{\sum_{j=1}^m p_j^k e^{\theta_k x_j}}
\]
by simple arithmetic or by use of the envelope theorem, where $\theta_k$ is the solution to
\[
\frac{\sum_{i=1}^m p_i^k x_i e^{\theta x_i}}{\sum_{i=1}^m p_i^k e^{\theta x_i}} = a.
\]
For convenience, we let
\[
\xi_i(\mathbf{p}^k) = -\frac{e^{\theta_k x_i}}{\sum_{j=1}^m p_j^k e^{\theta_k x_j}}
\]
be the $i$-th coordinate of $\nabla Z(\mathbf{p}^k)$. The solution to (14) is given by $\mathbf{q}^{k+1} = (q_1^{k+1}, \ldots, q_m^{k+1})$, where
\[
q_i^{k+1} = \frac{p_i^0 e^{\beta \xi_i(\mathbf{p}^k)}}{\sum_{j=1}^m p_j^0 e^{\beta \xi_j(\mathbf{p}^k)}}
\]
and $\beta < 0$ satisfies the equation
\[
\frac{\beta \sum_{i=1}^m p_i^0 \xi_i(\mathbf{p}^k) e^{\beta \xi_i(\mathbf{p}^k)}}{\sum_{j=1}^m p_j^0 e^{\beta \xi_j(\mathbf{p}^k)}} - \log \sum_{j=1}^m p_j^0 e^{\beta \xi_j(\mathbf{p}^k)} = \eta.
\]
If there is no negative root to this equation, then $\mathbf{q}^{k+1}$ is plainly a degenerate mass at $\arg\min_i \xi_i(\mathbf{p}^k)$. Therefore, the iterative procedure is the following:

Iterative Procedure: Start from the baseline distribution $\mathbf{p}^0$ (or any other distribution). At each iteration $k$, given $\mathbf{p}^k$, do the following:
1. Compute the root $\theta$ that solves
\[
\frac{\sum_{i=1}^m p_i^k x_i e^{\theta x_i}}{\sum_{i=1}^m p_i^k e^{\theta x_i}} = a.
\]
2. Compute
\[
\xi_i = -\frac{e^{\theta x_i}}{\sum_{j=1}^m p_j^k e^{\theta x_j}}
\]
for $i = 1, \ldots, m$.
3. Compute, if any, the negative root $\beta$ of
\[
\frac{\beta \sum_{i=1}^m p_i^0 \xi_i e^{\beta \xi_i}}{\sum_{j=1}^m p_j^0 e^{\beta \xi_j}} - \log \sum_{j=1}^m p_j^0 e^{\beta \xi_j} = \eta.
\]
4. If there is a negative root $\beta$, then
\[
q_i^{k+1} = \frac{p_i^0 e^{\beta \xi_i}}{\sum_{j=1}^m p_j^0 e^{\beta \xi_j}}
\]
for $i = 1, \ldots, m$. Otherwise $q_i^{k+1} = 1$ if $i = \arg\min_i \xi_i$, and $0$ for all other $i$'s.
5. Update $\mathbf{p}^{k+1} = (1 - \varepsilon_{k+1}) \mathbf{p}^k + \varepsilon_{k+1} \mathbf{q}^{k+1}$ for some step size $\varepsilon_{k+1}$ (by using the limited minimization rule or the Armijo rule; see p. 27, Bertsekas (1999)).

4 NUMERICAL EXPERIMENTS

We apply our algorithm to two standard discrete distributions with finite support, namely the binomial distribution and a discrete distribution with random weights. We compare the performance of the upper bound with i.i.d. constraints against the bound without i.i.d. constraints. Figure 1 shows the log-probabilities associated with a binomial model with parameters $m = 10$, $p = 0.5$ as $n$ increases; the true model is assumed to be binomial with $m = 10$, $p = 0.1$. Figure 2 shows the log-probabilities associated with a discrete distribution with random weights on the integer support from 1 to 10, using the MLE from a random sample as the base model. As can be seen, in both cases the upper bound with the i.i.d. constraint provides a much tighter bound on the real model than otherwise.

Figure 1: Binomial model (log-probability versus $n$; curves: upper bound with i.i.d. constraints, real model, base model, upper bound without constraints).
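The iterative procedure of Section 3.2 can be sketched compactly in Python. The instance below (support $\{1, \ldots, 5\}$, uniform baseline, $a = 4$, $\eta = 0.1$), the bisection tolerances, and the open-loop step size $\varepsilon_{k+1} = 2/(k+2)$ used in place of the limited-minimization or Armijo rule are all our own illustrative choices, not the paper's experimental setup:

```python
import math

def theta_root(p, x, a, lo=-50.0, hi=50.0):
    # Step 1: solve sum_i p_i x_i e^{theta x_i} / sum_i p_i e^{theta x_i} = a
    # by bisection; the left-hand side is increasing in theta.
    def g(th):
        w = [pi * math.exp(th * xi) for pi, xi in zip(p, x)]
        return sum(wi * xi for wi, xi in zip(w, x)) / sum(w) - a
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Z(p, x, a):
    # Outer objective (13): theta* a - log sum_i p_i e^{theta* x_i}.
    th = theta_root(p, x, a)
    return th * a - math.log(sum(pi * math.exp(th * xi) for pi, xi in zip(p, x)))

def fw_step(pk, p0, x, a, eta):
    # One conditional-gradient step: linearize Z at p^k, minimize over the KL ball.
    th = theta_root(pk, x, a)
    den = sum(pi * math.exp(th * xi) for pi, xi in zip(pk, x))
    grad = [-math.exp(th * xj) / den for xj in x]  # xi_i(p^k) from step 2
    shift = [g - min(grad) for g in grad]          # shifting grad leaves q unchanged
    def q_of(beta):
        w = [p0i * math.exp(beta * s) for p0i, s in zip(p0, shift)]
        tot = sum(w)
        return [wi / tot for wi in w]
    def phi(beta):                                 # KL(q_beta || p0) - eta
        q = q_of(beta)
        return sum(qi * math.log(qi / p0i) for qi, p0i in zip(q, p0) if qi > 0.0) - eta
    lo, hi = -200.0, 0.0
    if phi(lo) < 0.0:  # no negative root: degenerate mass at argmin of the gradient
        i_star = min(range(len(x)), key=lambda i: grad[i])
        return [1.0 if i == i_star else 0.0 for i in range(len(x))]
    for _ in range(200):                           # step 3: phi(0) = -eta < 0 <= phi(lo)
        mid = 0.5 * (lo + hi)
        if phi(mid) >= 0.0:
            lo = mid
        else:
            hi = mid
    return q_of(0.5 * (lo + hi))                   # step 4

# Hypothetical instance: uniform baseline on {1,...,5}, threshold a above the mean.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
p0 = [0.2] * 5
a, eta = 4.0, 0.1
p = list(p0)
for k in range(50):
    q = fw_step(p, p0, x, a, eta)
    eps = 2.0 / (k + 2.0)  # step 5: open-loop step size, in place of the Armijo rule
    p = [(1 - eps) * pi + eps * qi for pi, qi in zip(p, q)]
print(Z(p0, x, a), Z(p, x, a))
```

Because each $\mathbf{q}^{k+1}$ lies in the KL ball and $\mathcal{P}$ is convex, every iterate remains feasible; the printed pair shows the baseline rate $Z(\mathbf{p}^0)$ and the smaller worst-case rate found after 50 iterations.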
Figure 2: Random weights model (log-probability versus $n$; curves: upper bound with i.i.d. constraints, real model, base model, upper bound without constraints).

REFERENCES

Ben-Tal, A., D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. 2013. Robust solutions of optimization problems affected by uncertain probabilities. Management Science 59 (2).
Bertsekas, D. P. 1999. Nonlinear Programming. Belmont, MA: Athena Scientific.
Dembo, A., and O. Zeitouni. 1998. Large Deviations Techniques and Applications, Volume 2. Springer.
Dupuis, P., and R. Ellis. 1997. A Weak Convergence Approach to the Theory of Large Deviations. New York: John Wiley and Sons.
Frank, M., and P. Wolfe. 1956. An algorithm for quadratic programming. Naval Research Logistics Quarterly 3:95-110.
Glasserman, P., and X. Xu. 2014. Robust risk measurement and model risk. Quantitative Finance 14 (1).
Hansen, L., and T. Sargent. 2008. Robustness. Princeton, NJ: Princeton University Press.
Iyengar, G. N. 2005. Robust dynamic programming. Mathematics of Operations Research 30 (2).
Jain, A., A. E. Lim, and J. G. Shanthikumar. 2010. On the optimality of threshold control in queues with model uncertainty. Queueing Systems 65 (2).
Luenberger, D. 1997. Optimization by Vector Space Methods. New York: John Wiley and Sons.
Nilim, A., and L. El Ghaoui. 2005. Robust control of Markov decision processes with uncertain transition matrices. Operations Research 53 (5).
Petersen, I. R., M. R. James, and P. Dupuis. 2000. Minimax optimal control of stochastic uncertain systems with relative entropy constraints. IEEE Transactions on Automatic Control 45 (3).

AUTHOR BIOGRAPHIES

JOSE BLANCHET is an Associate Professor in the Departments of IEOR and Statistics at Columbia University. Jose holds a Ph.D. in Management Science and Engineering from Stanford University. Prior to joining Columbia he was a faculty member in the Statistics Department at Harvard University. Jose is a recipient of the 2009 Best Publication Award given by the INFORMS Applied Probability Society and of the 2010 Erlang Prize. He also received a PECASE award given by NSF in 2010.
He worked as an analyst in Protego Financial Advisors, a leading investment bank in Mexico. He has research interests in applied probability and Monte Carlo methods. He serves on the editorial boards of Advances in Applied Probability, Journal of Applied Probability, Mathematics of Operations Research, and QUESTA. His email address is .
CHRISTOPHER DOLAN is a Ph.D. student in the Department of Statistics at Columbia University. He received his BA and MS in Mathematics from the Courant Institute at NYU. His research interests are in applied probability and rare-event simulation. His email address is dolan@stat.columbia.edu.

HENRY LAM is an Assistant Professor in the Department of Mathematics and Statistics at Boston University. He graduated from Harvard University with a Ph.D. degree in statistics in 2011. His research focuses on large-scale stochastic simulation, rare-event analysis, and simulation optimization, with application interests in service systems and risk management. His email address is khlam@bu.edu.
Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the
More informationDistributionally Robust Stochastic Optimization with Wasserstein Distance
Distributionally Robust Stochastic Optimization with Wasserstein Distance Rui Gao DOS Seminar, Oct 2016 Joint work with Anton Kleywegt School of Industrial and Systems Engineering Georgia Tech What is
More informationProceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds.
Proceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds. A SIMULATION-BASED COMPARISON OF MAXIMUM ENTROPY AND COPULA
More informationLECTURE 15: COMPLETENESS AND CONVEXITY
LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other
More informationMaximization of the information divergence from the multinomial distributions 1
aximization of the information divergence from the multinomial distributions Jozef Juríček Charles University in Prague Faculty of athematics and Physics Department of Probability and athematical Statistics
More informationOn deterministic reformulations of distributionally robust joint chance constrained optimization problems
On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,
More informationDistirbutional robustness, regularizing variance, and adversaries
Distirbutional robustness, regularizing variance, and adversaries John Duchi Based on joint work with Hongseok Namkoong and Aman Sinha Stanford University November 2017 Motivation We do not want machine-learned
More informationMismatched Multi-letter Successive Decoding for the Multiple-Access Channel
Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@cam.ac.uk Alfonso Martinez Universitat Pompeu Fabra alfonso.martinez@ieee.org
More informationHandout 8: Dealing with Data Uncertainty
MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite
More informationTesting Restrictions and Comparing Models
Econ. 513, Time Series Econometrics Fall 00 Chris Sims Testing Restrictions and Comparing Models 1. THE PROBLEM We consider here the problem of comparing two parametric models for the data X, defined by
More informationAssessing financial model risk
Assessing financial model risk and an application to electricity prices Giacomo Scandolo University of Florence giacomo.scandolo@unifi.it joint works with Pauline Barrieu (LSE) and Angelica Gianfreda (LBS)
More informationThe properties of L p -GMM estimators
The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion
More informationUncertainty. Jayakrishnan Unnikrishnan. CSL June PhD Defense ECE Department
Decision-Making under Statistical Uncertainty Jayakrishnan Unnikrishnan PhD Defense ECE Department University of Illinois at Urbana-Champaign CSL 141 12 June 2010 Statistical Decision-Making Relevant in
More informationMonotonicity and Aging Properties of Random Sums
Monotonicity and Aging Properties of Random Sums Jun Cai and Gordon E. Willmot Department of Statistics and Actuarial Science University of Waterloo Waterloo, Ontario Canada N2L 3G1 E-mail: jcai@uwaterloo.ca,
More informationSurrogate loss functions, divergences and decentralized detection
Surrogate loss functions, divergences and decentralized detection XuanLong Nguyen Department of Electrical Engineering and Computer Science U.C. Berkeley Advisors: Michael Jordan & Martin Wainwright 1
More informationInformation Geometry
2015 Workshop on High-Dimensional Statistical Analysis Dec.11 (Friday) ~15 (Tuesday) Humanities and Social Sciences Center, Academia Sinica, Taiwan Information Geometry and Spontaneous Data Learning Shinto
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationA minimalist s exposition of EM
A minimalist s exposition of EM Karl Stratos 1 What EM optimizes Let O, H be a random variables representing the space of samples. Let be the parameter of a generative model with an associated probability
More informationU Logo Use Guidelines
Information Theory Lecture 3: Applications to Machine Learning U Logo Use Guidelines Mark Reid logo is a contemporary n of our heritage. presents our name, d and our motto: arn the nature of things. authenticity
More informationRandomized Algorithms for Semi-Infinite Programming Problems
Randomized Algorithms for Semi-Infinite Programming Problems Vladislav B. Tadić 1, Sean P. Meyn 2, and Roberto Tempo 3 1 Department of Automatic Control and Systems Engineering University of Sheffield,
More informationExpectation Propagation for Approximate Bayesian Inference
Expectation Propagation for Approximate Bayesian Inference José Miguel Hernández Lobato Universidad Autónoma de Madrid, Computer Science Department February 5, 2007 1/ 24 Bayesian Inference Inference Given
More informationSTATISTICAL CURVATURE AND STOCHASTIC COMPLEXITY
2nd International Symposium on Information Geometry and its Applications December 2-6, 2005, Tokyo Pages 000 000 STATISTICAL CURVATURE AND STOCHASTIC COMPLEXITY JUN-ICHI TAKEUCHI, ANDREW R. BARRON, AND
More informationC.7. Numerical series. Pag. 147 Proof of the converging criteria for series. Theorem 5.29 (Comparison test) Let a k and b k be positive-term series
C.7 Numerical series Pag. 147 Proof of the converging criteria for series Theorem 5.29 (Comparison test) Let and be positive-term series such that 0, for any k 0. i) If the series converges, then also
More informationProceedings of the 2015 Winter Simulation Conference L. Yilmaz, W. K. V. Chan, I. Moon, T. M. K. Roeder, C. Macal, and M. D. Rossetti, eds.
Proceedings of the 2015 Winter Simulation Conference L. Yilmaz, W. K. V. Chan, I. Moon, T. M. K. Roeder, C. Macal, and M. D. Rossetti, eds. STOCHASTIC OPTIMIZATION USING HELLINGER DISTANCE Anand N. Vidyashankar
More informationInfinitely Imbalanced Logistic Regression
p. 1/1 Infinitely Imbalanced Logistic Regression Art B. Owen Journal of Machine Learning Research, April 2007 Presenter: Ivo D. Shterev p. 2/1 Outline Motivation Introduction Numerical Examples Notation
More informationSeries 7, May 22, 2018 (EM Convergence)
Exercises Introduction to Machine Learning SS 2018 Series 7, May 22, 2018 (EM Convergence) Institute for Machine Learning Dept. of Computer Science, ETH Zürich Prof. Dr. Andreas Krause Web: https://las.inf.ethz.ch/teaching/introml-s18
More informationOther properties of M M 1
Other properties of M M 1 Přemysl Bejda premyslbejda@gmail.com 2012 Contents 1 Reflected Lévy Process 2 Time dependent properties of M M 1 3 Waiting times and queue disciplines in M M 1 Contents 1 Reflected
More informationSpring 2017 Econ 574 Roger Koenker. Lecture 14 GEE-GMM
University of Illinois Department of Economics Spring 2017 Econ 574 Roger Koenker Lecture 14 GEE-GMM Throughout the course we have emphasized methods of estimation and inference based on the principle
More informationStat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016
Stat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016 1 Theory This section explains the theory of conjugate priors for exponential families of distributions,
More informationClosest Moment Estimation under General Conditions
Closest Moment Estimation under General Conditions Chirok Han Victoria University of Wellington New Zealand Robert de Jong Ohio State University U.S.A October, 2003 Abstract This paper considers Closest
More informationLikelihood robust optimization for data-driven problems
Comput Manag Sci (2016) 13:241 261 DOI 101007/s10287-015-0240-3 ORIGINAL PAPER Likelihood robust optimization for data-driven problems Zizhuo Wang 1 Peter W Glynn 2 Yinyu Ye 2 Received: 9 August 2013 /
More informationTheory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk
Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments
More informationVerifying Regularity Conditions for Logit-Normal GLMM
Verifying Regularity Conditions for Logit-Normal GLMM Yun Ju Sung Charles J. Geyer January 10, 2006 In this note we verify the conditions of the theorems in Sung and Geyer (submitted) for the Logit-Normal
More informationis a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications.
Stat 811 Lecture Notes The Wald Consistency Theorem Charles J. Geyer April 9, 01 1 Analyticity Assumptions Let { f θ : θ Θ } be a family of subprobability densities 1 with respect to a measure µ on a measurable
More informationOn the Asymptotic Optimality of Empirical Likelihood for Testing Moment Restrictions
On the Asymptotic Optimality of Empirical Likelihood for Testing Moment Restrictions Yuichi Kitamura Department of Economics Yale University yuichi.kitamura@yale.edu Andres Santos Department of Economics
More informationDistributionally Robust Convex Optimization
Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,
More informationP i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=
2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]
More informationRandom Convex Approximations of Ambiguous Chance Constrained Programs
Random Convex Approximations of Ambiguous Chance Constrained Programs Shih-Hao Tseng Eilyan Bitar Ao Tang Abstract We investigate an approach to the approximation of ambiguous chance constrained programs
More informationProceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds.
Proceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds. OPTIMAL COMPUTING BUDGET ALLOCATION WITH INPUT UNCERTAINTY
More informationNonconcave Penalized Likelihood with A Diverging Number of Parameters
Nonconcave Penalized Likelihood with A Diverging Number of Parameters Jianqing Fan and Heng Peng Presenter: Jiale Xu March 12, 2010 Jianqing Fan and Heng Peng Presenter: JialeNonconcave Xu () Penalized
More informationStochastic Realization of Binary Exchangeable Processes
Stochastic Realization of Binary Exchangeable Processes Lorenzo Finesso and Cecilia Prosdocimi Abstract A discrete time stochastic process is called exchangeable if its n-dimensional distributions are,
More informationReformulation of chance constrained problems using penalty functions
Reformulation of chance constrained problems using penalty functions Martin Branda Charles University in Prague Faculty of Mathematics and Physics EURO XXIV July 11-14, 2010, Lisbon Martin Branda (MFF
More informationLihua Sun. Department of Economics and Finance, School of Economics and Management Tongji University 1239 Siping Road Shanghai , China
Proceedings of the 20 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. OPTIMIZATION VIA SIMULATION USING GAUSSIAN PROCESS-BASED SEARCH Lihua Sun Department
More informationOrdinal optimization - Empirical large deviations rate estimators, and multi-armed bandit methods
Ordinal optimization - Empirical large deviations rate estimators, and multi-armed bandit methods Sandeep Juneja Tata Institute of Fundamental Research Mumbai, India joint work with Peter Glynn Applied
More informationChapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1)
Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Detection problems can usually be casted as binary or M-ary hypothesis testing problems. Applications: This chapter: Simple hypothesis
More informationOptimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints
Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie 1, Shabbir Ahmed 2, Ruiwei Jiang 3 1 Department of Industrial and Systems Engineering, Virginia Tech,
More informationBandits : optimality in exponential families
Bandits : optimality in exponential families Odalric-Ambrym Maillard IHES, January 2016 Odalric-Ambrym Maillard Bandits 1 / 40 Introduction 1 Stochastic multi-armed bandits 2 Boundary crossing probabilities
More informationEXACT SAMPLING OF THE INFINITE HORIZON MAXIMUM OF A RANDOM WALK OVER A NON-LINEAR BOUNDARY
Applied Probability Trust EXACT SAMPLING OF THE INFINITE HORIZON MAXIMUM OF A RANDOM WALK OVER A NON-LINEAR BOUNDARY JOSE BLANCHET, Columbia University JING DONG, Northwestern University ZHIPENG LIU, Columbia
More informationNonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix
Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix Yingying Dong and Arthur Lewbel California State University Fullerton and Boston College July 2010 Abstract
More informationDistributionally Robust Discrete Optimization with Entropic Value-at-Risk
Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Daniel Zhuoyu Long Department of SEEM, The Chinese University of Hong Kong, zylong@se.cuhk.edu.hk Jin Qi NUS Business School, National
More informationData-Driven Distributionally Robust Chance-Constrained Optimization with Wasserstein Metric
Data-Driven Distributionally Robust Chance-Constrained Optimization with asserstein Metric Ran Ji Department of System Engineering and Operations Research, George Mason University, rji2@gmu.edu; Miguel
More informationEstimating divergence functionals and the likelihood ratio by convex risk minimization
Estimating divergence functionals and the likelihood ratio by convex risk minimization XuanLong Nguyen Dept. of Statistical Science Duke University xuanlong.nguyen@stat.duke.edu Martin J. Wainwright Dept.
More informationMath Camp II. Calculus. Yiqing Xu. August 27, 2014 MIT
Math Camp II Calculus Yiqing Xu MIT August 27, 2014 1 Sequence and Limit 2 Derivatives 3 OLS Asymptotics 4 Integrals Sequence Definition A sequence {y n } = {y 1, y 2, y 3,..., y n } is an ordered set
More informationUsing Markov Chains To Model Human Migration in a Network Equilibrium Framework
Using Markov Chains To Model Human Migration in a Network Equilibrium Framework Jie Pan Department of Mathematics and Computer Science Saint Joseph s University Philadelphia, PA 19131 Anna Nagurney School
More informationδ -method and M-estimation
Econ 2110, fall 2016, Part IVb Asymptotic Theory: δ -method and M-estimation Maximilian Kasy Department of Economics, Harvard University 1 / 40 Example Suppose we estimate the average effect of class size
More informationInference in non-linear time series
Intro LS MLE Other Erik Lindström Centre for Mathematical Sciences Lund University LU/LTH & DTU Intro LS MLE Other General Properties Popular estimatiors Overview Introduction General Properties Estimators
More information