EXTENDING MODELING SYSTEMS: STRUCTURE AND SOLUTION


Michael C. Ferris, Computer Sciences Department, University of Wisconsin, Madison, WI
Steven P. Dirkse, GAMS Corporation, 1217 Potomac Street, Washington, DC
Jan Jagla, GAMS Software GmbH, Eupener Str. , Cologne, Germany
Alexander Meeraus, GAMS Corporation, 1217 Potomac Street, Washington, DC 2007

Abstract: Some extensions of a modeling system are described that facilitate higher level structure identification within a model. These structures, which often involve complementarity relationships, can be exploited by modern large scale mathematical programming algorithms. A specific implementation of an extended mathematical programming tool is outlined that communicates structure in a computationally beneficial manner from the GAMS modeling system to an appropriate solver.

Keywords: Modeling, complementarity, nonlinear programming, variational inequalities

Introduction

Chemical engineering has been a user of optimization techniques and technology for several decades. Many chemical engineers have made significant contributions to the field of optimization, and in several cases have developed software that has had impact far beyond the chemical engineering discipline. Packages such as DICOPT (Duran and Grossman, 1986), BARON (Tawarmalani and Sahinidis, 2004), ALPHAECP (Westerlund et al., 1998) and IPOPT (Wächter and Biegler, 2006) have influenced the debate on tractability of hard, practical nonlinear optimization problems, setting the gold standard for the solution of nonconvex, global optimization problems. Accessing these solvers, and many of the other algorithms that have been developed over the past three decades, has been made easier by the advent of modeling languages. A modeling language (Bisschop and Meeraus, 1982; Fourer et al., 1990) provides a natural, convenient way to represent mathematical programs.
Modeling languages typically have efficient procedures to handle vast amounts of data, take advantage of the numerous options for solvers and model types, and can quickly generate a large number of models. For this reason, they are heavily used in practical applications. Although we will use GAMS (Brooke et al., 1988), the system we are intimately familiar with, most of what will be said applies equally well to other algebra-based modeling systems such as AIMMS, AMPL, MOSEL, MPL and OPL. We believe that further advancements in applications of optimization can be achieved via identification of specific problem structures within a model. In some cases, such structures can be automatically extracted from a model that is formulated in a modeling system, but in general it may be difficult to tease out particular structures from a large complex model. As a concrete example, we cite the domain of applied general equilibrium modeling within economics, where a model consists of a collection of interacting agents, each of which maximizes some utility
function of some demanded goods, which in turn are generated from an optimization of production functions. Since current solvers require the (economic) modeler to formulate the problem as a complementarity problem or a system of nonlinear equations, the utility function is maximized (by hand) to derive demands, which are normally rather complex nonlinear functions; similarly for the supplies that are derived from a cost minimization involving the production functions. It is typically difficult to determine the structure of the underlying model from the resulting nonlinear complementarity system. This paper aims to alleviate this difficulty. The purpose of our work is to extend the classical nonlinear program from the traditional model

min_x f(x)  s.t.  g(x) ≤ 0,  h(x) = 0,   (1)

where f, g and h are assumed sufficiently smooth, to a more general format that allows new constraint types to be specified precisely. Some extensions of this format have already been incorporated into modeling systems. There is support for integer, SOS, and semicontinuous variables, and some limited support for logical constructs. GAMS (Ferris and Munson, 2000), AMPL (Ferris et al., 1999) and AIMMS have support for complementarity constraints, and there are some extensions that allow the formulation of second-order cone programs within GAMS. AMPL (Fourer et al., 1993) has facilities to model piecewise linear functions. Much of this development is tailored to specific structures. We aim to develop more general annotation schemes that allow extended mathematical programs to be written clearly and succinctly. In the following sections we outline some new features of GAMS, which we refer to as extended mathematical programs (EMP), that incorporate many of the extensions mentioned above but also allow a variety of other structures to be described at the modeling level. We believe such extensions may have benefits on several levels.
Firstly, we think this will make the modeler's task easier, in that the model can be described more naturally and perhaps at a higher level. (Of course, there are several examples of this already in the literature, including the use of specialized languages such as MPSGE (Rutherford, 1999) to facilitate general equilibrium models, and specialized (graphical) interfaces for queueing system or process system design.) Secondly, the automatic generation of the model can be done more reliably, based on techniques such as automatic differentiation and problem reformulation: duality constructs, or specific ways to write certain constraints. Thirdly, if an algorithm is given additional structure, it may be able to exploit that in an effective computational manner; knowing that the problem is a cone program, or that it involves the optimality conditions of a nonlinear program, allows it to be treated in a variety of different ways, some of which may be distinctly superior to others in certain settings. Indeed, the availability of such structure to a solver may well foster the addition of new features to existing solvers or drive the development of new classes of algorithms.

Extended Mathematical Programs

Complementarity Problems

The necessary and sufficient optimality conditions for the linear program

min_x cᵀx  s.t.  Ax ≥ b,  x ≥ 0   (2)

are that x and some λ satisfy

0 ≤ c − Aᵀλ ⊥ x ≥ 0
0 ≤ Ax − b ⊥ λ ≥ 0.   (3)

Here, the ⊥ sign signifies (for example) that, in addition to the constraints 0 ≤ Ax − b and λ ≥ 0, each of the products (Ax − b)_i λ_i is constrained to be zero. An equivalent viewpoint is that either (Ax − b)_i = 0 or λ_i = 0. Within GAMS, these constraints can be modeled simply as

positive variables lambda, x;
model complp / defd.x, defp.lambda /;

where defp and defd are the equations that define primal and dual feasibility (Ax ≥ b, c − Aᵀλ ≥ 0) respectively. Other linear programs with specialized constraint structure are just as easy to specify. For example,

min_x cᵀx  s.t.
Ax = b,  x ∈ [l, u]

has similarly expressed optimality conditions:

0 ≤ (c − Aᵀλ)_j  if x_j = l_j
0 = (c − Aᵀλ)_j  if l_j < x_j < u_j
0 ≥ (c − Aᵀλ)_j  if x_j = u_j
0 = Ax − b,  λ free.   (4)

Note that the first three complementarity relationships in (4) can be written more succinctly as (c − Aᵀλ) ⊥ x ∈ [l, u]. This is translated into GAMS syntax as follows:

variables lambda, x;
x.lo(i) = l(i); x.up(i) = u(i);
model complp / defd.x, defp.lambda /;

Such a problem is an instance of a linear mixed complementarity problem, for which we use the acronym MCP. Note that the bounds on the variables x determine the nature of the relationship on c − Aᵀλ at the solution. (It is possible to introduce explicit multipliers on the constraints x ≥ l and x ≤ u, and to rewrite the optimality conditions in terms of x, λ, and these multipliers. The ⊥ notation enables us to write these relationships much more succinctly.) Complementarity problems do not have to arise as the optimality conditions of a linear program; the optimality conditions of the nonlinear program (1) are the following MCP:

0 = ∇f(x) + λᵀ∇g(x) + µᵀ∇h(x),  x free
0 ≥ g(x) ⊥ λ ≥ 0
0 = h(x),  µ free.   (5)

Many examples are no longer simply the optimality conditions of an optimization problem. A specific example arises in chemical phase equilibrium. In this setting, different conditions are satisfied at an equilibrium depending on whether we are in a vapor, liquid or two-phase state. Letting α represent the fraction in vapor, the problem is to find

f(α) ⊥ α ∈ [0, 1],  where  f(α) = Σ_i (x_i − K_i x_i),  x_i = z_i / (K_i α + 1 − α),

for given data K_i and z_i. Ferris and Pang (1997) catalogue a number of other applications, both in engineering and economics, that can be written in a similar format. It should be noted that robust large scale solvers exist for such problems; see for example Ferris and Munson (2000), where a description is given of the PATH solver.

Mathematical Programs with Complementarity Constraints

A mathematical program with complementarity constraints is the following:

min_{x∈Rⁿ, y∈Rᵐ} f(x, y)   (6)
s.t.  g(x, y) ≤ 0   (7)
      0 ≤ y ⊥ h(x, y) ≥ 0.   (8)

The objective function (6) needs no further description, except to state that the solution techniques we intend to apply require that f (and g and h) are at least once differentiable, and for some solvers twice differentiable.
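The phase-equilibrium condition above is a one-dimensional mixed complementarity problem over the interval [0, 1], and for the standard two-phase model the residual f is monotone increasing in α, so a bisection sketch suffices to illustrate how such a problem is solved. The data K and z below are invented for illustration; a production model would hand the MCP to PATH or a similar solver.

```python
def solve_box_mcp(f, lo=0.0, hi=1.0, iters=200):
    """Solve the 1-D box MCP: find a in [lo, hi] such that
       f(a) >= 0 if a == lo, f(a) == 0 if lo < a < hi, f(a) <= 0 if a == hi.
    A sketch only: assumes f is continuous and increasing on [lo, hi]."""
    if f(lo) >= 0.0:            # lower bound is already complementary
        return lo
    if f(hi) <= 0.0:            # upper bound is already complementary
        return hi
    for _ in range(iters):      # f(lo) < 0 < f(hi): bisect for the root
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical two-component data (K_i, z_i are illustrative values only).
K = [4.0, 0.5]
z = [0.5, 0.5]

def f(alpha):
    # f(alpha) = sum_i (x_i - K_i x_i), with x_i = z_i / (K_i alpha + 1 - alpha)
    return sum(zi * (1.0 - Ki) / (Ki * alpha + 1.0 - alpha)
               for Ki, zi in zip(K, z))

alpha = solve_box_mcp(f)   # vapor fraction; an interior root for this data
```

For this data f changes sign inside (0, 1), so the solution is interior; if every K_i exceeded one (all vapor), f would be negative throughout and the method would correctly return the upper bound α = 1.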
The constraints (7) are intended to represent standard nonlinear programming constraints. Clearly, these could involve equalities with a slight increase in expositional complexity. The constraints of interest here are the complementarity constraints (8). Essentially, these are parametric constraints (parameterized by x) on the variable y, and encode the structure that y is a solution to the nonlinear complementarity problem defined by h(x, ·). Within the GAMS modeling system, this can be written simply and directly as:

model mpecmod / deff, defg, defh.y /;
option mpec=nlpec;
solve mpecmod using mpec minimizing obj;

Here it is assumed that the objective (6) is defined in the equation deff, the general constraints (7) are defined in defg, and the function h is described by defh. The complementarity relationship is defined by the bounds on y and the orthogonality relationship shown in the model declaration using the dot notation (defh.y). AMPL provides a slightly different but equivalent syntax for this; see Ferris et al. (1999). The problem is frequently called a mathematical program with complementarity constraints (MPCC). Some solvers can process complementarity constraints explicitly. In many cases, this is achieved by a reformulation of the constraints (8) into the classical form given within (1). Dirkse and Ferris (2008) outline a variety of ways to carry this out, all of which have been encoded in a solver package called NLPEC. While a large number of different reformulations are possible, the following parametric approach, coupled with the use of the nonlinear programming solver CONOPT or SNOPT, has proven effective in a large number of applications:

min_{x∈Rⁿ, y∈Rᵐ, s∈Rᵐ} f(x, y)
s.t.  g(x, y) ≤ 0
      s = h(x, y)
      y ≥ 0,  s ≥ 0
      y_i s_i ≤ µ,  i = 1, ..., m.

Note that a series of approximate problems are produced, parameterized by µ > 0; each of these approximate problems has stronger theoretical properties than the problem with µ = 0 (Ralph and Wright, 2004).
A solution procedure whereby µ is successively reduced can be implemented as a simple option file
to NLPEC, and this has proven remarkably effective. Further details can be found in the NLPEC documentation (Dirkse and Ferris, 2008). The approach has been used to effectively optimize the rig in a sailboat design (Wallace et al., 2006). It is also possible to generalize the above complementarity condition to a mixed complementarity condition; details can be found in Ferris et al. (2005). Underlying the NLPEC solver package is an automatic conversion of the original problem into a standard nonlinear program, which is carried out at the scalar model level. The technology to perform this conversion forms the core of the codes that we use to implement the model extensions of the sequel.

Variational Inequalities

A variational inequality VI(F, X) is to find x ∈ X such that

F(x)ᵀ(z − x) ≥ 0,  for all z ∈ X.

Here X is a closed (frequently assumed convex) set, defined for example as X = {x | x ≥ 0, g(x) ≤ 0}. Note that the first-order (minimum principle) conditions of a nonlinear program min_{z∈X} f(z) are precisely of this form with F(x) = ∇f(x). For concreteness, these conditions are necessary and sufficient for optimality in linear programming. Solving the linear program (2) is equivalent to solving the variational inequality given by

F(x) = c,  X = {x | Ax ≥ b, x ≥ 0}.   (9)

In this case, F is simply a constant function. While a large number of instances of the problem arise from optimization applications, there are many cases where F is not the gradient of any function f. For example, asymmetric traffic equilibrium problems have this format, where the asymmetry arises, for example, due to different costs associated with left or right hand turns. A complete treatment of the theory and algorithms in this domain can be found in Facchinei and Pang (2003). Variational inequalities are intimately connected with the concept of the normal cone to a set S, for which a number of authors have provided a rich calculus.
Instead of overloading the reader with more notation, however, we simply refer to the seminal work in this area, Rockafellar and Wets (1998). Such a problem can be reformulated as a complementarity problem by introducing multipliers λ on the constraints g:

0 ≤ F(x) + λᵀ∇g(x) ⊥ x ≥ 0
0 ≥ g(x) ⊥ λ ≥ 0.

If X has a different representation, this construction would be modified appropriately. In the linear programming example above (9), these conditions are precisely those given as (3). When X is the nonnegative orthant, the VI is just an alternative way to state a complementarity problem. However, when X is a more general set, it may be possible to treat it differently than by simply introducing multipliers; see for example Cao and Ferris (1996). For example, when X is a polyhedral set, algorithms may wish to generate iterates via projection onto X. A simple two-dimensional example may be useful. Let

F(x) = (−x₂, x₁ + x₂ − 3),  X = {x ≥ 0 | x₁ + x₂ ≤ 1},

so that F is an affine function, but F is not the gradient of any function f: R² → R. For this particular data, VI(F, X) has the unique solution x = (0, 1). Such a variational inequality can be described in GAMS via the model statement

model vi / deff, defg /;

combined with an annotation file that indicates certain equations are to be treated differently by the EMP tool. In this case, the empinfo file states that the model equations deff define a function F that is to be part of a variational inequality, while the equations defg define constraints on X. Details of the specific syntax can be found in Jagla et al. (2008).

Mathematical Programs with Equilibrium Constraints

Mathematical programs with equilibrium constraints are a generalization of the aforementioned MPCC problem class. The difference is that the lower level problem, instead of being a complementarity problem, is now a variational inequality. Coupling the two approaches mentioned above results in the ability to model and solve such problems.
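The two-dimensional example just given can be solved by exactly such a projection approach. The sketch below applies the extragradient method (two projections per iteration) to the affine data F(x) = (−x₂, x₁ + x₂ − 3) on X = {x ≥ 0 | x₁ + x₂ ≤ 1}; the step size and iteration count are ad hoc choices for this instance, not tuned recommendations.

```python
def F(x):
    # affine map with nonsymmetric Jacobian [[0, -1], [1, 1]]: not a gradient
    return [-x[1], x[0] + x[1] - 3.0]

def project_X(x):
    """Euclidean projection onto X = {x >= 0, x1 + x2 <= 1} (exact in 2-d)."""
    y = [max(x[0], 0.0), max(x[1], 0.0)]
    if y[0] + y[1] <= 1.0:
        return y
    # otherwise the projection lies on the face {y >= 0, y1 + y2 = 1}:
    # shift both coordinates equally, then clip to a vertex if needed
    t = (x[0] + x[1] - 1.0) / 2.0
    y = [x[0] - t, x[1] - t]
    if y[0] < 0.0:
        return [0.0, 1.0]
    if y[1] < 0.0:
        return [1.0, 0.0]
    return y

x = [0.5, 0.5]
gamma = 0.5                      # step size below 1/L for this Lipschitz F
for _ in range(100):             # extragradient: predictor step, then corrector
    Fx = F(x)
    y = project_X([x[0] - gamma * Fx[0], x[1] - gamma * Fx[1]])
    Fy = F(y)
    x = project_X([x[0] - gamma * Fy[0], x[1] - gamma * Fy[1]])
# x should now be (essentially) the solution (0, 1)
```

Since F here is monotone and Lipschitz, extragradient theory guarantees convergence for a sufficiently small step; for this instance the iterates in fact hit the vertex (0, 1) after only a few iterations.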
Bilevel Programs

Hierarchical optimization has recently become important for a number of different applications. New codes are being developed that exploit this structure,
at least for simple hierarchies, and attempt to define and implement algorithms for their solution. The simplest case is that of bilevel programming, where an upper level problem depends on the solution of a lower level optimization. For example:

min_{x,y} f(x, y)
s.t.  g(x, y) ≤ 0,
      y solves  min_s v(x, s)  s.t.  h(x, s) ≤ 0.

This problem can be reformulated as an MPEC by replacing the lower level optimization problem by its optimality conditions:

min_{x,y} f(x, y)
s.t.  g(x, y) ≤ 0,
      y solves  VI(∇_s v(x, ·), X),  where X = {s | h(x, s) ≤ 0}.

This approach then allows such problems to be solved using the NLPEC code, for example. However, there are several possible deficiencies that should be noted. Firstly, the optimality conditions encompassed in the VI may not have a solution, or their solution may only be necessary (and not sufficient) for optimality. Secondly, the MPEC solver may only find local solutions to the problem. The quest for practical optimality conditions and robust global solvers remains an active area of research. Importantly, the EMP tool will provide the underlying structure of the model to a solver if these advances determine appropriate ways to exploit it. We can model this bilevel program in GAMS by

model bilev /deff,defg,defv,defh/;

along with some extra annotations to a subset of the model defining equations. Specifically, within an empinfo file we state that the bilevel problem involves the objective v, which is to be minimized subject to the constraints specified in defh. The specific syntax is described in Jagla et al. (2008). A point that has been glossed over here, but which is described carefully in the user manual, is the process whereby multiple lower level problems are specified.

Extended Nonlinear Programs

Optimization models have traditionally been of the form (1). Specialized codes have allowed certain problem structures to be exploited algorithmically, for example simple bounds on variables.
In a series of papers, Rockafellar and colleagues have introduced the notion of extended nonlinear programming, where the (primal) problem has the form:

min_{x∈X} f₀(x) + θ(f₁(x), ..., f_m(x)).   (10)

In this setting, X is assumed to be a nonempty polyhedral set, and the functions f₀, f₁, ..., f_m are smooth. The function θ can be thought of as a generalized penalty function. Specifically, when θ has the form

θ(u) = sup_{y∈Y} {yᵀu − k(y)},   (11)

we obtain a computationally exploitable and theoretically powerful framework. A key point for computation and modeling is that the function θ can be fully described by defining the set Y and the function k. Furthermore, as we show below, different choices lead to a rich variety of functions θ, many of which are extremely useful for modeling. In the above setting θ can take on the value +∞ and may well be nonsmooth, but it is guaranteed to be convex (and proper and lower semicontinuous when Y is nonempty and k is smooth and convex). Furthermore, from a modeling perspective, an extended nonlinear program can be specified simply by defining the functions f₀, f₁, ..., f_m as is done already, with the additional issue of simply defining Y and k. Conceptually, this is not much harder than what is done already, but it leads to significant enhancements to the types of models that are available. This paper outlines an approach to do this within the GAMS modeling system for a number of different choices of Y and k. The tool we provide works in this setting by supplying a library of functions θ, that is, a library of choices for k and Y. Once a modeler determines which constraints are treated via which choice of k and Y, the tool automatically forms an equivalent variational inequality or complementarity problem. As we show later, there may be alternative formulations that are computationally more appealing; such reformulations can be generated using different options to our tool.
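The pair (Y, k) in (11) is easy to experiment with numerically. The sketch below approximates θ(u) = sup_{y∈Y}(yu − k(y)) for a scalar u by maximizing over a grid of Y; the particular k and interval Y are arbitrary illustrative choices, and grid search is used only because it is transparent, not because it is efficient.

```python
def theta(u, k, y_grid):
    # theta(u) = sup_{y in Y} (y*u - k(y)), approximated on a finite grid of Y
    return max(y * u - k(y) for y in y_grid)

# illustrative instance: k(y) = y^2/4 (i.e. gamma = 1) and Y = [-2, 2]
y_grid = [-2.0 + 4.0 * i / 4000 for i in range(4001)]
k = lambda y: y * y / 4.0

# for this choice theta(u) = u^2 on [-1, 1], with linear tails 2|u| - 1 outside
vals = [theta(u, k, y_grid) for u in (-3.0, -0.5, 0.0, 0.5, 3.0)]
```

Evaluating at a few points reproduces the expected quadratic-with-linear-tails shape, and the sampled function is convex, as the theory promises for any choice of convex k and closed convex Y.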
Forms of θ

The EMP tool makes this problem format available to users in GAMS. As special cases, we can model piecewise linear penalties, least squares and L₁ approximation problems, as well as the notion of soft
and hard constraints. We allow modelers to utilize cone constraints and pass on the underlying geometric structure to solvers. Particular examples show enormous promise, both from a modeling and a solution perspective. For ease of exposition, we now describe a subset of the types of functions θ that can be generated by particular choices of Y and k. In many cases, the function θ is separable, that is,

θ(u) = Σ_{i=1}^m θ_i(u_i),

so we can specify either the θ_i or θ itself. Extended nonlinear programs include the classical nonlinear programming form (1) as a special case. This follows from the observation that if K is a closed convex cone, and we let ψ_K denote the indicator function of K defined by

ψ_K(u) = 0 if u ∈ K,  +∞ else,

then (1) can be rewritten as

min_x f(x) + ψ_K((g(x), h(x))),  K = Rᵐ₋ × {0}ᵖ,

where m and p are the dimensions of g and h respectively and Rᵐ₋ = {u ∈ Rᵐ | u ≤ 0}. An elementary calculation shows that ψ_K(u) = sup_{v∈K°} uᵀv, where K° = {u | uᵀv ≤ 0, ∀v ∈ K} is the polar cone of the given cone K. Thus, when θ(u) = ψ_K(u) we simply take

k ≡ 0 and Y = K°.   (12)

In our example, K° = Rᵐ₊ × Rᵖ. To some extent, this is just a formalism that allows us to claim the classical case as a specialization; however, when we take the cone K to be more general than the polyhedral cone used above, we can generate conic programs (see below), for example. The second example involves a piecewise linear function θ. Formally, for u ∈ R,

θ(u) = ρu if u ≥ 0,  σu else.

In this case, simple calculations show that θ has the form (11) for the choices k ≡ 0 and Y = [σ, ρ]. The special case σ = −ρ results in θ(u) = ρ|u|. This allows us to model nonsmooth L₁ approximation problems. Another special case results from the choice σ = 0, whereby θ(u) = ρ max{u, 0}. This formulation corresponds to a soft penalization of an inequality constraint: if θ(f₁(x)) is used, then nothing is added to the objective function if f₁(x) ≤ 0, but ρf₁(x) is added if the constraint f₁(x) ≤ 0 is violated.
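Since k ≡ 0 makes yᵀu − k(y) linear in y, the supremum over [σ, ρ] is attained at an endpoint, which yields the piecewise linear θ directly. A tiny sketch (the values of σ and ρ are arbitrary, with σ ≤ ρ):

```python
def theta_pl(u, sigma, rho):
    # theta(u) = sup_{y in [sigma, rho]} y*u  with k == 0:
    # linear in y, so the sup sits at an endpoint of the interval
    return max(sigma * u, rho * u)

# rho*u on the right, sigma*u on the left
r1 = theta_pl(2.0, -1.0, 3.0)    # rho * u    = 6.0
r2 = theta_pl(-2.0, -1.0, 3.0)   # sigma * u  = 2.0
# sigma = -rho gives the L1 penalty rho*|u|; sigma = 0 a one-sided penalty
r3 = theta_pl(-1.5, -2.0, 2.0)   # 2 * |u|    = 3.0
r4 = theta_pl(-1.5, 0.0, 2.0)    # max(2u, 0) = 0.0
```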
Contrast this with the classical setting above, where +∞ is added to the objective if the inequality constraint is violated. It is interesting to see that truncating the set Y, which amounts to bounding the multipliers, results in replacing the classical constraint by a linearized penalty. The third example involves a more interesting choice of k. If we wish to replace the absolute value penalization given above by a quadratic penalization (as in classical least squares analysis), that is θ(u) = γu², then a simple calculation shows that we should take k(y) = y²/(4γ) and Y = R. By simply specifying this different choice of k and Y we can generate such models easily and quickly within the modeling system; note however that the reformulations we would use in these two cases are very different, as we shall explain in the simple example below. Furthermore, in many applications it has become popular to penalize violations using a quadratic penalty only within a certain interval, afterwards switching to a linear penalty (chosen to make the penalty function θ continuously differentiable; see Huber (1981)). That is:
θ(u) = γu − ½γ²  if u ≥ γ,
θ(u) = ½u²  if u ∈ [−γ, γ],
θ(u) = −γu − ½γ²  else.

Such functions arise from quadratic k and simple bound sets Y. In particular, the function

θ(u) = γβ² + ρ(u − β)  if u ≥ β,
θ(u) = γu²  if u ∈ [α, β],
θ(u) = γα² + σ(u − α)  else,

arises from the choice of k(y) = y²/(4γ) and Y = [σ, ρ], with α = σ/(2γ) and β = ρ/(2γ). The final example that we give is that of L∞ penalization. This example differs from those given above in that θ is not separable. However, a straightforward calculation can be used to show that

θ(u) = max_{i=1,...,m} u_i

results from the choice of k ≡ 0 and Y = {y ∈ Rᵐ | y ≥ 0, Σ_{i=1}^m y_i = 1}, that is, Y is the unit simplex.

Underlying theory

The underlying structure of θ leads to a set of extended optimality conditions and an elegant duality theory. This is based on an extended form of the Lagrangian:

L(x, y) = f₀(x) + Σ_{i=1}^m y_i f_i(x) − k(y),  x ∈ X, y ∈ Y.

Note that the Lagrangian L is smooth: all the nonsmoothness is captured in the θ function. The theory is an elegant combination of calculus arguments related to the f_i and their derivatives, and variational analysis for features related to θ. It is shown in Rockafellar (1993) that, under a standard constraint qualification, the first-order conditions of (10) are precisely in the form of the following variational inequality:

VI((∇ₓL(x, y), −∇_yL(x, y)), X × Y).   (13)

When X and Y are simple bound sets, this is simply a complementarity problem. Note that EMP exploits this result. In particular, if an extended nonlinear program of the form (10) is given to EMP, then the optimality conditions (13) are formed as a variational inequality problem and can be processed as outlined above. For a specific example, we cite the fact that if we use the (classical) choice of k and Y given in (12), then the optimality conditions of (10) are precisely the standard complementarity problem given as (5). While this is of interest, we believe that other choices of k and Y may be more useful and lead to models that have more practical significance.
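The quadratic-k claim above is easy to verify: with k(y) = y²/(4γ) and Y = [σ, ρ], the maximand y·u − y²/(4γ) is concave in y with unconstrained maximizer y = 2γu, so clamping that maximizer to [σ, ρ] evaluates θ exactly. The sketch below checks the clamped form against the three-piece formula from the text, for arbitrary illustrative values of γ, σ, ρ.

```python
def theta_sup(u, gamma, sigma, rho):
    # sup_{y in [sigma, rho]} (y*u - y*y/(4*gamma)): clamp the maximizer 2*gamma*u
    y = min(max(2.0 * gamma * u, sigma), rho)
    return y * u - y * y / (4.0 * gamma)

def theta_pw(u, gamma, sigma, rho):
    # three-piece closed form, with alpha = sigma/(2 gamma), beta = rho/(2 gamma)
    alpha, beta = sigma / (2.0 * gamma), rho / (2.0 * gamma)
    if u >= beta:
        return gamma * beta ** 2 + rho * (u - beta)
    if u <= alpha:
        return gamma * alpha ** 2 + sigma * (u - alpha)
    return gamma * u * u

# the two agree everywhere (gamma, sigma, rho chosen arbitrarily)
gamma, sigma, rho = 0.5, -1.0, 2.0
diffs = [abs(theta_sup(u, gamma, sigma, rho) - theta_pw(u, gamma, sigma, rho))
         for u in (-3.0, -1.0, -0.2, 0.0, 0.7, 2.0, 4.0)]
```

The sample points straddle both breakpoints α and β, so the comparison exercises all three pieces of the formula.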
Under appropriate convexity assumptions on this Lagrangian, it can be shown that a solution of the VI (13) is a saddle point for the Lagrangian on X × Y. Furthermore, in this setting, the saddle point generates solutions of the primal problem (10) and its dual problem

max_{y∈Y} g(y),  where g(y) = inf_{x∈X} L(x, y),

with no duality gap.

A simple example

As an example, consider the problem

min_{x₁,x₂,x₃} exp(x₁) + 5(log(x₁) − 1)² + 2 max(x₂² − 2, 0)
s.t.  x₁/x₂ = log(x₃),
      3x₁ + x₂ ≤ 5,  x₁ ≥ 0, x₂ ≥ 0.

In this problem, we would take X = {x ∈ R³ | 3x₁ + x₂ ≤ 5, x₁ ≥ 0, x₂ ≥ 0}. The function θ essentially treats three separable pieces:

f₁(x) = log(x₁) − 1,  f₂(x) = x₂² − 2,  f₃(x) = x₁/x₂ − log(x₃).

A classical problem would force f₁(x) = 0, f₂(x) ≤ 0 and f₃(x) = 0, while minimizing f₀(x) = exp(x₁). In our problem, we still force f₃(x) = 0, but apply a (soft) least squares penalty on f₁(x) and a smaller one-sided penalization on f₂(x). The above formulation is nonsmooth due to the max term in the objective function; in practice we could replace this by:

min_{x₁,x₂,x₃,w} exp(x₁) + 5(log(x₁) − 1)² + 2w
s.t.  x₁/x₂ = log(x₃),
      3x₁ + x₂ ≤ 5,  x₁ ≥ 0, x₂ ≥ 0,
      w ≥ x₂² − 2,  w ≥ 0
and recover a standard form NLP. If the penalty on f₁(x) were replaced by a one-norm penalization (instead of least squares), we would have to play a similar game, moving the function f₁(x) into the constraints and adding additional variable(s). To some extent, this seems unnatural: a modeler should be able to interchange penalizations without having to reformulate the problem from scratch. The proposed extended NLP would not be reformulated at all by the modeler, but would allow all these generalized constraints to be treated in a similar manner within the modeling system. The actual formulation would take

θ(u) = θ₁(u₁) + θ₂(u₂) + θ₃(u₃),

where

θ₁(u₁) = 5u₁²,  θ₂(u₂) = 2 max(u₂, 0),  θ₃(u₃) = ψ_{0}(u₃).

The discussion above allows us to see that

Y = R × [0, 2] × R,  k(y) = y₁²/20.

The corresponding Lagrangian is the smooth function

L(x, y) = f₀(x) + Σ_{i=1}^3 y_i f_i(x) − k(y).

The corresponding VI (13) can almost be formulated in GAMS, except that the linear constraint in X cannot currently be handled except by introducing a θ₄. Thus

f₄(x) = 3x₁ + x₂ − 5,  θ₄(u) = ψ_{R₋}(u),

resulting in the following choices for Y and k:

Y = R × [0, 2] × R × R₊,  k(y) = y₁²/20.

Since X and Y are now simple bound sets, (13) is now a complementarity problem and can be solved, for example, using PATH. A simple empinfo file details the choices of Y and k from the implemented library. The full model and option files are available from the authors.

Reformulation as a classical NLP

Suppose θ(u) = sup_{y∈Y} {uᵀy − ½yᵀQy} for a polyhedral set Y ⊆ Rᵐ and a symmetric positive semidefinite Q ∈ Rᵐˣᵐ (possibly Q = 0). Suppose further that

X = {x | Rx ≤ r},  Y = {y | Sᵀy ≤ s},  Q = DJ⁻¹Dᵀ,  F(x) = (f₁(x), ..., f_m(x)),

where J is symmetric and positive definite (for instance J = I). Then the optimal solutions x̄ of (10) are the x components of the optimal solutions (x̄, z̄, w̄) of

min f₀(x) + sᵀz + ½wᵀJw
s.t.  Rx ≤ r,  z ≥ 0,  F(x) − Sz − Dw = 0.
The multiplier on the equality constraint, in the usual sense, is the multiplier associated with x in the extended Lagrangian for (10). (Note that a Cholesky factorization may be needed to determine D.) It may be better to solve this reformulated NLP than to solve (13). However, it is important that we can convey all types of nonsmooth optimization problems to a solver as smooth optimization problems, and hence it is important to communicate the appropriate structure to the solver interface. We believe that specifying Y and k is a theoretically sound way to do this. Another example showing the formulation of an extended nonlinear program as a complementarity problem within GAMS can be found in Dirkse and Ferris (1995).

Conic Programming

A problem of significant recent interest involves conic constraints:

min_{x∈X} pᵀx  s.t.  Ax − b ≥ 0,  x ∈ C,

where C is a convex cone. Using the notation outlined above, this can be expressed as an EMP:

min_{x∈X} pᵀx + ψ_{Rᵐ₊}(Ax − b) + ψ_C(x).

For specific cones, such as the Lorentz (ice-cream) cone or the rotated quadratic cone, there are efficient implementations of interior point algorithms for their solution (Andersen et al., 2003). It is also possible to reformulate the problem in the form (1). Annotating
the variables that must lie in a particular cone using an empinfo file allows solvers like MOSEK (Andersen and Andersen, 2000) to receive the problem as a cone program, while standard NLP solvers would see a reformulation of the problem as a nonlinear program. Details of this approach can be found in the EMP manual.

Embedded Complementarity Systems

A different type of embedded optimization model that arises frequently in applications is:

min_x f(x, y)
s.t.  g(x, y) ≤ 0  (⊥ λ ≥ 0)
      H(x, y, λ) = 0  (⊥ y free)

Note the difference here: the optimization problem is over the variable x, and is parameterized by the variable y. The choice of y is fixed by the (auxiliary) complementarity relationships depicted here by H. Within GAMS, this is modeled as:

model emp /deff,defg,defh/;

Again, so this model can be processed correctly as an EMP, the modeler provides additional annotations to the model defining equations in an empinfo file, namely that the function H defined in defh is complementary to the variable y (and hence the variable y is a parameter to the optimization problem), and furthermore that the dual variable associated with the equation defg in the optimization problem is one and the same as the variable λ used to define H. Armed with this additional information, the EMP tool automatically creates the following MCP:

0 = ∇ₓL(x, y, λ),  x free
0 ≥ ∇_λL(x, y, λ) ⊥ λ ≥ 0
0 = H(x, y, λ),  y free,

where the Lagrangian is defined as L(x, y, λ) = f(x, y) + λᵀg(x, y). Perhaps the most popular use of this formulation is where competition is allowed between agents. A standard method to deal with such cases is via the concept of Nash games. In this setting, x* is a Nash equilibrium if

x*_i ∈ arg min_{x_i∈X_i} ℓ_i(x_i, x*_{−i}, q),  for all i ∈ I,

where x_{−i} denotes the other players' decisions and the quantities q are given exogenously, or via complementarity:

0 ≤ H(x, q) ⊥ q ≥ 0.

This mechanism is extremely popular in economics, and Nash famously won the Nobel Prize for his contributions to this literature.
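A minimal numeric illustration of such a Nash game is a two-firm Cournot model (all data invented): each firm i chooses output q_i ≥ 0 to maximize its profit given the other firm's output, and the simultaneous argmin conditions are exactly the format above. Here the best responses are available in closed form, so iterated best response converges and can be checked against the known equilibrium; for general games one would instead form the MCP and hand it to PATH.

```python
# inverse demand p(Q) = a - b*Q; firm i maximizes (p(q_i + q_j) - c_i) * q_i
a, b = 10.0, 1.0        # illustrative demand parameters
c = [2.0, 3.0]          # illustrative marginal costs

def best_response(q_other, c_i):
    # unconstrained maximizer of the concave quadratic profit, clipped at 0
    return max((a - c_i - b * q_other) / (2.0 * b), 0.0)

q = [0.0, 0.0]
for _ in range(100):    # simultaneous best-response sweeps (a contraction here,
    q = [best_response(q[1], c[0]),     # though not for every game)
         best_response(q[0], c[1])]

# closed-form Cournot equilibrium for comparison: q_i = (a - 2c_i + c_j) / (3b)
q_star = [(a - 2.0 * c[0] + c[1]) / (3.0 * b),
          (a - 2.0 * c[1] + c[0]) / (3.0 * b)]
```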
This format is again an EMP, more general than the example given above in two respects. Firstly, there is more than one optimization problem specified in the embedded complementarity system. Secondly, the parameters in each optimization problem are of two types: the variables q, which are tied down by the auxiliary complementarity condition and hence are treated as parameters by the ith Nash player, and the variables x_{−i}, which are treated as parameters by the ith Nash player but as variables by the other players j ≠ i. While we do not specify the syntax for these issues here, the manual provides examples that outline how to carry out this matching within GAMS. Finally, two points of note. First, it is clear that the resulting model is a complementarity problem and can be solved using PATH, for example. Secondly, performing the conversion from an embedded complementarity system or a Nash game automatically is a critical step in making such models practically useful. We note that there is a large literature on discrete-time finite-state stochastic games: these have become a central tool in the analysis of strategic interactions among forward-looking players in dynamic environments. The Ericson and Pakes (1995) model of dynamic competition in an oligopolistic industry is exactly in the format described above, and has been used extensively in applications such as advertising, collusion, mergers, technology adoption, international trade and finance.

Conclusions

A number of new modeling formats involving complementarity and variational inequalities have been described, and a new tool, EMP, that allows such problems to be specified in the GAMS modeling system has been outlined. We believe this will make a modeler's task easier by allowing model structure to be described succinctly. Furthermore, model generation can be done more reliably and automatically, and algorithms can exploit model structure to improve solution speed and robustness.
Acknowledgements

This work is supported in part by Air Force Office of Scientific Research Grant FA, and National Science Foundation Grants DMI, DMS and IIS. The authors wish to

thank Todd Munson and Tom Rutherford for insightful comments on the material presented here.

References

Andersen, E. D. and Andersen, K. D. (2000). The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In H. Frank et al., editors, High Performance Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Andersen, E. D., Roos, C., and Terlaky, T. (2003). On implementing a primal-dual interior-point method for conic quadratic optimization. Mathematical Programming, 95(2).
Bisschop, J. and Meeraus, A. (1982). On the development of a general algebraic modeling system in a strategic planning environment. Mathematical Programming Study, 20:1–29.
Brooke, A., Kendrick, D., and Meeraus, A. (1988). GAMS: A User's Guide. The Scientific Press, South San Francisco, California.
Cao, M. and Ferris, M. C. (1996). A pivotal method for affine variational inequalities. Mathematics of Operations Research, 21.
Dirkse, S. and Ferris, M. (2008). NLPEC, User's Manual.
Dirkse, S. P. and Ferris, M. C. (1995). MCPLIB: A collection of nonlinear mixed complementarity problems. Optimization Methods and Software, 5.
Duran, M. A. and Grossmann, I. E. (1986). An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Mathematical Programming, 36.
Ericson, R. and Pakes, A. (1995). Markov perfect industry dynamics: A framework for empirical analysis. Review of Economic Studies, 62.
Facchinei, F. and Pang, J. S. (2003). Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer-Verlag, New York, New York.
Ferris, M. C., Dirkse, S. P., and Meeraus, A. (2005). Mathematical programs with equilibrium constraints: Automatic reformulation and solution via constrained optimization. In Kehoe, T. J., Srinivasan, T. N., and Whalley, J., editors, Frontiers in Applied General Equilibrium Modeling. Cambridge University Press.
Ferris, M. C., Fourer, R., and Gay, D. M. (1999). Expressing complementarity problems and communicating them to solvers. SIAM Journal on Optimization, 9.
Ferris, M. C. and Munson, T. S. (2000). Complementarity problems in GAMS and the PATH solver. Journal of Economic Dynamics and Control, 24.
Ferris, M. C. and Pang, J. S. (1997). Engineering and economic applications of complementarity problems. SIAM Review, 39.
Fourer, R., Gay, D. M., and Kernighan, B. W. (1990). A modeling language for mathematical programming. Management Science, 36.
Fourer, R., Gay, D. M., and Kernighan, B. W. (1993). AMPL: A Modeling Language for Mathematical Programming. Duxbury Press, Pacific Grove, California.
Huber, P. J. (1981). Robust Statistics. John Wiley & Sons, New York.
Jagla, J., Meeraus, A., Dirkse, S., and Ferris, M. (2008). EMP, User's Manual. In preparation.
Ralph, D. and Wright, S. J. (2004). Some properties of regularization and penalization schemes for MPECs. Optimization Methods and Software, 19(5).
Rockafellar, R. T. (1993). Lagrange multipliers and optimality. SIAM Review, 35.
Rockafellar, R. T. and Wets, R. J. B. (1998). Variational Analysis. Springer-Verlag.
Rutherford, T. F. (1999). Applied general equilibrium modeling with MPSGE as a GAMS subsystem: An overview of the modeling framework and syntax. Computational Economics, 14:1–46.
Tawarmalani, M. and Sahinidis, N. V. (2004). Global Optimization of Mixed Integer Nonlinear Programs: A Theoretical and Computational Study. Mathematical Programming, 99(3).
Wächter, A. and Biegler, L. T. (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1).
Wallace, J., Philpott, A., O'Sullivan, M., and Ferris, M. (2006). Optimal rig design using mathematical programming. In 2nd High Performance Yacht Design Conference, Auckland, February 2006.
Westerlund, T., Skrifvars, H., Harjunkoski, I., and Pörn, R. (1998). An extended cutting plane method for solving a class of non-convex MINLP problems. Computers and Chemical Engineering, 22(3).


More information

Economic Foundations of Symmetric Programming

Economic Foundations of Symmetric Programming Economic Foundations of Symmetric Programming QUIRINO PARIS University of California, Davis B 374309 CAMBRIDGE UNIVERSITY PRESS Foreword by Michael R. Caputo Preface page xv xvii 1 Introduction 1 Duality,

More information

Convex relaxations of chance constrained optimization problems

Convex relaxations of chance constrained optimization problems Convex relaxations of chance constrained optimization problems Shabbir Ahmed School of Industrial & Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta, GA 30332. May 12, 2011

More information

Using Economic Contexts to Advance in Mathematics

Using Economic Contexts to Advance in Mathematics Using Economic Contexts to Advance in Mathematics Erik Balder University of Utrecht, Netherlands DEE 2013 presentation, Exeter Erik Balder (Mathematical Institute, University of Utrecht)using economic

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Solving a Class of Generalized Nash Equilibrium Problems

Solving a Class of Generalized Nash Equilibrium Problems Journal of Mathematical Research with Applications May, 2013, Vol. 33, No. 3, pp. 372 378 DOI:10.3770/j.issn:2095-2651.2013.03.013 Http://jmre.dlut.edu.cn Solving a Class of Generalized Nash Equilibrium

More information

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems On Penalty and Gap Function Methods for Bilevel Equilibrium Problems Bui Van Dinh 1 and Le Dung Muu 2 1 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam 2 Institute of

More information

Lagrangian Duality Theory

Lagrangian Duality Theory Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual

More information

Computing risk averse equilibrium in incomplete market. Henri Gerard Andy Philpott, Vincent Leclère

Computing risk averse equilibrium in incomplete market. Henri Gerard Andy Philpott, Vincent Leclère Computing risk averse equilibrium in incomplete market Henri Gerard Andy Philpott, Vincent Leclère YEQT XI: Winterschool on Energy Systems Netherlands, December, 2017 CERMICS - EPOC 1/43 Uncertainty on

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

A Test Set of Semi-Infinite Programs

A Test Set of Semi-Infinite Programs A Test Set of Semi-Infinite Programs Alexander Mitsos, revised by Hatim Djelassi Process Systems Engineering (AVT.SVT), RWTH Aachen University, Aachen, Germany February 26, 2016 (first published August

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control RTyrrell Rockafellar and Peter R Wolenski Abstract This paper describes some recent results in Hamilton- Jacobi theory

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information