Solution of a General Linear Complementarity Problem using smooth optimization and its application to bilinear programming and LCP

L. Fernandes    A. Friedlander    M. Guedes    J. Júdice

Abstract

This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.

Keywords: Global optimization, linear complementarity problems, bilinear programming, box constrained optimization.

1 Introduction

The General Linear Complementarity Problem (GLCP) consists of finding vectors z ∈ ℝ^n and y ∈ ℝ^l such that

    q + Mz + Ny ≥ 0                                                  (1)
    p + Rz + Sy ≥ 0                                                  (2)
    z ≥ 0, y ≥ 0, z^T(q + Mz + Ny) = 0                               (3)

where M, N, R and S are given matrices of orders n × n, n × l, m × n and m × l respectively, and q ∈ ℝ^n, p ∈ ℝ^m are given vectors. The GLCP has been studied by many authors as an ingredient for solving some optimization problems [16, 17, 18, 21, 22, 23, 24, 30, 32, 37]. This problem is a generalization of the well-known Linear Complementarity Problem (LCP)

    w = q + Mz, z ≥ 0, w ≥ 0, z^T w = 0                              (4)

as it reduces to this latter problem when the variables y_i and the constraints (2) do not exist. It is known that the LCP can be solved in polynomial time when M is a positive semi-definite (PSD) matrix, that is, if M satisfies x^T Mx ≥ 0 for all x ∈ ℝ^n [26, 35]. On the other hand, it was shown in [25] that the GLCP is NP-hard even when M is a PSD matrix. However, the GLCP can be solved in polynomial time if M is a PSD matrix and R = 0 in its constraint (2) [25, 43]. In this paper we denote this latter type of problem by PGLCP. The PGLCP is an important tool for finding global minima of bilinear programming problems (BLP), as any BLP can be cast as the problem of minimizing a linear function over a PGLCP [21, 24].

Support of this work has been provided by project PRAXIS XXI 2/2.1/MAT/346/94, Portugal.
Escola Superior de Tecnologia de Tomar, 2300 Tomar, Portugal.
Departamento de Matemática, Universidade de Campinas, Campinas, Brasil.
Faculdade de Ciências, Universidade do Porto, 4000 Porto, Portugal.
Departamento de Matemática, Universidade de Coimbra, 3000 Coimbra, Portugal.
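Conditions (1)–(3) are straightforward to check numerically for a candidate pair (z, y). The sketch below is illustrative code (not part of the original paper; the instance data are hypothetical) that evaluates the feasibility and complementarity residuals of a GLCP.

```python
import numpy as np

def glcp_residuals(M, N, q, R, S, p, z, y):
    """Return the infeasibility and complementarity residuals of the GLCP
    defined by (1)-(3): q + Mz + Ny >= 0, p + Rz + Sy >= 0, z, y >= 0,
    z^T (q + Mz + Ny) = 0."""
    w = q + M @ z + N @ y          # slack of constraint (1)
    v = p + R @ z + S @ y          # slack of constraint (2)
    infeas = max(
        np.max(np.maximum(-w, 0.0)),   # violation of (1)
        np.max(np.maximum(-v, 0.0)),   # violation of (2)
        np.max(np.maximum(-z, 0.0)),   # violation of z >= 0
        np.max(np.maximum(-y, 0.0)),   # violation of y >= 0
    )
    comp = abs(z @ w)                  # complementarity gap z^T w
    return infeas, comp

# Tiny hypothetical instance, just to exercise the function.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # PSD matrix
N = np.zeros((2, 1)); q = np.array([-1.0, -1.0])
R = np.zeros((1, 2)); S = np.ones((1, 1)); p = np.array([-1.0])
z = np.array([1.0 / 3, 1.0 / 3]); y = np.array([1.0])
print(glcp_residuals(M, N, q, R, S, p, z, y))   # both residuals are ~0
```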

Recently, many authors have investigated the solution of linear and nonlinear complementarity problems by finding stationary points of differentiable and nondifferentiable functions under linear constraints [8, 9, 11, 12, 15, 31, 34, 41]. Among many results of this type, the following quadratic program has been associated with the LCP:

    Minimize  z^T(q + Mz)
    subject to  q + Mz ≥ 0                                           (5)
                z ≥ 0

It has been shown [8] that a solution of the LCP can be found by computing a stationary point of this quadratic program provided M is a Row Sufficient (RS) matrix, that is, if the following implication holds:

    x_i (M^T x)_i ≤ 0 for all i = 1, ..., n  ⟹  x_i (M^T x)_i = 0 for all i = 1, ..., n      (6)

This result has been extended to the GLCP under the same hypothesis on the matrix M and R = 0 [25]. It is important to add that any PSD matrix is also RS, whence this result has applications to the solution of PGLCPs and bilinear programs.

Due to the large variety of efficient algorithms for nonlinear programs with simple bounds, there has been a great effort on finding merit functions whose stationary points on these simple sets lead to solutions of the LCP [9, 11, 15, 34]. In this paper we extend these results and introduce the following merit function for the GLCP with R = 0:

    φ(z, y, w, v) = ‖w − q − Mz − Ny‖² + ‖v − p − Sy‖² + (Σ_{i=1}^n (z_i w_i)^g)^h            (7)

where ‖·‖ denotes the euclidean norm and g ≥ 1, h ≥ 1 are real numbers such that g > 1 if h = 1. We show that any stationary point of this function on the set defined by zero lower bounds on the variables is a solution of the GLCP provided M is a RS matrix.

As stated before, any bilinear program can be reduced to the minimization of a linear function over a PGLCP [21, 24]. We denote this problem by MINPGLCP. Hence the result mentioned before seems to have important applications to the solution of bilinear programs. On the other hand, it has been shown recently that a LCP can be transformed into a bilinear program [2, 33]. By using this reduction, we prove that any LCP is equivalent to a PGLCP with a further condition on one of its variables. As we explain later in this paper, we believe that these results may have important implications for the solution of bilinear programs and NP-hard LCPs. It seems possible that the combination of a local search method for finding a stationary point of the merit function (7) on the set defined by zero lower bounds on the variables, together with some heuristic procedure to move from one stationary point to another, will be able to compute global minima of the bilinear program and solutions of the LCP in a reasonable amount of work.

The structure of the paper is as follows. In section 2 we establish our main result that associates the GLCP with stationary points of the function (7) on a set defined by zero lower bounds on the variables. The use of this result in bilinear programs and LCPs is discussed in sections 3 and 4. Some numerical experience with PGLCPs associated with bilinear programs and LCPs is presented in section 5. Finally, some conclusions are drawn in the last section of the paper.

2 PGLCP and stationary points of the merit function

Consider again the GLCP defined by (1), (2) and (3) with R = 0. By introducing the slack variables w_i and v_i for the linear constraints (1) and (2), we can write the GLCP in the form

    w = q + Mz + Ny                                                  (8)
    v = p + Sy                                                       (9)
    z ≥ 0, y ≥ 0, w ≥ 0, v ≥ 0                                       (10)
    z^T w = 0                                                        (11)
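For reference, the merit function (7) introduced above is straightforward to evaluate. The following sketch is illustrative code (not part of the paper) that computes φ for given (z, y, w, v) and parameters g, h.

```python
import numpy as np

def merit_phi(z, y, w, v, q, M, N, p, S, g=2.0, h=1.0):
    """Merit function (7):
    phi = ||w - q - M z - N y||^2 + ||v - p - S y||^2 + (sum_i (z_i w_i)^g)^h,
    with g, h >= 1 and g > 1 whenever h = 1.  phi vanishes exactly at the
    solutions of the GLCP (8)-(11)."""
    r1 = w - q - M @ z - N @ y            # residual of constraint (8)
    r2 = v - p - S @ y                    # residual of constraint (9)
    comp = np.sum((z * w) ** g) ** h      # penalised complementarity term
    return r1 @ r1 + r2 @ r2 + comp
```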

Let K be the feasible set defined by the linear constraints (8), (9) and (10). As stated before, consider the following nonlinear program (NLP):

    Minimize  f(z, y, w, v) = ‖w − q − Mz − Ny‖² + ‖v − p − Sy‖² + (Σ_{i=1}^n (z_i w_i)^g)^h
    subject to  z, w, y, v ≥ 0                                       (12)

where ‖·‖ denotes the euclidean norm and g, h ≥ 1 are real numbers such that g > 1 if h = 1. Then we can establish the following property.

Theorem 1 If K ≠ ∅ and M is a RS matrix, then any stationary point of the NLP is a solution of the GLCP (8)–(11).

Proof: If (z̄, w̄, ȳ, v̄) is a stationary point of the NLP (12), then there exist Lagrange multipliers ᾱ ∈ ℝ^n, β̄ ∈ ℝ^n, γ̄ ∈ ℝ^l and µ̄ ∈ ℝ^m such that (z̄, w̄, ȳ, v̄, ᾱ, β̄, γ̄, µ̄) satisfies the following conditions:

    α_k = 2[w − q − Mz − Ny]_k + gh[Σ_{i=1}^n (z_i w_i)^g]^{h−1} z_k^g w_k^{g−1},  k = 1, ..., n      (13)
    β_k = −2[M^T(w − q − Mz − Ny)]_k + gh[Σ_{i=1}^n (z_i w_i)^g]^{h−1} z_k^{g−1} w_k^g,  k = 1, ..., n      (14)
    γ = −2N^T(w − q − Mz − Ny) − 2S^T(v − p − Sy)                    (15)
    µ = 2(v − p − Sy)                                                (16)
    z, w, y, v, α, β, γ, µ ≥ 0                                       (17)
    α_k w_k = z_k β_k = 0,  k = 1, ..., n                            (18)
    y^T γ = µ^T v = 0                                                (19)

Let

    θ = w − q − Mz − Ny ∈ ℝ^n
    η = gh[Σ_{i=1}^n (z_i w_i)^g]^{h−1} ∈ ℝ

Then by (13) and (14)

    θ_k = (1/2)[α_k − η z_k^g w_k^{g−1}]
    −(M^T θ)_k = (1/2)[β_k − η z_k^{g−1} w_k^g]

for k = 1, ..., n. Hence for each k = 1, ..., n we have

    θ_k (M^T θ)_k = −(1/4)[α_k β_k + η²(z_k w_k)^{2g−1} − η z_k^{g−1} w_k^{g−1}(w_k α_k + z_k β_k)]      (20)

Now, by (18), w_k α_k + z_k β_k = 0 for each k = 1, ..., n. Furthermore, all the variables are nonnegative by (17), and this implies θ_k (M^T θ)_k ≤ 0 for all k = 1, ..., n. Since M is a RS matrix, then θ_k (M^T θ)_k = 0 for all k = 1, ..., n. Then, by (20), we have

    α_k β_k + η²(z_k w_k)^{2g−1} = 0

and, since both terms are nonnegative (note that η = 0 is only possible when h > 1 and all z_i w_i = 0),

    α_k β_k = z_k w_k = 0

for all k = 1, ..., n.

Since g, h ≥ 1 and g > 1 if h = 1, z_k w_k = 0 for all k = 1, ..., n implies

    [Σ_{i=1}^n (z_i w_i)^g]^{h−1} z_k^{g−1} w_k^{g−1} = 0  for all k = 1, ..., n.

Hence (13) and (14) take the form

    α_k = 2[w − q − Mz − Ny]_k,   β_k = −2[M^T(w − q − Mz − Ny)]_k

Therefore the conditions (13)–(19) for a stationary point of the NLP can be rewritten as follows:

    α = 2(w − q − Mz − Ny)
    β = −2M^T[w − q − Mz − Ny]
    γ = −2N^T[w − q − Mz − Ny] − 2S^T[v − p − Sy]
    µ = 2(v − p − Sy)                                                (21)
    z, w, y, v, α, β, γ, µ ≥ 0
    α^T w = z^T β = y^T γ = µ^T v = 0

But these are the necessary and sufficient optimality conditions for the convex quadratic program

    Minimize  ‖w − q − Mz − Ny‖² + ‖v − p − Sy‖²
    subject to  w, y, z, v ≥ 0                                       (22)

Since the constraint set of the GLCP is nonempty, this quadratic program has an optimal solution with value zero. Due to the equivalence between the conditions (21) and the optimality conditions of the quadratic program (22), the stationary point (z̄, ȳ, w̄, v̄) of the NLP satisfies

    w − q − Mz − Ny = 0
    v − p − Sy = 0
    w, z, y, v ≥ 0

But we have shown before that (z̄, ȳ, w̄, v̄) also satisfies z^T w = 0. So (z̄, ȳ, w̄, v̄) is a solution of the GLCP and this proves the theorem.

It is important to add that the merit function of (12) can be seen as an extension of two merit functions that have been discussed before for the LCP. In fact, by fixing g = 2 and h = 1 we get the so-called Natural Merit Function

    φ_1(z, w, y, v) = ‖w − q − Mz − Ny‖² + ‖v − p − Sy‖² + Σ_{i=1}^n z_i² w_i²                (23)

This function is an extension for the GLCP of the function

    φ_1(z, w) = ‖w − q − Mz‖² + Σ_{i=1}^n z_i² w_i²

that has been used by many authors for the solution of the LCP [34, 40]. Theorem 1 implies that any stationary point of the program

    Minimize  φ_1(z, w)
    subject to  z ≥ 0, w ≥ 0

is a solution of the LCP provided the LCP is feasible and M is a RS matrix. This property extends to the RS matrices the results established in [34]. On the other hand, if g = 1, we get the function

    φ_2(z, w, y, v) = ‖w − q − Mz − Ny‖² + ‖v − p − Sy‖² + (z^T w)^h                         (24)

that has been introduced in [15] for the LCP. Therefore Theorem 1 extends to the GLCP the result presented in [15]. Since any PSD matrix is also a RS matrix, the result mentioned in this section is also valid for the PGLCP. In section 5 we report some computational experience on the solution of PGLCPs with PSD matrices M by finding stationary points of the merit functions φ_1 and φ_2.
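Stationary points of these merit functions over the nonnegative orthant can be computed with any bound-constrained solver. Purely as an illustration of this mechanism with generally available tools (this is an assumption, not the setup used in the computational section of the paper), the sketch below minimizes the natural merit function φ_1 of (23), specialized to the LCP, with SciPy's bound-constrained L-BFGS-B solver.

```python
import numpy as np
from scipy.optimize import minimize

def solve_lcp_by_merit(M, q, x0=None):
    """Minimize phi_1(z, w) = ||w - q - M z||^2 + sum_i z_i^2 w_i^2
    over z >= 0, w >= 0 (merit function (23) specialized to the LCP).
    By Theorem 1, any stationary point is a solution of the LCP when M
    is row sufficient and the LCP is feasible."""
    n = q.size

    def phi(x):
        z, w = x[:n], x[n:]
        r = w - q - M @ z
        return r @ r + np.sum(z**2 * w**2)

    def grad(x):
        z, w = x[:n], x[n:]
        r = w - q - M @ z
        gz = -2.0 * (M.T @ r) + 2.0 * z * w**2   # d phi / d z
        gw = 2.0 * r + 2.0 * w * z**2            # d phi / d w
        return np.concatenate([gz, gw])

    x0 = np.ones(2 * n) if x0 is None else x0
    res = minimize(phi, x0, jac=grad, method="L-BFGS-B",
                   bounds=[(0.0, None)] * (2 * n))
    z, w = res.x[:n], res.x[n:]
    return z, w, res.fun

# Small PSD (hence row sufficient) example; phi_1 is driven to ~0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z, w, val = solve_lcp_by_merit(M, q)
print(z, w, val)
```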

3 Application to Bilinear Programming

The Bilinear Programming Problem (BLP) is usually stated in the following form:

    Minimize  c^T x + d^T y + x^T Hy
    subject to  Ax ≥ a                                               (25)
                By ≥ b
                x ≥ 0, y ≥ 0

where x ∈ ℝ^{n_1}, y ∈ ℝ^{n_2} and H ∈ ℝ^{n_1 × n_2} is in general a rectangular matrix. It follows from the definition of the BLP that the variables x and y belong to two disjoint constraint sets

    K_x = {x ∈ ℝ^{n_1} : Ax ≥ a, x ≥ 0}
    K_y = {y ∈ ℝ^{n_2} : By ≥ b, y ≥ 0}                              (26)

where a ∈ ℝ^{m_1}, b ∈ ℝ^{m_2}, A ∈ ℝ^{m_1 × n_1} and B ∈ ℝ^{m_2 × n_2}. For this reason this bilinear program is usually called Disjoint, and it differs from the so-called Jointly Constrained Bilinear Program, in which the x_i and y_i variables appear together in at least one constraint [3].

The BLP has been studied by many authors in the past several years [3, 18, 19, 21, 24, 27, 29, 38]. A number of important applications of this problem have appeared in the literature [27] and many algorithms have been designed for finding a stationary point or a global minimum of the BLP [18, 19, 21, 24, 29, 38]. Despite its simplicity, even the problem of finding a stationary point of a BLP is considered to be NP-hard [36].

In this section we discuss the solution of the BLP by exploiting its reduction to a nonconvex problem consisting of the minimization of a linear function over a PGLCP, which is denoted by MINPGLCP. In order to get this nonconvex problem, we first rewrite the BLP in the following equivalent form:

    min_y { d^T y + min_x { (c + Hy)^T x : x ∈ K_x } : y ∈ K_y }

By applying the duality theory of linear programming to the inner linear program in the variables x, it is not difficult to show [24] that if the BLP has a global minimum then such a point is also a global minimum of the following nonconvex program:

    Minimize  d^T y + a^T u
    subject to  c − A^T u + Hy ≥ 0
                −a + Ax ≥ 0
                u^T(Ax − a) = 0
                x^T(c − A^T u + Hy) = 0
                −b + By ≥ 0
                x ≥ 0, y ≥ 0, u ≥ 0

By introducing slack variables for the inequality constraints, we can rewrite this MINPGLCP in the following form (block matrices are written with rows separated by semicolons):

    Minimize  [0; a]^T [x; u] + d^T y

    subject to  [w_1; w_2] = [c; −a] + [0, −A^T; A, 0] [x; u] + [H; 0] y
                v = −b + By                                          (27)
                [w_1; w_2] ≥ 0, [x; u] ≥ 0, y ≥ 0, v ≥ 0
                [w_1; w_2]^T [x; u] = 0
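The block data of (27) can be assembled mechanically from the BLP data (c, d, H, A, a, B, b). The sketch below is illustrative code (function and variable names are not from the paper) and follows the sign conventions of (25)–(27) as written above; the example instance is hypothetical.

```python
import numpy as np

def blp_to_minpglcp(c, d, H, A, a, B, b):
    """Build the MINPGLCP data of (27) from the BLP (25).

    Complementary pair: zz = (x, u),  ww = (w1, w2) = qq + MM zz + NN y;
    extra constraints:  v = pp + SS y >= 0;
    linear objective:   cc^T zz + d^T y  with cc = (0, a).
    MM = [[0, -A^T], [A, 0]] is skew-symmetric, hence PSD."""
    m1, n1 = A.shape
    MM = np.block([[np.zeros((n1, n1)), -A.T],
                   [A, np.zeros((m1, m1))]])
    NN = np.vstack([H, np.zeros((m1, H.shape[1]))])
    qq = np.concatenate([c, -a])
    SS = B
    pp = -b
    cc = np.concatenate([np.zeros(n1), a])
    return cc, d, MM, NN, qq, SS, pp

# Hypothetical small BLP instance, just to exercise the shapes.
c = np.array([1.0, -1.0]); d = np.array([0.5])
H = np.array([[1.0], [2.0]])
A = np.array([[1.0, 1.0]]); a = np.array([1.0])
B = np.array([[1.0]]); b = np.array([1.0])
print(blp_to_minpglcp(c, d, H, A, a, B, b))
```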

Hence the BLP reduces to the minimization of a linear function over a constraint set consisting of a PGLCP with the PSD matrix

    M = [0, −A^T; A, 0]

for the complementary variables w = [w_1; w_2] and z = [x; u].

The equivalence between a BLP and the MINPGLCP (27) has suggested a sequential procedure for finding a global minimum of the BLP by exploiting a finite number of solutions of the PGLCP in such a way that the value of the linear function always decreases. An algorithm based on this idea has been designed in [22] and has been applied to BLPs and some other nonconvex problems with some success [21, 22, 23]. A drawback of this approach is that, apart from the first, all the GLCPs are NP-hard and there exists no direct or iterative algorithm to process them efficiently. In fact, the authors have used an enumerative algorithm [1, 20] to compute solutions of all the GLCPs required by the sequential procedure [21, 22, 23]. A nonenumerative algorithm for finding the global minimum of the MINPGLCP (27) by exploiting solutions of the PGLCP has still to be designed. By Theorem 1 such solutions can be found by computing stationary points of the nonlinear program discussed in the previous section. We recall that there exists quite efficient software for finding such stationary points, since the constraints are simple lower bounds on the variables [4, 5, 6, 7, 10, 14, 34].

As stated before, there are some other important optimization problems that are related to the BLP. The Concave Quadratic Program (CQP) is certainly one of the most relevant of these problems. We recall that a CQP is defined as follows:

    Minimize  2c^T x − x^T Hx
    subject to  Ax ≥ b                                               (28)
                x ≥ 0

where c ∈ ℝ^n, b ∈ ℝ^m, A ∈ ℝ^{m × n} and H is a PSD matrix of order n × n. As is discussed in [28], by duplicating the number of variables it is possible to reduce the CQP to the following BLP:

    Minimize  c^T x + c^T y − x^T Hy
    subject to  Ax ≥ b, Ay ≥ b                                       (29)
                x ≥ 0, y ≥ 0

The Concave Quadratic Programming Problem has found many applications in different areas. Among them, the problem of finding a feasible solution of a Zero-One Integer Program should be mentioned. This problem is defined as follows: find x and y such that

    Ax + By ≥ b                                                      (30)
    y ≥ 0, x_i ∈ {0, 1}, i = 1, ..., n

where b ∈ ℝ^m, A ∈ ℝ^{m × n} and B ∈ ℝ^{m × l} are given vector and matrices. This problem is obviously equivalent to the following CQP:

    Minimize  x^T(e − x)
    subject to  Ax + By ≥ b                                          (31)
                0 ≤ x ≤ e, y ≥ 0

where e is a vector of ones of order n. The so-called knapsack problem is an important example of (30) and is defined by

    a^T x = b_0                                                      (32)
    x_i ∈ {0, 1}, i = 1, ..., n

where a ∈ ℝ^n and b_0 ∈ ℝ are given. This problem can be cast in the form

    Minimize  e^T x − x^T x
    subject to  0 ≤ x ≤ e                                            (33)
                a^T x ≥ b_0
                −a^T x ≥ −b_0
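As a quick sanity check on the reformulations (31)–(33), note that the objective e^T x − x^T x = Σ_i x_i(1 − x_i) is nonnegative on the box 0 ≤ x ≤ e and vanishes exactly at binary points, so (33) has optimal value zero precisely when the knapsack instance (32) is feasible. A small illustrative check (not code from the paper, instance data hypothetical):

```python
import numpy as np

def knapsack_cqp_objective(x):
    """Objective of (33): e^T x - x^T x; zero exactly when x is binary."""
    return np.sum(x) - x @ x

def is_knapsack_cqp_solution(x, a, b0, tol=1e-8):
    """Check that x is feasible for (33) with objective value ~0, i.e. a
    binary feasible point of the knapsack instance (32)."""
    in_box = np.all(x >= -tol) and np.all(x <= 1.0 + tol)
    on_hyperplane = abs(a @ x - b0) <= tol
    return bool(in_box and on_hyperplane and knapsack_cqp_objective(x) <= tol)

# Small hypothetical instance: a = (3, 5, 7), b0 = 10 = 3 + 7.
a = np.array([3.0, 5.0, 7.0]); b0 = 10.0
print(is_knapsack_cqp_solution(np.array([1.0, 0.0, 1.0]), a, b0))  # True
print(is_knapsack_cqp_solution(np.array([0.5, 0.5, 1.0]), a, b0))  # False
```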

As stated before, this CQP is equivalent to the following BLP:

    Minimize  e^T x + e^T y − 2x^T y
    subject to  0 ≤ x ≤ e
                0 ≤ y ≤ e
                a^T x ≥ b_0,  −a^T x ≥ −b_0                          (34)
                a^T y ≥ b_0,  −a^T y ≥ −b_0

Then this BLP can be reduced to a MINPGLCP of the form

    Minimize  c^T z + d^T y
    subject to  w = q + Mz + Ny
                v = p + Sy                                           (35)
                v, w, z, y ≥ 0
                z^T w = 0

where M is a PSD matrix.

We have then shown in this section that any bilinear program can be reduced to a MINPGLCP of the form (35), where M is a PSD matrix. Concave Quadratic Programs and the problem of finding a feasible zero-one integer solution also reduce to MINPGLCPs by exploiting their equivalences to a BLP. So all these problems can be solved by finding a finite number of solutions of the PGLCP, which can be done by computing stationary points of the nonlinear program (12). In section 5 we report some computational experience on solving CQPs and knapsack problems by using the merit functions mentioned before.

4 Application to the LCP

As stated before, the Linear Complementarity Problem (LCP) consists of finding vectors z ∈ ℝ^n and w ∈ ℝ^n such that

    w = q + Mz, z ≥ 0, w ≥ 0, z^T w = 0                              (36)

where q ∈ ℝ^n and M ∈ ℝ^{n × n} are given vector and matrix. It is known that the complexity of the solution of this problem is related to the class of the matrix M [8, 35]. If M is a RS matrix, there is a number of direct and iterative algorithms that process the LCP efficiently [8]. Furthermore, the LCP can be solved in polynomial time if M is a PSD matrix [26, 35, 42]. In general the LCP is an NP-hard problem [18, 35] and only an enumerative algorithm is able to find a solution or to show that none exists [1, 20, 39].

The complexity of the LCP has motivated the search for techniques that exploit its reduction to a global optimization problem. The easiest of these forms is to move the nonlinear constraint z^T w = 0 into the objective function and get the following Joint Bilinear Program (JBLP):

    Minimize  z^T w
    subject to  w = q + Mz                                           (37)
                z ≥ 0, w ≥ 0

It is then obvious that a LCP has a solution if and only if this JBLP has a global minimum with zero value. An enumerative algorithm has been proposed in [1, 20] for solving the LCP by finding a feasible solution of this JBLP with zero objective value. As is reported in [20], the incorporation of some efficient heuristics and a quadratic solver for finding stationary points of special quadratic programs has made this enumerative algorithm an interesting technique for processing difficult LCPs.

Recently, a Disjoint Bilinear Programming formulation of the LCP has received some interest [2, 33]. This formulation is obtained in two stages. First, the LCP is rewritten as the following Augmented LCP:

    ALCP:  [α; β] = [e; q] + [0, −I; M, 0] [z; x]
           [z; x] ≥ 0,  [α; β] ≥ 0,  [z; x]^T [α; β] = 0             (38)

where I is the identity matrix of order n and e is a vector of ones of order n. Now, moving the nonlinear constraint z^T α + x^T β = 0 into the objective function allows the ALCP to be written as the following Disjoint BLP:

    Minimize  e^T z + q^T x + x^T(M − I)z
    subject to  Mz + q ≥ 0
                0 ≤ x ≤ e
                z ≥ 0

Furthermore, the LCP has a solution if and only if the optimal value of this BLP is zero. As discussed in the previous section, we can transform this BLP into the following MINPGLCP:

    Minimize  e^T z − e^T u
    subject to  [w; β] = [q; e] + [0, I; −I, 0] [x; u] + [M − I; 0] z
                α = q + Mz
                w, β, α ≥ 0,  x, u, z ≥ 0
                [x; u]^T [w; β] = 0

where the matrix corresponding to the complementary variables is PSD. So we can find a solution of the LCP by computing a solution of this PGLCP with e^T z − e^T u = 0. We can obviously add this constraint to the PGLCP, but this destroys the structure of the PGLCP and the problem becomes NP-hard. A simple alternative is to introduce the constraint γ_0 = e^T u − e^T z and a column with a parameter λ_0 ≥ 0 that is complementary to γ_0. This leads to the following GLCP:

    [w; β; γ_0] = [q; e; 0] + [0, I, 0; −I, 0, −e; 0, e^T, 0] [x; u; λ_0] + [M − I; 0; −e^T] z
    α = q + Mz                                                       (39)
    α, z, w, β, γ_0, x, u, λ_0 ≥ 0
    x^T w = u^T β = λ_0 γ_0 = 0

It is easy to see that the matrix corresponding to the complementary variables,

    [0, I, 0; −I, 0, −e; 0, e^T, 0]                                  (40)

is PSD. Hence the problem reduces to a PGLCP of the form discussed in section 2. It is now important to know when a solution of this PGLCP leads to a solution of the LCP. To do this, let

    K = {z ∈ ℝ^n : q + Mz ≥ 0, z ≥ 0}

be the feasible set of the LCP. Then the following result holds:

Theorem 2 If K ≠ ∅ then the PGLCP (39) has a solution (x̄, z̄, ū, λ̄_0, w̄, β̄, γ̄_0, ᾱ) such that λ̄_0 ≤ 1. Furthermore, (z̄, w̄) is a solution of the LCP provided λ̄_0 < 1.

Proof: Since K ≠ ∅, there exists at least a z̄ ≥ 0 such that q + Mz̄ ≥ 0. Let

    w̄ = ᾱ = q + Mz̄,  ū = z̄,  γ̄_0 = e^T ū − e^T z̄ = 0,  λ̄_0 = 0,  β̄ = e,  x̄ = 0

Hence (x̄, z̄, ū, λ̄_0, w̄, β̄, γ̄_0, ᾱ) belongs to the set K̄ of the linear constraints of the PGLCP (39). Since the matrix (40) is PSD, any stationary point of

    Minimize  w^T x + β^T u + λ_0 γ_0
    subject to  (x, z, u, λ_0, w, β, γ_0, α) ∈ K̄

is a solution of the PGLCP [25]. As the objective function of this program is bounded from below on K̄ and K̄ ≠ ∅, such a stationary point exists [4, 35]. Hence the PGLCP has at least one solution.

Now let (x̄, z̄, ū, λ̄_0, w̄, β̄, γ̄_0, ᾱ) be a solution of the PGLCP (39). It follows from the definition of this problem that

    0 ≤ β̄_i + x̄_i = 1 − λ̄_0

Hence λ̄_0 ≤ 1 and this proves the first part of the theorem.

To prove the second part, consider a solution (x̄, z̄, ū, λ̄_0, w̄, β̄, γ̄_0, ᾱ) of the PGLCP (39) with λ̄_0 < 1. Then there are two possible cases:

i) x̄_i > 0. Hence w̄_i = 0 and (q + Mz̄)_i + (ū_i − z̄_i) = 0. Then ᾱ_i + ū_i − z̄_i = 0, which implies ū_i ≤ z̄_i.

ii) x̄_i = 0. Then β̄_i = 1 − λ̄_0 − x̄_i = 1 − λ̄_0 > 0. Hence ū_i = 0 (since ū^T β̄ = 0) and w̄_i = (q + Mz̄)_i − z̄_i. If z̄_i > 0 then e^T ū < e^T z̄ and γ̄_0 = e^T ū − e^T z̄ < 0, which is impossible. So z̄_i = 0.

We have then shown that

    ū_i ≤ z̄_i for all i such that x̄_i > 0
    ū_i = z̄_i = 0 for all i such that x̄_i = 0

Since e^T ū ≥ e^T z̄, then ū = z̄ and

    w̄ = q + ū + (M − I)z̄ = q + Mz̄ = ᾱ

To prove that z̄ is a solution of the LCP, it is sufficient to show that z̄_i > 0 implies w̄_i = 0. But if z̄_i > 0 then ū_i = z̄_i > 0 and, by the complementarity between ū and β̄, β̄_i = 0 = 1 − λ̄_0 − x̄_i. Since λ̄_0 < 1, then x̄_i > 0 and w̄_i = 0. This proves the theorem.

This theorem enables us to solve the LCP by processing the PGLCP (39). Since the matrix (40) is PSD, a solution of this PGLCP can be found by computing a stationary point of the associated nonlinear program with zero lower bounds discussed in section 2. After finding such a point, there are two possible cases:

i) λ̄_0 < 1 and (z̄, w̄) is a solution of the LCP.

ii) λ̄_0 = 1 and (z̄, w̄) may or may not be a solution of the LCP.

In the first case, the procedure has found a solution of the LCP. If in the second case (z̄, w̄) is not a solution of the LCP (z̄_i w̄_i > 0 for at least one i), then another stationary point of the associated nonlinear program has to be computed.

The computation of stationary points for the nonlinear program (12) associated with the PGLCP (39) can nowadays be done in a very efficient way, as there exist good algorithms to perform this task [4, 5, 6, 7, 10, 14, 34]. The main problem is to get an initial point for the algorithm that leads to a stationary point that is also a solution of the LCP. Recently there has been some work on the design of procedures that try to find a global minimum of an optimization problem by a clever choice of initial points. A heuristic procedure of this type has to be designed in order to fully exploit Theorem 1 for the solution of LCPs.
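A sketch of how the data of (39) might be assembled from (q, M), together with the λ_0 test of Theorem 2, is given below. This is illustrative code only: the block signs follow the form of (39)–(40) as written above, and the choice of merit-function solver is left open.

```python
import numpy as np

def lcp_to_pglcp(q, M):
    """Assemble the PGLCP (39) from the LCP (q, M).

    Complementary pairs: (x, w), (u, beta), (lambda0, gamma0); the remaining
    variable z enters only the constraint alpha = q + M z >= 0.
    The matrix (40) acting on (x, u, lambda0) is skew-symmetric, hence PSD."""
    n = q.size
    e = np.ones(n)
    # rows ordered as (w, beta, gamma0); columns as (x, u, lambda0)
    M40 = np.block([
        [np.zeros((n, n)),  np.eye(n),        np.zeros((n, 1))],
        [-np.eye(n),        np.zeros((n, n)), -e.reshape(-1, 1)],
        [np.zeros((1, n)),  e.reshape(1, -1), np.zeros((1, 1))],
    ])
    # coefficient of z in the rows (w, beta, gamma0)
    Nz = np.vstack([M - np.eye(n), np.zeros((n, n)), -e.reshape(1, -1)])
    q39 = np.concatenate([q, e, [0.0]])
    return M40, Nz, q39

def extract_lcp_solution(z, w, lam0, tol=1e-8):
    """Theorem 2: if lambda0 < 1 at a solution of (39), (z, w) solves the LCP."""
    if lam0 < 1.0 - tol:
        return z, w
    return None   # lambda0 = 1: no conclusion, try another stationary point
```

In practice these blocks would be passed to a bound-constrained merit-function minimization as in section 2, restarting from a different initial point whenever the test above returns no conclusion (λ_0 = 1).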

5 Computational Experience

In this section we report some computational experience, on a Digital AlphaServer 5/300 running Digital Unix 4.0B, with PGLCPs that arise in the solution of bilinear programs and LCPs by exploiting the reformulations discussed in the two previous sections. These PGLCPs are solved by computing stationary points of merit functions of the form (12). We have chosen the code LANCELOT [7] for this task, since it is nowadays accepted as a robust code for processing this type of nonlinear program.

In our first experience, we have solved some GLCPs that are the constraint sets of MINPGLCPs associated with Concave Quadratic Programs (CQP), according to the developments described in section 3. We start by describing the test problems PQ10, ..., PQ18 that have been used in our experiences.

i) PQ10. These are knapsack problems of the form

    Minimize  e^T x − x^T x
    subject to  a^T x ≥ b,  −a^T x ≥ −b
                0 ≤ x ≤ e

where e ∈ ℝ^n is a vector of ones, a ∈ ℝ^n is a vector whose components are random numbers belonging to the interval [1, 50] and b is a positive real number satisfying

    b = Σ_{i ∈ I} a_i

Here I is a subset of {1, ..., n} with cardinality n/4 [PQ10(n/4)], n/2 [PQ10(n/2)] and 3n/4 [PQ10(3n/4)]. These knapsack problems are transformed into MINPGLCPs according to the process explained in section 3. The constraint set of this nonconvex program was the PGLCP to be solved in this experience.

ii) PQ11, ..., PQ18. These are the CQP test problems described in [13]. As before, these CQPs are transformed into MINPGLCPs and the PGLCPs to be tested in our experiences are the constraint sets of these nonconvex programs.

The results of the solution of the GLCPs by using the merit functions (23) and (24) are displayed in Table 1 under the headings OPT1 and OPT2, respectively. In this table, IT and CPU represent the number of iterations and the CPU time taken by LANCELOT to get a stationary point for these functions. Furthermore, VALUEF gives the value of the corresponding merit function at this stationary point. We recall that in theory this value should be equal to zero.

By looking at the figures presented in Table 1, we come to the conclusion that LANCELOT is able to find a stationary point for both merit functions in a reasonable number of iterations and amount of CPU time. In fact, only three problems require more than 30 iterations to get a stationary point for the merit function (23). Furthermore, LANCELOT usually requires more iterations for finding a stationary point of the merit function (24), but the gap is not large. It is, however, interesting to note that the ratio CPU/IT is usually smaller for this latter function.

In our second experience we have considered some LCPs taken from known sources and their equivalent PGLCPs, obtained according to the process explained in section 4. As before, these PGLCPs are solved by computing stationary points of the merit functions (23) and (24). We start by describing the test problems used in this second experience.

PROB1. This is the LCP discussed in [35], where q ∈ ℝ^n is a vector with all components equal to −1 and M is a lower triangular P-matrix defined by

    m_ii = 1, i = 1, ..., n
    m_ij = 2 for i > j
    m_ij = 0 for i < j
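For instance, the data of PROB1 can be generated directly from this definition; a minimal illustrative sketch (not code from the paper) follows.

```python
import numpy as np

def prob1_data(n):
    """LCP data of PROB1: lower triangular P-matrix with unit diagonal and
    entries 2 below the diagonal; q = -e, as in the description above."""
    M = np.tril(2.0 * np.ones((n, n)), k=-1) + np.eye(n)
    q = -np.ones(n)
    return q, M

q, M = prob1_data(4)
print(M)
# [[1. 0. 0. 0.]
#  [2. 1. 0. 0.]
#  [2. 2. 1. 0.]
#  [2. 2. 2. 1.]]
```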

Table 1: Solution of PGLCPs associated with Concave Quadratic Programs. For each test problem (PQ10(n/4), PQ10(n/2), PQ10(3n/4), each solved for several dimensions N, and PQ11–PQ18) the table reports, under the headings OPT1 and OPT2, the number of iterations IT, the CPU time and the final merit-function value VALUEF obtained by LANCELOT for the merit functions (23) and (24), respectively.

PROB2. This LCP also appears in [35] and is defined by q_i = −1 for all i = 1, ..., n and M = L^T L, where L is the matrix of PROB1. Hence M is a symmetric positive definite matrix.

PROB3. This LCP has been introduced by Chandrasekaran, Pang and Stone and is also presented in [35]. The vector q again satisfies q_i = −1 for all i = 1, ..., n and the matrix M is defined as follows:

    m_ii = 1, i = 1, ..., n
    m_ij = 2 if j > i and i + j is odd
    m_ij = 1 if j > i and i + j is even
    m_ij = 1 if j < i and i + j is odd
    m_ij = 2 if j < i and i + j is even

It is possible to show that M is a positive semi-definite matrix.

PROB4. This problem is also discussed in [35] and considers the vector q such that q_i = −1 for all i = 1, ..., n and M to be the well-known Hilbert matrix defined by

    m_ij = 1/(i + j − 1)  for all i, j = 1, ..., n.

Again M is a positive semi-definite matrix.

PROB5, 6, 7. Consider again the knapsack problem

    a^T z = b,  z_i ∈ {0, 1}, i = 1, ..., n

where, as before, the components of the vector a are random numbers belonging to the interval [1, 50] and b is the positive real number

    b = Σ_{i=1}^{n/2} a_i

The LCPs of PROB5, 6 and 7 are LCP formulations of this problem that have appeared in the literature [26, 35, 36].

To get PROB5, we consider the LCP defined by

    q = [e; −b; b],   M = [−I_n, 0, 0; a^T, −α, 0; −a^T, 0, −β]

where I_n is the identity matrix of order n and e ∈ ℝ^n is a vector of ones. The constants α and β are chosen in order for M to be negative semi-definite (NSD) or indefinite (IND). In the first case α and β should satisfy

    α > k a^T a / 4,   β > k α a^T a / (4α − a^T a)

where k is a real number greater than one. This leads to PROB5(NSD). On the other hand, M is IND if α and β satisfy

    α > k a^T a / 4,   β < (1/k) α a^T a / (4α − a^T a)

These choices of α and β lead to PROB5(IND).

PROB6 is the LCP formulation of the knapsack problem discussed in [36] and is given by

    q = [a_1, ..., a_n, −b, b]^T,   M = [−I_n, e, −e; e^T, −2n, 0; −e^T, 0, −2n]

where, as before, e is a vector of ones and I_n is the identity matrix of order n. As is discussed in [36], the matrix M is symmetric negative semi-definite.

Finally, PROB7 is the LCP formulation discussed in [26]. Its vector q is formed from repeated copies of a fixed vector p, and its matrix M is built from repeated diagonal blocks B together with border rows involving the scalar b and the vector ā = (ā_1, ..., ā_{4n+2})^T ∈ ℝ^{4n+2}, where

    ā_i = a_i if i = 4j − 3, j = 1, ..., n
    ā_i = 0 otherwise

and the entries of p and B are as given in [26]. As discussed in [26], the matrix M has nonnegative principal minors (that is, M is a P_0-matrix), but it is not positive semi-definite.

PROB8, 9. These are structured LCPs that are formulations of nonzero-sum bimatrix games [8, 35]. The LCP of PROB8 takes the form

    q = [e_m; e_r],   M = [0, A; B, 0]

where e_j is a vector of ones of order j and A, B are positive matrices whose elements are random numbers belonging to the interval [1, 50]. On the other hand, the vector q and the matrix M of the LCP corresponding to PROB9 are given by

    q = [e_m; e_r],   M = [0, A; B, 0]

For each of these LCPs we have generated four problems, differing in their dimension n. The results of the solution of these LCP test problems by processing their equivalent PGLCPs are displayed in Table 2. We recall that the LCP is equivalent to a PGLCP with a parameter λ_0. If λ_0 < 1 in a solution of this PGLCP, then a solution of the LCP is at hand. Otherwise (λ_0 = 1) no conclusion can be drawn about the existence of a solution of the LCP, but usually the solution of the PGLCP does not lead to a solution of the LCP. As before, the PGLCP is solved by computing a stationary point of the associated merit functions (23) or (24). The computational effort for performing this task is displayed under the headings OPT1 and OPT2, respectively, by stating the number of iterations (IT) and the CPU time (CPU) that LANCELOT has required to get a stationary point in each case. In this table, VALUEF continues to represent the value of the merit function at this stationary point and LAMBDA gives the value of the variable λ_0 at this solution. So LAMBDA < 1 means that a solution of the LCP has been found. Furthermore, we write an asterisk when a solution of the LCP has been found in the case LAMBDA = 1.

The results displayed in Table 2 lead to conclusions similar to those drawn for the CQPs about the ability of LANCELOT to compute stationary points that are solutions of the PGLCP with a small amount of effort. As before, the natural merit function (23) seems to be a better choice in terms of the number of iterations that LANCELOT requires to get a stationary point that solves the PGLCP.
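Returning to the bimatrix-game problems PROB8 and PROB9, random instances of this structure are easy to generate. The sketch below is illustrative only: it uses the standard bimatrix-game LCP with q equal to a vector of minus ones, which is an assumption here and not necessarily the exact sign convention of PROB8 and PROB9.

```python
import numpy as np

def bimatrix_game_lcp(m, r, rng=None):
    """Random LCP in the spirit of PROB8/PROB9: M = [[0, A], [B, 0]] with
    positive A (m x r) and B (r x m) drawn from [1, 50], and q = -e.
    The sign of q is an assumption (standard bimatrix-game LCP)."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(1.0, 50.0, size=(m, r))
    B = rng.uniform(1.0, 50.0, size=(r, m))
    M = np.block([[np.zeros((m, m)), A],
                  [B, np.zeros((r, r))]])
    q = -np.ones(m + r)
    return q, M

q, M = bimatrix_game_lcp(3, 2, rng=0)
print(M.shape, q)
```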

Table 2: Solution of PGLCPs associated with LCPs. For each test problem (PROB1–PROB4, PROB5(NSD), PROB5(IND) and PROB6–PROB9, each solved for several dimensions N) the table reports, under the headings OPT1 and OPT2, the number of iterations IT, the CPU time, the final merit-function value VALUEF and the value LAMBDA of the variable λ_0 obtained by LANCELOT for the merit functions (23) and (24), respectively; an asterisk marks the cases with LAMBDA = 1 in which a solution of the LCP was nevertheless found.

The results also show that in general the solution of the PGLCP is not a solution of the LCP. However, we have only presented the numerical results that have been achieved by using LANCELOT with its recommended starting point. A heuristic procedure to get a good initial point for the local solver (LANCELOT or another) that leads to a stationary point of the merit function that is also a solution of the LCP will certainly be an important topic for future research. This will enable solving NP-hard LCPs and bilinear programs by nonenumerative techniques.

6 Conclusions

In this paper we have shown that a solution of a polynomial General Linear Complementarity Problem (PGLCP) can be found by computing a stationary point of an appropriate merit function. This result has important implications for the solution of bilinear and concave quadratic programs and zero-one integer programming problems. We have also shown that any Linear Complementarity Problem (LCP) can be reduced to a PGLCP. Hence, under certain conditions, a solution of the LCP can be found by computing a stationary point of an appropriate merit function. Some computational experience with concave quadratic programs, knapsack problems and LCPs was included and showed the appropriateness of solving the associated PGLCPs by computing stationary points of the corresponding merit functions. We believe that these conclusions will have an important effect on the solution of these difficult nonconvex problems, particularly if we can design heuristic procedures capable of providing good starting points for the local search techniques that are employed to solve the PGLCP. This is a topic that deserves research in the future.

References

[1] F. Al-Khayyal, An implicit enumeration procedure for solving all linear complementarity problems, Mathematical Programming Study, 31 (1987).

[2] F. Al-Khayyal, On solving linear complementarity problems as bilinear programs, Arabian Journal for Science and Engineering, 15 (1990).

[3] F. Al-Khayyal and J. Falk, Jointly constrained biconvex programming, Mathematics of Operations Research, 8 (1983).

[4] D. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont.

[5] T. Coleman and Y. Li, On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds, Mathematical Programming, 67 (1994).

[6] T. Coleman and Y. Li, An interior trust region approach for nonlinear minimization subject to bounds, SIAM Journal on Optimization, 6 (1996).

[7] A. Conn, N. Gould, and P. Toint, LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization, Springer-Verlag, Berlin.

[8] R. Cottle, J. Pang, and R. Stone, The Linear Complementarity Problem, Academic Press, New York.

[9] T. de Luca, F. Facchinei, and C. Kanzow, A semismooth equation approach to the solution of nonlinear complementarity problems, Mathematical Programming, 75 (1996).

[10] F. Facchinei, J. Júdice, and J. Soares, An active set Newton algorithm for large-scale nonlinear programs with box constraints, to appear in SIAM Journal on Optimization.

[11] F. Facchinei and J. Soares, A new merit function for nonlinear complementarity problems and a related algorithm, SIAM Journal on Optimization, 7 (1997).

[12] A. Fischer, An NCP function and its use for the solution of complementarity problems, in Recent Advances in Nonsmooth Optimization, D. Du, L. Qi, and R. Womersley, eds., World Scientific Publishers, Singapore, 1995.

[13] C. Floudas and P. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, Lecture Notes in Computer Science 455, Springer-Verlag, Berlin.

[14] A. Friedlander, J. Martinez, and S. Santos, A new trust-region algorithm for bound constrained optimization, Applied Mathematics and Optimization, 30 (1994).

[15] A. Friedlander, J. Martinez, and S. Santos, Solution of linear complementarity problems using minimization with simple bounds, Journal of Global Optimization, 6 (1995).

[16] M. Fukushima, J. Pang, and Z. Luo, A globally convergent sequential quadratic programming algorithm for mathematical programs with linear complementarity constraints, to appear in Computational Optimization and Applications.

[17] F. Giannessi and F. Niccolucci, Connections between nonlinear and integer programming problems, Symposia Mathematica, 19 (1976).

[18] R. Horst, P. Pardalos, and N. Thoai, Introduction to Global Optimization, Kluwer Academic Publishers, Boston.

[19] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer-Verlag, Berlin.

[20] J. Júdice and A. Faustino, An experimental investigation of enumerative methods for the linear complementarity problem, Computers and Operations Research, 15 (1988).

[21] J. Júdice and A. Faustino, A computational analysis of LCP methods for bilinear and concave quadratic programming, Computers and Operations Research, 18 (1991).

[22] J. Júdice and A. Faustino, A sequential LCP algorithm for bilevel linear programming, Annals of Operations Research, 34 (1992).

[23] J. Júdice and A. Faustino, The linear-quadratic bilevel programming problem, Journal of Information Systems and Operational Research, 32 (1994).

[24] J. Júdice and G. Mitra, Reformulations of mathematical programming problems as linear complementarity problems and an investigation of their solution methods, Journal of Optimization Theory and Applications, 57 (1988).

[25] J. Júdice and L. Vicente, On the solution and complexity of a generalized linear complementarity problem, Journal of Global Optimization, 4 (1994).

[26] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise, A Unified Approach to Interior-Point Algorithms for Linear Complementarity Problems, Lecture Notes in Computer Science 538, Springer-Verlag, Berlin.

[27] H. Konno, Bilinear programming, part 2: applications of bilinear programming, Technical Report 71-10, Department of Operations Research, Stanford University, Stanford, USA.

[28] H. Konno, Maximization of a convex quadratic function under linear constraints, Mathematical Programming, 11 (1976).

[29] H. Konno, A cutting-plane algorithm for solving bilinear programs, Mathematical Programming, 12 (1977).

[30] Z. Luo, J. Pang, and D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, New York.

[31] O. Mangasarian, Equivalence of the complementarity problem to a system of nonlinear equations, SIAM Journal on Applied Mathematics, 31 (1976).

[32] O. Mangasarian, Misclassification minimization, Journal of Global Optimization, 5 (1994).

[33] O. Mangasarian, The linear complementarity problem as a separable bilinear program, Journal of Global Optimization, 6 (1995).

[34] J. Moré, Global methods for nonlinear complementarity problems, Technical Report, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA.

[35] K. Murty, Linear Complementarity, Linear and Nonlinear Programming, Heldermann Verlag, Berlin.

[36] K. Murty and J. Júdice, On the complexity of finding stationary points of nonconvex quadratic programs, Opsearch, 33 (1996).

[37] P. Pardalos and J. Rosen, Constrained Global Optimization: Algorithms and Applications, Lecture Notes in Computer Science 268, Springer-Verlag, Berlin.

[38] H. Sherali and A. Alameddine, A new reformulation-linearization technique for solving bilinear programming problems, Journal of Global Optimization, 2 (1992).

[39] H. Sherali, R. Krishnamurthy, and F. Al-Khayyal, An enhanced intersection cutting plane approach for linear complementarity problems, to appear in Journal of Optimization Theory and Applications.

[40] E. Simantiraki and D. Shanno, An infeasible interior-point method for linear complementarity problems, SIAM Journal on Optimization, 7 (1997).

[41] P. Tseng, N. Yamashita, and M. Fukushima, Equivalence of linear complementarity problems to differentiable minimization: a unified approach, SIAM Journal on Optimization, 6 (1996).

[42] S. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia.

[43] Y. Ye, A fully polynomial-time approximation algorithm for computing a stationary point of the general linear complementarity problem, Mathematics of Operations Research, 18 (1993).


More information

Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem

Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem Journal of Applied Mathematics Volume 2012, Article ID 219478, 15 pages doi:10.1155/2012/219478 Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity

More information

CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY

CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY IN NONLINEAR PROGRAMMING 1 A. L. DONTCHEV Mathematical Reviews, Ann Arbor, MI 48107 and R. T. ROCKAFELLAR Dept. of Math., Univ. of Washington, Seattle, WA 98195

More information

Solving a Signalized Traffic Intersection Problem with NLP Solvers

Solving a Signalized Traffic Intersection Problem with NLP Solvers Solving a Signalized Traffic Intersection Problem with NLP Solvers Teófilo Miguel M. Melo, João Luís H. Matias, M. Teresa T. Monteiro CIICESI, School of Technology and Management of Felgueiras, Polytechnic

More information

Exchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality

Exchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality Exchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality Yinyu Ye April 23, 2005; revised August 5, 2006 Abstract This paper studies the equilibrium property and algorithmic

More information

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex

More information

Merit functions and error bounds for generalized variational inequalities

Merit functions and error bounds for generalized variational inequalities J. Math. Anal. Appl. 287 2003) 405 414 www.elsevier.com/locate/jmaa Merit functions and error bounds for generalized variational inequalities M.V. Solodov 1 Instituto de Matemática Pura e Aplicada, Estrada

More information

Priority Programme 1962

Priority Programme 1962 Priority Programme 1962 An Example Comparing the Standard and Modified Augmented Lagrangian Methods Christian Kanzow, Daniel Steck Non-smooth and Complementarity-based Distributed Parameter Systems: Simulation

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Convex Quadratic Relaxations of Nonconvex Quadratically Constrained Quadratic Progams

Convex Quadratic Relaxations of Nonconvex Quadratically Constrained Quadratic Progams Convex Quadratic Relaxations of Nonconvex Quadratically Constrained Quadratic Progams John E. Mitchell, Jong-Shi Pang, and Bin Yu Original: June 10, 2011 Abstract Nonconvex quadratic constraints can be

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint

More information

Polynomial complementarity problems

Polynomial complementarity problems Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016

More information

School of Mathematics, the University of New South Wales, Sydney 2052, Australia

School of Mathematics, the University of New South Wales, Sydney 2052, Australia Computational Optimization and Applications, 13, 201 220 (1999) c 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. On NCP-Functions * DEFENG SUN LIQUN QI School of Mathematics,

More information

Worst Case Complexity of Direct Search

Worst Case Complexity of Direct Search Worst Case Complexity of Direct Search L. N. Vicente May 3, 200 Abstract In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES Francisco Facchinei 1, Andreas Fischer 2, Christian Kanzow 3, and Ji-Ming Peng 4 1 Università di Roma

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has

More information

1. Introduction. We consider the classical variational inequality problem [1, 3, 7] VI(F, C), which is to find a point x such that

1. Introduction. We consider the classical variational inequality problem [1, 3, 7] VI(F, C), which is to find a point x such that SIAM J. CONTROL OPTIM. Vol. 37, No. 3, pp. 765 776 c 1999 Society for Industrial and Applied Mathematics A NEW PROJECTION METHOD FOR VARIATIONAL INEQUALITY PROBLEMS M. V. SOLODOV AND B. F. SVAITER Abstract.

More information

16.1 L.P. Duality Applied to the Minimax Theorem

16.1 L.P. Duality Applied to the Minimax Theorem CS787: Advanced Algorithms Scribe: David Malec and Xiaoyong Chai Lecturer: Shuchi Chawla Topic: Minimax Theorem and Semi-Definite Programming Date: October 22 2007 In this lecture, we first conclude our

More information

arxiv:math/ v1 [math.co] 23 May 2000

arxiv:math/ v1 [math.co] 23 May 2000 Some Fundamental Properties of Successive Convex Relaxation Methods on LCP and Related Problems arxiv:math/0005229v1 [math.co] 23 May 2000 Masakazu Kojima Department of Mathematical and Computing Sciences

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Convex Expected Residual Models for Stochastic Affine Variational Inequality Problems and Its Application to the Traffic Equilibrium Problem

Convex Expected Residual Models for Stochastic Affine Variational Inequality Problems and Its Application to the Traffic Equilibrium Problem Convex Expected Residual Models for Stochastic Affine Variational Inequality Problems and Its Application to the Traffic Equilibrium Problem Rhoda P. Agdeppa, Nobuo Yamashita and Masao Fukushima Abstract.

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow

AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics and Statistics

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

1. Introduction. We consider the mathematical programming problem

1. Introduction. We consider the mathematical programming problem SIAM J. OPTIM. Vol. 15, No. 1, pp. 210 228 c 2004 Society for Industrial and Applied Mathematics NEWTON-TYPE METHODS FOR OPTIMIZATION PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS A. F. IZMAILOV AND M. V.

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Equilibrium Programming

Equilibrium Programming International Conference on Continuous Optimization Summer School, 1 August 2004 Rensselaer Polytechnic Institute Tutorial on Equilibrium Programming Danny Ralph Judge Institute of Management, Cambridge

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren SF2822 Applied Nonlinear Optimization Lecture 9: Sequential quadratic programming Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH / 24 Lecture 9, 207/208 Preparatory question. Try to solve theory

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Machine Learning. Support Vector Machines. Manfred Huber

Machine Learning. Support Vector Machines. Manfred Huber Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data

More information

An accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems

An accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 0 (207), 4822 4833 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa An accelerated Newton method

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

1.The anisotropic plate model

1.The anisotropic plate model Pré-Publicações do Departamento de Matemática Universidade de Coimbra Preprint Number 6 OPTIMAL CONTROL OF PIEZOELECTRIC ANISOTROPIC PLATES ISABEL NARRA FIGUEIREDO AND GEORG STADLER ABSTRACT: This paper

More information

The Trust Region Subproblem with Non-Intersecting Linear Constraints

The Trust Region Subproblem with Non-Intersecting Linear Constraints The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region

More information