A Smoothing SQP Method for Mathematical Programs with Linear Second-Order Cone Complementarity Constraints


Hiroshi Yamamura, Takayuki Okuno, Shunsuke Hayashi and Masao Fukushima

June 14, 2012

Abstract

In this paper, we focus on the mathematical program with second-order cone (SOC) complementarity constraints, which contains the well-known mathematical program with nonnegative complementarity constraints as a subclass. For solving such a problem, we propose a smoothing-based sequential quadratic programming (SQP) method. We first replace the SOC complementarity constraints with equality constraints using the smoothed natural residual function, and then apply the SQP method to the smoothed problem while decreasing the smoothing parameter. We show that the proposed algorithm possesses the global convergence property under the Cartesian P and nondegeneracy assumptions. We finally observe the effectiveness of the algorithm by means of numerical experiments.

Key words: mathematical programs with equilibrium constraints; sequential quadratic programming; second-order cone complementarity

Mathematics Subject Classification: 65K05, 90C30, 90C33

1 Introduction

In this paper, we focus on the following mathematical program with second-order cone (SOC) complementarity constraints, abbreviated as MPSOCC:

  Minimize_{x,y,z}  f(x, y)
  subject to  Ax ≤ b,  z = Nx + My + q,  K ∋ y ⊥ z ∈ K,      (1.1)

where f : R^{n+m} → R is a continuously differentiable function, A ∈ R^{p×n}, b ∈ R^p, N ∈ R^{m×n}, M ∈ R^{m×m} and q ∈ R^m are given matrices and vectors, ⊥ denotes perpendicularity, and K is the Cartesian product of second-order cones, that is,

  K := K^{m_1} × K^{m_2} × ⋯ × K^{m_l} ⊆ R^{m_1} × R^{m_2} × ⋯ × R^{m_l} = R^m

with

  K^{m_i} := { u = (u_1, u_2) ∈ R × R^{m_i−1} : u_1 ≥ ‖u_2‖ }  (m_i ≥ 2),  K^1 := R_+ = { u ∈ R : u ≥ 0 }  (m_i = 1).

Throughout the paper, we suppose that m_i ≥ 2 for each i.

The mathematical program with equilibrium constraints (MPEC) [15] has been studied extensively, since it finds wide applications such as design problems in engineering, equilibrium problems in economics, and game-theoretic multi-level optimization problems. In particular, the equilibrium constraints in MPECs are often written as linear or nonlinear complementarity constraints. Such an MPEC is also called a mathematical program with complementarity constraints (MPCC). When m_i = 1 for all i, i.e., K = R^m_+, MPSOCC (1.1) reduces to the MPCC, for which many algorithms have been proposed. For example, Fukushima and Tseng [12] proposed an active set algorithm, and proved that any accumulation point of the sequence generated by the algorithm

Technical Report 2012-5, July 14, 2012. Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan ({yamamura, t_okuno, shunhaya, fuku}@amp.i.kyoto-u.ac.jp).

is a B-stationary point under the uniform linear independence constraint qualification (LICQ) on the ε-feasible set. Luo, Pang, and Ralph [15] proposed a piecewise sequential quadratic programming (SQP) algorithm, and showed that the generated sequence converges to a B-stationary point locally superlinearly or quadratically under the LICQ and second-order sufficiency conditions. Fukushima, Luo, and Pang [10] proposed an SQP-type algorithm, and showed that the sequence generated by the algorithm globally converges to a B-stationary point under the nondegeneracy condition at the limit point.

Problems with SOC constraints have also attracted the attention of many researchers. One of the typical problems is the second-order cone program (SOCP) [1]. The SOCP has many applications such as antenna array weight design, finite impulse response (FIR) filter design, portfolio optimization, and magnetic shield design optimization. Moreover, the SOCP includes many classes of problems such as the linear program (LP), the convex quadratic program (QP), etc. The second-order cone complementarity problem (SOCCP) [5, 7, 11] is another type of problem involving SOC constraints. Fukushima, Luo and Tseng [11] studied smoothing functions for the Fischer-Burmeister function and the natural residual function with respect to the SOC complementarity condition. Using those smoothing functions, Hayashi, Yamashita and Fukushima [14] proposed a globally and quadratically convergent algorithm based on the smoothing and regularization methods. As an application of the SOCCP, Nishimura, Hayashi and Fukushima [16] studied the SOCCP reformulation of the robust Nash equilibrium problem in an N-person non-cooperative game.

As mentioned in the last two paragraphs, there has been much research on MPECs with nonnegative complementarity constraints and on optimization and complementarity problems with SOC constraints. However, there are only a few studies on MPECs with SOC complementarity constraints. For example, Yan and Fukushima [20] proposed a smoothing method for solving such problems. To show convergence of the algorithm, they assume that the smoothed subproblems are solved exactly, which can hardly be expected in practice. To overcome this difficulty, we propose to combine an SQP-type method with the smoothing method. The proposed method replaces the SOC complementarity condition of MPSOCC (1.1) with a certain vector equation by using a smoothed natural residual function, thereby yielding convex quadratic programming subproblems which can be solved efficiently by any state-of-the-art method such as the active set method and the interior point method. Although our method may be viewed as an extension of the SQP method in [10], the convergence analysis is quite different since it exploits the particular properties of the natural residual associated with the SOC complementarity condition.

This paper is organized as follows. In Section 2, we give some preliminaries. In Section 3, we reformulate MPSOCC (1.1) as a nonlinear programming problem by replacing the second-order cone complementarity constraints by equivalent nonsmooth equality constraints. In Section 4, we introduce a smoothing technique to deal with the nonsmooth constraints. In Section 5, we propose an SQP-type algorithm for solving problem (1.1) and show that the proposed method is well-defined under the Cartesian P₀ property. In Section 6, we show that the proposed algorithm possesses the global convergence property under the nondegeneracy assumption.
In Section 7, we give some numerical examples. In Section 8, we conclude the paper with some remarks.

Throughout the paper, we use the following notations. For a given vector z ∈ R^m, z_i denotes the i-th element of z, while z^i ∈ R^{m_i} denotes the i-th column subvector conforming to the given Cartesian structure of K. For subvectors z^1 ∈ R^{m_1}, z^2 ∈ R^{m_2}, ..., z^l ∈ R^{m_l}, we often let (z^1, z^2, ..., z^l) denote the column vector ((z^1)^⊤, ..., (z^l)^⊤)^⊤ ∈ R^{m_1+⋯+m_l}. For a vector z ∈ R^m, we denote ‖z‖_1 := Σ_{i=1}^m |z_i|, ‖z‖ := √(z^⊤z) and ‖z‖_∞ := max_{1≤i≤m} |z_i|. For a matrix M ∈ R^{m×m}, ‖M‖ denotes the operator norm defined by ‖M‖ = max_{‖x‖=1} ‖Mx‖. For matrices M, N ∈ R^{m×m}, M ≻ N (M ⪰ N) denotes that M − N is a positive (semi-)definite matrix. We denote the interior and the boundary of K by int K and bd K, respectively.*1 For vectors y, z ∈ R^m, y ⪰_K z and y ≻_K z mean y − z ∈ K and y − z ∈ int K, respectively. We denote the nonnegative cone in R^m and its interior by R^m_+ := {z ∈ R^m : z_i ≥ 0 (i = 1, 2, ..., m)} and R^m_{++} := {z ∈ R^m : z_i > 0 (i = 1, 2, ..., m)}, respectively. Finally, for a set C ⊆ R^m and a vector z ∈ C, we denote the normal cone [18] of C at z by N_C(z).

2 Preliminaries

2.1 Spectral factorization and natural residual

The proposed algorithm relies on the fact that the SOC complementarity condition K ∋ y ⊥ z ∈ K can be rewritten as a system of equations by means of the natural residual.

*1 Note that int K^m = {(z_1, z_2) ∈ R × R^{m−1} : z_1 > ‖z_2‖} and bd K^m = {(z_1, z_2) ∈ R × R^{m−1} : z_1 = ‖z_2‖}. In addition, for K = K^{m_1} × ⋯ × K^{m_l}, we have int K = int K^{m_1} × ⋯ × int K^{m_l} and bd K = K \ int K.

To be specific, we first recall the spectral factorization of a vector with respect to the SOC K^m.

Definition 2.1. For any vector z := (z_1, z_2) ∈ R × R^{m−1}, we define the spectral factorization with respect to K^m as z = λ_1 c^1 + λ_2 c^2, where λ_1 and λ_2 are the spectral values given by

  λ_j = z_1 + (−1)^j ‖z_2‖,  j = 1, 2,

and c^1 and c^2 are the spectral vectors given by

  c^j = (1/2) (1, (−1)^j z_2/‖z_2‖)  if z_2 ≠ 0,  c^j = (1/2) (1, (−1)^j v)  if z_2 = 0,  j = 1, 2,

where v ∈ R^{m−1} is an arbitrary vector such that ‖v‖ = 1.

By using the spectral factorization, we can write the Euclidean projection onto K^m explicitly as follows [11]:

  P_{K^m}(z) := argmin_{z̄ ∈ K^m} ‖z̄ − z‖ = max{0, λ_1} c^1 + max{0, λ_2} c^2,

where λ_j and c^j (j = 1, 2) are the spectral values and the spectral vectors of z, respectively. Now, let us define the natural residual for the SOC complementarity condition by using the Euclidean projection.

Definition 2.2. Let y := (y^1, y^2, ..., y^l) and z := (z^1, z^2, ..., z^l) ∈ R^{m_1} × R^{m_2} × ⋯ × R^{m_l} = R^m be arbitrary vectors. Then, the natural residual function Φ : R^m × R^m → R^m with respect to K = K^{m_1} × K^{m_2} × ⋯ × K^{m_l} is defined as

  Φ(y, z) := y − P_K(y − z) = (φ_1(y^1, z^1), ..., φ_l(y^l, z^l)) ∈ R^{m_1} × ⋯ × R^{m_l},      (2.1)

where φ_i(y^i, z^i) := y^i − P_{K^{m_i}}(y^i − z^i), i = 1, 2, ..., l.

It can be shown [11] that

  φ_i(y^i, z^i) = 0 ⟺ K^{m_i} ∋ y^i ⊥ z^i ∈ K^{m_i},  Φ(y, z) = 0 ⟺ K ∋ y ⊥ z ∈ K.
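To make Definitions 2.1 and 2.2 concrete, the following is a minimal NumPy sketch of the spectral factorization, the projection P_{K^m}, and the natural residual Φ; the function names are ours, not the paper's.

```python
import numpy as np

def spectral(z):
    """Spectral factorization z = lam1*c1 + lam2*c2 w.r.t. K^m (Definition 2.1)."""
    z1, z2 = z[0], z[1:]
    nz2 = np.linalg.norm(z2)
    if nz2 > 0:
        w = z2 / nz2
    else:                          # any unit vector works when z2 = 0
        w = np.zeros_like(z2)
        w[0] = 1.0
    lam1, lam2 = z1 - nz2, z1 + nz2
    c1 = 0.5 * np.concatenate(([1.0], -w))
    c2 = 0.5 * np.concatenate(([1.0], w))
    return lam1, lam2, c1, c2

def proj_soc(z):
    """Euclidean projection onto K^m: max{0, lam1} c1 + max{0, lam2} c2."""
    lam1, lam2, c1, c2 = spectral(z)
    return max(0.0, lam1) * c1 + max(0.0, lam2) * c2

def nat_residual(y, z, dims):
    """Natural residual Phi(y, z) for K = K^{m_1} x ... x K^{m_l} (Definition 2.2)."""
    out, i = [], 0
    for m in dims:
        out.append(y[i:i + m] - proj_soc(y[i:i + m] - z[i:i + m]))
        i += m
    return np.concatenate(out)
```

For randomly drawn y and z one can check numerically that nat_residual(y, z, dims) vanishes exactly when y ∈ K, z ∈ K and y^⊤z = 0.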

The following proposition states a property of the function Φ when y − z ∉ bd(K ∪ (−K)).

Proposition 2.1. Let y, z ∈ R^m be chosen so that y − z ∉ bd(K ∪ (−K)). Then, the function Φ : R^m × R^m → R^m defined by (2.1) is continuously differentiable at (y, z), and the following equality holds:

  ∇_yΦ(y, z) + ∇_zΦ(y, z) = I_m,

where I_m ∈ R^{m×m} denotes the identity matrix.

Proof. It suffices to consider the case where K = K^m. Let λ_1, λ_2 ∈ R be the spectral values of y − z defined as in Definition 2.1. Note that, from y − z ∉ bd(K^m ∪ (−K^m)), we have λ_1 ≠ 0 and λ_2 ≠ 0. Then, from [14, Proposition 4.8], the Clarke subdifferential ∂P_{K^m}(y − z) is explicitly given as

  ∇P_{K^m}(y − z) = I_m  (λ_1 > 0, λ_2 > 0),
  ∇P_{K^m}(y − z) = (λ_2/(λ_2 − λ_1)) I_m + W  (λ_1 < 0, λ_2 > 0),
  ∇P_{K^m}(y − z) = O  (λ_1 < 0, λ_2 < 0),

where

  W := (1/2) [ −r_1  r_2^⊤ ; r_2  −r_1 r_2 r_2^⊤ ],  (r_1, r_2) := ( (y_1 − z_1)/‖y_2 − z_2‖, (y_2 − z_2)/‖y_2 − z_2‖ ).

Thus P_{K^m} is differentiable at y − z. This fact readily implies the continuous differentiability of Φ at (y, z) since Φ(y, z) = y − P_{K^m}(y − z). We next show the second half of the proposition. By an easy calculation, we have ∇_yΦ(y, z) = I_m − ∇P_{K^m}(y − z). Similarly, we have ∇_zΦ(y, z) = ∇P_{K^m}(y − z). Hence we obtain the desired equality.

2.2 Cartesian P and P₀ matrices

In this subsection, we introduce the Cartesian P and the Cartesian P₀ matrices. The concept of the Cartesian P (P₀) matrix is a natural extension of the well-known P (P₀) matrix [8]. Although the Cartesian P (P₀) matrix can be defined not only for the SOC but also for the semidefinite cone [6] and the symmetric cone [13], we restrict ourselves to the case of SOCs.

Definition 2.3. Suppose that the Cartesian structure of K ⊆ R^m is given as K := K^{m_1} × K^{m_2} × ⋯ × K^{m_l}. Then, M ∈ R^{m×m} is called

(a) a Cartesian P₀ matrix if, for every nonzero z = (z^1, ..., z^l) ∈ R^m = R^{m_1} × ⋯ × R^{m_l}, there exists an index i ∈ {1, ..., l} such that z^i ≠ 0 and (z^i)^⊤(Mz)^i ≥ 0;

(b) a Cartesian P matrix if, for every nonzero z = (z^1, ..., z^l) ∈ R^m = R^{m_1} × ⋯ × R^{m_l}, there exists an index i ∈ {1, ..., l} such that (z^i)^⊤(Mz)^i > 0.

Here, (Mz)^i ∈ R^{m_i} denotes the i-th subvector of Mz ∈ R^m conforming to the Cartesian structure of K.

Notice that the definition of the Cartesian P (P₀) property depends on the Cartesian structure of K. In what follows, we assume that the Cartesian structure of K is always given as K = K^{m_1} × K^{m_2} × ⋯ × K^{m_l}. The definition of the classical P (P₀) matrix corresponds to the case where K = R^m_+. It is easily seen that every Cartesian P (P₀) matrix is a P (P₀) matrix [17].
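Since Definition 2.3 quantifies over all nonzero z, the Cartesian P property cannot be certified by finite sampling, but it can be refuted; the following sketch (our own helper, not from the paper) searches for a counterexample.

```python
import numpy as np

def refute_cartesian_P(M, dims, trials=100000, seed=0):
    """Random search for a counterexample to the Cartesian P property
    (Definition 2.3(b)).  Returns a violating z if one is found, else None.
    Sampling can only refute the property, never certify it."""
    rng = np.random.default_rng(seed)
    idx = np.cumsum([0] + list(dims))
    for _ in range(trials):
        z = rng.standard_normal(idx[-1])
        Mz = M @ z
        if all(z[a:b] @ Mz[a:b] <= 0 for a, b in zip(idx[:-1], idx[1:])):
            return z          # no block i has (z^i)^T (Mz)^i > 0
    return None
```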

The following proposition implies that the Cartesian P (P₀) property is preserved under a nonsingular block-diagonal transformation.

Proposition 2.2. Let M ∈ R^{m×m} be any matrix, and let H_i ∈ R^{m_i×m_i} (i = 1, 2, ..., l) be arbitrary nonsingular matrices. Let the matrix M̃ ∈ R^{m×m} be defined by M̃ := H^⊤MH, where H := diag(H_1, ..., H_l). Then, the following statements hold.

(a) If M is a Cartesian P₀ matrix, then M̃ is a Cartesian P₀ matrix.
(b) If M is a Cartesian P matrix, then M̃ is a Cartesian P matrix.

Proof. We first show (b). Let z = (z^1, ..., z^l) ∈ R^m = R^{m_1} × ⋯ × R^{m_l} be an arbitrary nonzero vector. We show that there exists an i ∈ {1, 2, ..., l} such that (z^i)^⊤(M̃z)^i > 0. Note that

  (M̃z)^i = (H^⊤MHz)^i = H_i^⊤ Σ_{k=1}^l M^{ik}H_k z^k,

where (·)^i and M^{ik} denote the i-th subvector and the (i, k)-th block entry, respectively, conforming to the Cartesian structure of K. Hence, we have

  (z^i)^⊤(M̃z)^i = (z^i)^⊤H_i^⊤ Σ_{k=1}^l M^{ik}H_k z^k = ((Hz)^i)^⊤(MHz)^i.

Since M is a Cartesian P matrix and Hz ≠ 0 from the nonsingularity of H, we have (z^i)^⊤(M̃z)^i = ((Hz)^i)^⊤(MHz)^i > 0 for some i. Hence, M̃ is a Cartesian P matrix. We omit the proof of (a) since it can be shown in a similar manner to (b).

3 Reformulation of MPSOCC and relationship between the KKT conditions and B-stationary points

In the previous section, we introduced the natural residual function Φ and observed that the second-order cone complementarity condition K ∋ y ⊥ z ∈ K can equivalently be represented as Φ(y, z) = 0. In this section, we rewrite MPSOCC (1.1) as the following problem, where the SOC complementarity constraint is replaced by the equivalent equality constraint involving the natural residual function Φ:

  Minimize_{x,y,z}  f(x, y)
  subject to  Ax ≤ b,  z = Nx + My + q,  Φ(y, z) = 0.      (3.1)

We also call this problem MPSOCC. MPSOCC (3.1) is a nonsmooth optimization problem since Φ is not differentiable everywhere. However, as claimed in the next proposition, Φ is continuously differentiable at any (y, z) satisfying the following nondegeneracy condition.

Definition 3.1 (Nondegeneracy). Suppose that (y, z) ∈ R^m × R^m satisfies the SOC complementarity condition K ∋ y ⊥ z ∈ K. Moreover, decompose y and z as y = (y^1, y^2, ..., y^l) and z = (z^1, z^2, ..., z^l) ∈ R^{m_1} × R^{m_2} × ⋯ × R^{m_l} = R^m conforming to the Cartesian structure of K. Then, (y, z) is said to be nondegenerate if, for every i = 1, 2, ..., l, one of the following three conditions holds:

(i) y^i ∈ int K^{m_i}, z^i = 0;
(ii) y^i = 0, z^i ∈ int K^{m_i};
(iii) y^i ∈ bd K^{m_i} \ {0}, z^i ∈ bd K^{m_i} \ {0}, (y^i)^⊤z^i = 0.

Proposition 3.1. Let (y, z) ∈ R^m × R^m satisfy the nondegeneracy condition. Then, Φ is continuously differentiable at (y, z).

Proof. The nondegeneracy condition readily yields y − z ∉ bd(K ∪ (−K)). Hence, Φ is continuously differentiable at (y, z) by Proposition 2.1.

Now, let X := {x ∈ R^n : Ax ≤ b}, and let w* := (x*, y*, z*) ∈ R^n × R^m × R^m be a feasible point of MPSOCC (1.1) that satisfies the nondegeneracy condition. By Proposition 3.1, MPSOCC (3.1) can be viewed as a smooth optimization problem in a small neighborhood of w*. Then, the Karush-Kuhn-Tucker (KKT) conditions at w* are represented as

  −(∇_xf(x*, y*), ∇_yf(x*, y*), 0) − (N^⊤u, M^⊤u, −u) − (0, ∇_yΦ(y*, z*)v, ∇_zΦ(y*, z*)v) ∈ N_X(x*) × {0}^{2m},
  Φ(y*, z*) = 0,  Ax* ≤ b,  z* = Nx* + My* + q,      (3.2)

where {0}^{2m} := {0} × {0} × ⋯ × {0} ⊆ R^{2m}, and u ∈ R^m and v ∈ R^m are Lagrange multipliers.

We next consider the stationarity of MPSOCC (1.1) or (3.1). So far, several kinds of stationary points have been studied in the literature on MPECs, e.g., see [19]. Among them, a Bouligand- or B-stationary point is the most desirable, since it is directly related to the first-order optimality condition. Specifically, a B-stationary point for MPSOCC (1.1) is defined as follows.

Definition 3.2 (B-stationarity). Let F ⊆ R^{n+2m} denote the feasible set of MPSOCC (1.1). We say that w* := (x*, y*, z*) ∈ F is a B-stationary point of MPSOCC (1.1) if −(∇f(x*, y*), 0) ∈ N_F(w*) holds.

In what follows, we show that a point satisfying the KKT conditions (3.2) is a B-stationary point. For this purpose, we give three useful lemmas.

Lemma 3.1. [18, Proposition 6.41] Let C := C_1 × C_2 × ⋯ × C_s for nonempty closed sets C_i ⊆ R^{n_i}. Choose ζ := (ζ_1, ζ_2, ..., ζ_s) ∈ C_1 × C_2 × ⋯ × C_s. Then, we have N_C(ζ) = N_{C_1}(ζ_1) × N_{C_2}(ζ_2) × ⋯ × N_{C_s}(ζ_s).

Lemma 3.2. [18, Chapter 6-C] For a continuously differentiable function F : R^p → R^q, let D := {ζ ∈ R^p : F(ζ) = 0}. Choose ζ ∈ D arbitrarily. If ∇F(ζ) has full column rank, then we have N_D(ζ) = ∇F(ζ)R^q := {ζ′ ∈ R^p : ζ′ = ∇F(ζ)v, v ∈ R^q}.

Lemma 3.3. [18, Theorem 6.14] For a continuously differentiable function F : R^p → R^q and a closed set C ⊆ R^p, let D := {ζ ∈ C : F(ζ) = 0}. Choose ζ ∈ D arbitrarily. Then we have N_D(ζ) ⊇ ∇F(ζ)R^q + N_C(ζ).

Now, we show that the KKT conditions (3.2) are sufficient for w* = (x*, y*, z*) to be B-stationary.

Proposition 3.2. Let w* := (x*, y*, z*) be a feasible point of MPSOCC (3.1). Suppose that the nondegeneracy condition holds at (y*, z*). If w* satisfies the KKT conditions (3.2), then w* is a B-stationary point of MPSOCC (1.1).

Proof. Let X := {x ∈ R^n : Ax ≤ b} and Y := {(y, z) ∈ R^m × R^m : Φ(y, z) = 0}. We first note that the nondegeneracy at (y*, z*) implies the continuous differentiability of Φ at (y*, z*) from Proposition 3.1. Choose v ∈ R^m such that ∇Φ(y*, z*)v = 0. From Proposition 2.1, we then have ∇_yΦ(y*, z*)v = 0 and ∇_zΦ(y*, z*)v = (I − ∇_yΦ(y*, z*))v = 0, which readily imply v = 0, and thus ∇Φ(y*, z*) has full column rank. Therefore, by Lemma 3.2 with (p, q) := (2m, m), D := Y and F := Φ, we have

  N_Y(y*, z*) = ∇Φ(y*, z*)R^m.      (3.3)

Then, it holds that

  N_{X×Y}(w*) = N_X(x*) × N_Y(y*, z*) = N_X(x*) × ∇Φ(y*, z*)R^m,      (3.4)

where the first equality follows from Lemma 3.1 and the second equality follows from (3.3). Now, let F ⊆ R^{n+2m} denote the feasible set of MPSOCC (3.1), i.e., F = {(x, y, z) ∈ X × Y : Nx + My − z + q = 0}. Then, from Lemma 3.3, we have

  N_F(w*) ⊇ (N, M, −I_m)^⊤R^m + N_{X×Y}(w*).      (3.5)

Combining (3.4) with (3.5), we obtain N_F(w*) ⊇ (N, M, −I_m)^⊤R^m + N_X(x*) × ∇Φ(y*, z*)R^m, which together with the KKT conditions (3.2) implies −(∇f(x*, y*), 0) ∈ N_F(w*). Thus, w* is a B-stationary point.

4 Smoothing function of the natural residual

The natural residual function Φ given in Definition 2.2 is not differentiable everywhere, and therefore we cannot employ derivative-based algorithms such as Newton's method to solve MPSOCC (3.1). To overcome this difficulty, we utilize a smoothing technique.

Definition 4.1. Let Ψ : R^m → R^m be a nondifferentiable function. Then, the function Ψ_μ : R^m → R^m parametrized by μ > 0 is called a smoothing function of Ψ if it satisfies the following properties: for any μ > 0, Ψ_μ is differentiable on R^m; for any z ∈ R^m, it holds that lim_{μ→0+} Ψ_μ(z) = Ψ(z).

A smoothing function of the natural residual function can be constructed by means of the Chen-Mangasarian (CM) function ĝ : R → R [11].

Definition 4.2. A differentiable convex function ĝ : R → R_+ is called a CM function if

  lim_{α→−∞} ĝ(α) = 0,  lim_{α→∞} (ĝ(α) − α) = 0,  0 < ĝ′(α) < 1  (α ∈ R).      (4.1)

Notice that, if the function p_μ : R → R is defined by p_μ(α) := μĝ(α/μ) with a CM function ĝ and a positive parameter μ, then it becomes a smoothing function for p(α) := max{0, α}. Thanks to this fact, we can next provide a smoothing function P_μ for the projection operator P_K.

Definition 4.3. Let z ∈ R^m be an arbitrary vector decomposed as z = (z^1, z^2, ..., z^l) ∈ R^{m_1} × R^{m_2} × ⋯ × R^{m_l} = R^m conforming to the given Cartesian structure of K. For an arbitrary CM function ĝ : R → R, let g : R^m → R^m be defined as

  g(z) := (g^1(z^1), ..., g^l(z^l)),  g^i(z^i) := ĝ(λ_{i1}) c^{i1} + ĝ(λ_{i2}) c^{i2},      (4.2)

where λ_{ij} ∈ R and c^{ij} ∈ R^{m_i} ((i, j) ∈ {1, 2, ..., l} × {1, 2}) are the spectral values and the spectral vectors of the subvector z^i with respect to K^{m_i}, respectively. Then, the smoothing function P_μ : R^m → R^m of P_K is given as P_μ(z) := μg(z/μ).

Now, by using the above smoothing function P_μ, we can define the smoothing function for the natural residual Φ.

Definition 4.4. Let μ > 0 be arbitrary. Let g : R^m → R^m, g^i : R^{m_i} → R^{m_i} (i = 1, 2, ..., l), and P_μ : R^m → R^m be defined as in Definition 4.3. Then, the smoothing function Φ_μ : R^m × R^m → R^m for the natural residual Φ is given as

  Φ_μ(y, z) := y − P_μ(y − z) = y − μg((y − z)/μ) = ( y^1 − μg^1((y^1 − z^1)/μ), ..., y^l − μg^l((y^l − z^l)/μ) ).      (4.3)

Before closing this section, we provide the following propositions, which will be used in the subsequent analyses.

Proposition 4.1. Let P_μ : R^m → R^m be defined as in Definition 4.3, and choose z ∉ bd(K ∪ (−K)) arbitrarily. Let {z^k} ⊆ R^m and {μ_k} ⊆ R_{++} be arbitrary sequences such that z^k → z and μ_k → 0 as k → ∞. Then, we have

  ∇P_K(z) = lim_{k→∞} ∇P_{μ_k}(z^k).      (4.4)
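Continuing the sketch given after Definition 2.2, the smoothed projection and the smoothed natural residual of Definitions 4.3 and 4.4 can be written as follows, using the CM function ĝ(α) = (√(α² + 4) + α)/2 that the paper adopts in Section 7.

```python
import numpy as np

def ghat(a):
    """CM function chosen in Section 7: ghat(a) = (sqrt(a^2 + 4) + a) / 2."""
    return 0.5 * (np.sqrt(a * a + 4.0) + a)

def smooth_proj(z, mu):
    """Smoothed projection P_mu(z) = mu * g(z/mu) on a single SOC (Definition 4.3);
    the spectral vectors of z and z/mu coincide, so we scale the values only."""
    lam1, lam2, c1, c2 = spectral(z)       # from the sketch after Definition 2.2
    return mu * (ghat(lam1 / mu) * c1 + ghat(lam2 / mu) * c2)

def phi_mu(y, z, mu, dims):
    """Smoothed natural residual Phi_mu(y, z) = y - P_mu(y - z) (Definition 4.4)."""
    out, i = [], 0
    for m in dims:
        out.append(y[i:i + m] - smooth_proj(y[i:i + m] - z[i:i + m], mu))
        i += m
    return np.concatenate(out)
```

With this ĝ one has ρ = ĝ(0) = 1, and the error bound of Proposition 4.4 below can be verified numerically by comparing phi_mu against nat_residual as μ → 0.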

Proof. For simplicity, we consider the case where K = K^m. Let z and z^k be decomposed as z = λ_1c^1 + λ_2c^2 and z^k = λ_1^k c^{1,k} + λ_2^k c^{2,k}, where λ_i, λ_i^k ∈ R are the spectral values, and c^i, c^{i,k} ∈ R^m (i = 1, 2) are the spectral vectors of z and z^k, respectively. Since z ∉ bd(K^m ∪ (−K^m)), P_{K^m} is differentiable at z and ∇P_{K^m}(z) is given as

  ∇P_{K^m}(z) = I_m  (λ_1 > 0, λ_2 > 0),
  ∇P_{K^m}(z) = (λ_2/(λ_2 − λ_1)) I_m + W  (λ_1 < 0, λ_2 > 0),
  ∇P_{K^m}(z) = O  (λ_1 < 0, λ_2 < 0),

where

  W := (1/2) [ −r_1  r_2^⊤ ; r_2  −r_1 r_2 r_2^⊤ ],  (r_1, r_2) := ( z_1/‖z_2‖, z_2/‖z_2‖ ),  z := (z_1, z_2) ∈ R × R^{m−1}.

On the other hand, by [11, Proposition 5.2], ∇P_{μ_k}(z^k) is written as

  ∇P_{μ_k}(z^k) = ĝ′(z_1^k/μ_k) I_m  if z_2^k = 0,
  ∇P_{μ_k}(z^k) = [ b_k  c_k (z̄_2^k)^⊤ ; c_k z̄_2^k  a_k I_{m−1} + (b_k − a_k) z̄_2^k (z̄_2^k)^⊤ ]  if z_2^k ≠ 0,

where z̄_2^k := z_2^k/‖z_2^k‖,

  a_k = ( ĝ(λ_2^k/μ_k) − ĝ(λ_1^k/μ_k) ) / ( (λ_2^k − λ_1^k)/μ_k ),
  b_k = (1/2) ( ĝ′(λ_2^k/μ_k) + ĝ′(λ_1^k/μ_k) ),
  c_k = (1/2) ( ĝ′(λ_2^k/μ_k) − ĝ′(λ_1^k/μ_k) ),

z^k := (z_1^k, z_2^k) ∈ R × R^{m−1}, and ĝ is defined as in Definition 4.3. Note that, from the definition of ĝ, we have ĝ(α) − α → 0, ĝ(−α) → 0, ĝ′(α) → 1 and ĝ′(−α) → 0 as α → ∞. Then, it follows that

  lim_{k→∞} a_k = 1 (0 < λ_1 ≤ λ_2),  λ_2/(λ_2 − λ_1) (λ_1 < 0 < λ_2),  0 (λ_1 ≤ λ_2 < 0),
  lim_{k→∞} b_k = 1 (0 < λ_1 ≤ λ_2),  1/2 (λ_1 < 0 < λ_2),  0 (λ_1 ≤ λ_2 < 0),
  lim_{k→∞} c_k = 0 (0 < λ_1 ≤ λ_2),  1/2 (λ_1 < 0 < λ_2),  0 (λ_1 ≤ λ_2 < 0).

From this fact, it is not difficult to observe that ∇P_{K^m}(z) = lim_{k→∞} ∇P_{μ_k}(z^k).

Proposition 4.2. [11, Proposition 5.1] Let Φ : R^m × R^m → R^m and Φ_μ : R^m × R^m → R^m be defined by (2.1) and (4.3), respectively. Let ρ := ĝ(0). Then, for any y, z ∈ R^m and μ > ν > 0, we have

  ρ(μ − ν)e ⪰_K Φ_ν(y, z) − Φ_μ(y, z) ⪰_K 0,  ρμe ⪰_K Φ(y, z) − Φ_μ(y, z) ⪰_K 0,

where e := (e^1, e^2, ..., e^l) ∈ R^{m_1} × R^{m_2} × ⋯ × R^{m_l} with e^i := (1, 0, 0, ..., 0) ∈ R^{m_i} for i = 1, 2, ..., l.

Proposition 4.3. [11, Corollary 5.3 and Proposition 6.1] Let Φ : R^m × R^m → R^m and Φ_μ : R^m × R^m → R^m be defined by (2.1) and (4.3), respectively. Let g : R^m → R^m be defined by (4.2). Then, the following statements hold.

(a) The function g is continuously differentiable, and ∇g(z) = diag(∇g^1(z^1), ..., ∇g^l(z^l)) ∈ R^{m×m} is symmetric for any z ∈ R^m, where the latter matrix denotes the block-diagonal matrix with block-diagonal elements ∇g^i(z^i), i = 1, 2, ..., l.

(b) For any y, z ∈ R^m and μ > 0, we have

  ∇_yΦ_μ(y, z) = I_m − ∇g((y − z)/μ),  ∇_zΦ_μ(y, z) = ∇g((y − z)/μ),

where I_m ∈ R^{m×m} denotes the identity matrix.

(c) For any y, z ∈ R^m and μ > 0, we have

  O ≺ ∇_yΦ_μ(y, z) ≺ I_m,  O ≺ ∇_zΦ_μ(y, z) ≺ I_m,  O ≺ ∇g((y − z)/μ) ≺ I_m.
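For completeness, here is a sketch of ∇P_μ on a single SOC in the coefficient form (a_k, b_k, c_k) used in the proof above (block structure as in [11, Proposition 5.2]); ghat is the CM function from the previous sketch, and the function names are ours.

```python
import numpy as np

def ghat_prime(a):
    """Derivative of the CM function above; note 0 < ghat'(a) < 1, as in (4.1)."""
    return 0.5 * (a / np.sqrt(a * a + 4.0) + 1.0)

def jac_smooth_proj(z, mu):
    """Jacobian of P_mu on a single SOC, following [11, Proposition 5.2]."""
    m = z.size
    z1, z2 = z[0], z[1:]
    nz2 = np.linalg.norm(z2)
    if nz2 == 0.0:
        return ghat_prime(z1 / mu) * np.eye(m)
    lam1, lam2 = (z1 - nz2) / mu, (z1 + nz2) / mu
    w = z2 / nz2
    a = (ghat(lam2) - ghat(lam1)) / (lam2 - lam1)
    b = 0.5 * (ghat_prime(lam2) + ghat_prime(lam1))
    c = 0.5 * (ghat_prime(lam2) - ghat_prime(lam1))
    J = np.empty((m, m))
    J[0, 0] = b
    J[0, 1:] = c * w
    J[1:, 0] = c * w
    J[1:, 1:] = a * np.eye(m - 1) + (b - a) * np.outer(w, w)
    return J
```

By Proposition 4.3(b), the Jacobians of Φ_μ are then I_m − jac_smooth_proj(y − z, mu) and jac_smooth_proj(y − z, mu), which can be checked against finite differences of the phi_mu sketch.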

Proposition 4.4. Let Φ : R^m × R^m → R^m and Φ_μ : R^m × R^m → R^m be defined by (2.1) and (4.3), respectively. Let ρ := ĝ(0). Then, ‖Φ_μ(y, z) − Φ(y, z)‖ ≤ √2 ρμ for any (μ, y, z) ∈ R_{++} × R^m × R^m.

Proof. For simplicity, we only consider the case where K = K^m. Let Φ(y, z) − Φ_μ(y, z) = λ_1c^1 + λ_2c^2, where λ_i ∈ R and c^i ∈ R^m (i = 1, 2) are the spectral values and spectral vectors of Φ(y, z) − Φ_μ(y, z). Since e = c^1 + c^2 and ρμe ⪰_{K^m} Φ(y, z) − Φ_μ(y, z) ⪰_{K^m} 0 from Proposition 4.2, we have

  ρμ(c^1 + c^2) ⪰_{K^m} λ_1c^1 + λ_2c^2 ⪰_{K^m} 0,

which implies 0 ≤ λ_1 ≤ λ_2 ≤ ρμ. Hence, we obtain

  ‖Φ(y, z) − Φ_μ(y, z)‖ = ‖λ_1c^1 + λ_2c^2‖ ≤ λ_1‖c^1‖ + λ_2‖c^2‖ ≤ √2 ρμ,

where the first inequality is due to the triangle inequality and 0 ≤ λ_1 ≤ λ_2, and the last inequality follows from ‖c^1‖ = ‖c^2‖ = 1/√2 and λ_1 ≤ λ_2 ≤ ρμ. This completes the proof.

Proposition 4.5. Let Φ_μ : R^m × R^m → R^m be defined by (4.3). Then, for any μ > ν > 0 and (y, z) ∈ R^m × R^m, it holds that

  ‖Φ_ν(y, z)‖_1 − ‖Φ_μ(y, z)‖_1 ≤ mρ(μ − ν),

where ρ = ĝ(0).

Proof. We first assume K = K^m. From Proposition 4.2, we have

  ρ(μ − ν)e − (Φ_ν(y, z) − Φ_μ(y, z)) ∈ K^m,      (4.5)
  Φ_ν(y, z) − Φ_μ(y, z) ∈ K^m.      (4.6)

Moreover, for any w = (w_1, w_2, ..., w_m) ∈ K^m, we have

  w_1 ≥ |w_i|  (i = 1, ..., m),      (4.7)

since w_1 ≥ √(w_2² + ⋯ + w_m²). Therefore, for each i = 1, 2, ..., m, we have

  ρ(μ − ν) ≥ (Φ_ν(y, z) − Φ_μ(y, z))_1 ≥ |(Φ_ν(y, z) − Φ_μ(y, z))_i| ≥ |Φ_ν(y, z)_i| − |Φ_μ(y, z)_i|,      (4.8)

where the first inequality holds from (4.5) and (4.7), the second inequality holds from (4.6) and (4.7), and the last inequality holds from the triangle inequality. Summing up (4.8) over all i, we obtain the desired conclusion. When K = K^{m_1} × ⋯ × K^{m_l}, we can prove it in a similar way.

5 Algorithm

In this section, we propose an SQP-type algorithm for MPSOCC (1.1). The SQP method solves a quadratic programming (QP) subproblem in each iteration to determine the search direction, and is known as one of the most efficient methods for solving nonlinear programming problems. In the remainder of the paper, to apply the SQP method, we mainly consider MPSOCC (3.1), which is equivalent to MPSOCC (1.1). We should notice, however, that the SQP method cannot be applied directly to MPSOCC (3.1), since Φ(y, z) is not differentiable everywhere. We thus consider the following problem, in which the smooth equality constraint Φ_μ(y, z) = 0 replaces Φ(y, z) = 0 in each iteration:

  Minimize_{x,y,z}  f(x, y)
  subject to  Ax ≤ b,  z = Nx + My + q,  Φ_μ(y, z) = 0.      (5.1)

Given a current iterate (x^k, y^k, z^k) satisfying Ax^k ≤ b and z^k = Nx^k + My^k + q, we then generate the search direction (dx, dy, dz) by solving the following QP subproblem, which consists of quadratic and linear approximations of the objective and constraint functions of problem (5.1) with μ = μ_k, respectively, at (x^k, y^k, z^k):

  Minimize_{dx,dy,dz}  ∇f(x^k, y^k)^⊤(dx, dy) + (1/2)(dx, dy, dz)^⊤ B_k (dx, dy, dz)
  subject to  Adx ≤ b − Ax^k,
              Ndx + Mdy − dz = 0,
              ∇_yΦ_{μ_k}(y^k, z^k)^⊤dy + ∇_zΦ_{μ_k}(y^k, z^k)^⊤dz = −Φ_{μ_k}(y^k, z^k),      (5.2)

where B_k ∈ R^{(n+2m)×(n+2m)} is a positive definite symmetric matrix. In the numerical experiments in Section 7, B_k is updated by the modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula. Note that the Karush-Kuhn-Tucker (KKT) conditions of QP (5.2) can be written as

  (∇_xf(x^k, y^k), ∇_yf(x^k, y^k), 0) + B_k(dx, dy, dz) + (N^⊤u, M^⊤u, −u) + (0, ∇_yΦ_{μ_k}(y^k, z^k)v, ∇_zΦ_{μ_k}(y^k, z^k)v) + (A^⊤η, 0, 0) = 0,
  Ndx + Mdy − dz = 0,
  ∇_yΦ_{μ_k}(y^k, z^k)^⊤dy + ∇_zΦ_{μ_k}(y^k, z^k)^⊤dz = −Φ_{μ_k}(y^k, z^k),      (5.3)
  0 ≤ η ⊥ b − Ax^k − Adx ≥ 0,

where (η, u, v) ∈ R^p × R^m × R^m denotes the Lagrange multipliers. For simplicity of notation, we denote

  w := (x, y, z) ∈ R^n × R^m × R^m,  dw := (dx, dy, dz) ∈ R^n × R^m × R^m.

Also, we define the ℓ_1 penalty function by

  θ_{μ,α}(w) := f(x, y) + α‖Φ_μ(y, z)‖_1,      (5.4)

where α > 0 is the penalty parameter. Note that this function has the directional derivative θ′_{μ,α}(w; dw) for any w and dw.

Algorithm 1.

Step 0: Choose parameters δ ∈ (0, ∞), β ∈ (0, 1), ρ ∈ (0, 1), σ ∈ (0, 1), μ_0 ∈ (0, ∞), α_{−1} ∈ (0, ∞) and a symmetric positive definite matrix B_0 ∈ R^{(n+2m)×(n+2m)}. Choose w^0 = (x^0, y^0, z^0) ∈ R^n × R^m × R^m such that Nx^0 + My^0 + q = z^0 and Ax^0 ≤ b. Set k := 0.

Step 1: Solve QP subproblem (5.2) to obtain the optimum dw^k = (dx^k, dy^k, dz^k) and the Lagrange multipliers (η^k, u^k, v^k).

Step 2: If dw^k = 0, then let w^{k+1} := w^k, α_k := α_{k−1} and go to Step 3. Otherwise, update the penalty parameter by

  α_k := α_{k−1}  if α_{k−1} ≥ ‖v^k‖_∞ + δ,
  α_k := max{ ‖v^k‖_∞ + δ, α_{k−1} + 2δ }  otherwise.      (5.5)

Then, set the step size τ_k := ρ^{L_k}, where L_k is the smallest nonnegative integer L satisfying the Armijo condition

  θ_{μ_k,α_k}(w^k + ρ^L dw^k) ≤ θ_{μ_k,α_k}(w^k) + σρ^L θ′_{μ_k,α_k}(w^k; dw^k).      (5.6)

Let w^{k+1} := w^k + τ_k dw^k, and go to Step 3.

Step 3: Terminate if a certain criterion is satisfied. Otherwise, let μ_{k+1} := βμ_k and update B_k to determine a symmetric positive definite matrix B_{k+1}. Return to Step 1 with k replaced by k + 1.
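The overall flow of Algorithm 1 can be summarized in the following sketch. The callbacks are hypothetical placeholders, not part of the paper: qp_solve stands for any QP solver applied to subproblem (5.2) (the paper uses Matlab's quadprog) returning the direction and the multiplier v, phi_mu_fun evaluates Φ_μ(y, z), and grad_f returns the gradient of f embedded in w-space with a zero z-part.

```python
import numpy as np

def algorithm1(w0, f, grad_f, qp_solve, phi_mu_fun,
               mu0=1.0, beta=0.8, delta=1.0, rho=0.9, sigma=1e-3,
               alpha=1.0, tol=1e-7, kmax=1000):
    """Skeleton of Algorithm 1 (a sketch, not the paper's implementation)."""
    w, mu, B = w0.copy(), mu0, np.eye(w0.size)
    for _ in range(kmax):
        dw, v = qp_solve(w, mu, B)                       # Step 1: solve QP (5.2)
        if np.linalg.norm(dw) <= tol:                    # Step 3: stopping test
            break
        if alpha < np.max(np.abs(v)) + delta:            # Step 2: penalty rule (5.5)
            alpha = max(np.max(np.abs(v)) + delta, alpha + 2.0 * delta)
        theta = lambda u: f(u) + alpha * np.abs(phi_mu_fun(u, mu)).sum()   # (5.4)
        # directional derivative (5.9): grad f^T dw - alpha * ||Phi_mu||_1
        slope = grad_f(w) @ dw - alpha * np.abs(phi_mu_fun(w, mu)).sum()
        tau = 1.0
        while theta(w + tau * dw) > theta(w) + sigma * tau * slope:  # Armijo (5.6)
            tau *= rho
        w = w + tau * dw
        mu *= beta                                       # Step 3: shrink mu
        # B would be updated here, e.g. by the modified BFGS formula of Section 7
    return w
```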

In the remainder of this section, we establish the well-definedness of Algorithm 1. We first show the feasibility of QP subproblem (5.2). In general, a QP subproblem generated by the SQP method may not be feasible, even if the original nonlinear programming problem is feasible. However, in the present case, we can show that QP subproblem (5.2) is always feasible under the Cartesian P₀ property of the matrix M. To this end, the following lemma will be useful.

Lemma 5.1. Let M ∈ R^{m×m} be a Cartesian P₀ matrix. Let H_i ∈ R^{m_i×m_i} (i = 1, 2, ..., l) be positive definite matrices with m = Σ_{i=1}^l m_i, and let H ∈ R^{m×m} be the block-diagonal matrix with block-diagonal elements H_i (i = 1, ..., l). Then, H + M is nonsingular.

Proof. The matrix H + M can easily be shown to be a Cartesian P matrix, which is nonsingular.

The next proposition shows the feasibility and solvability of QP subproblem (5.2). In the proof, the matrix

  D_k := [ M  −I_m ; ∇_yΦ_{μ_k}(y^k, z^k)  ∇_zΦ_{μ_k}(y^k, z^k) ]      (5.7)

plays an important role.

Proposition 5.1 (Feasibility of QP subproblem). Let M be a Cartesian P₀ matrix, and let {w^k} be a sequence generated by Algorithm 1. Then, (i) Ax^k ≤ b and z^k = Nx^k + My^k + q hold for all k, and (ii) QP subproblem (5.2) is feasible and hence has a unique solution for all k.

Proof. Since (i) can be shown easily, we only show (ii). Since the objective function of QP (5.2) is strongly convex, it suffices to show the feasibility. We first show that the matrix D_k defined by (5.7) is nonsingular. Note that, by Proposition 4.3(c), ∇_zΦ_{μ_k}(y^k, z^k) is nonsingular. Let D̄_k be the Schur complement of the matrix ∇_zΦ_{μ_k}(y^k, z^k) with respect to D_k, that is,

  D̄_k := M + ∇_zΦ_{μ_k}(y^k, z^k)^{−1} ∇_yΦ_{μ_k}(y^k, z^k)
       = M + ∇g((y^k − z^k)/μ_k)^{−1} (I_m − ∇g((y^k − z^k)/μ_k))
       = M + diag( ∇g^i((y^{i,k} − z^{i,k})/μ_k)^{−1} − I_{m_i} )_{i=1}^l,

where y^{i,k} and z^{i,k} denote the i-th subvectors of y^k and z^k, respectively, conforming to the Cartesian structure of K, and each equality follows from Proposition 4.3. Since M is a Cartesian P₀ matrix and ∇g^i((y^{i,k} − z^{i,k})/μ_k)^{−1} − I_{m_i} ∈ R^{m_i×m_i} is positive definite from Proposition 4.3(c), Lemma 5.1 ensures that D̄_k is nonsingular, and hence D_k is nonsingular. It then follows from (i) that

  dx = 0,  (dy, dz) = D_k^{−1} (0, −Φ_{μ_k}(y^k, z^k))

comprise a feasible solution to (5.2). This completes the proof.
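The feasible point constructed in this proof is easy to compute explicitly; a sketch with our own helper names follows.

```python
import numpy as np

def feasible_point(M, Jy, Jz, phimu):
    """Feasible (dy, dz) for QP (5.2) with dx = 0, from the proof of
    Proposition 5.1: (dy; dz) = D^{-1}(0; -Phi_mu) with D as in (5.7).
    Jy, Jz are grad_y Phi_mu and grad_z Phi_mu (symmetric by Proposition 4.3)."""
    m = M.shape[0]
    D = np.block([[M, -np.eye(m)], [Jy, Jz]])
    dydz = np.linalg.solve(D, np.concatenate((np.zeros(m), -phimu)))
    return dydz[:m], dydz[m:]
```

Under the Cartesian P₀ assumption on M the proof above guarantees that the linear system is nonsingular, so np.linalg.solve succeeds.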

The following proposition shows that the search direction dw^k produced in Step 1 of Algorithm 1 is a descent direction of the penalty function θ_{μ_k,α_k} defined by (5.4). It guarantees the well-definedness of the line search in Step 2 in the sense that there exists a finite L satisfying the Armijo condition (5.6).

Proposition 5.2 (Descent direction). Let {w^k} and {dw^k} be sequences generated by Algorithm 1. Then, for each k, we have

(a) θ′_{μ_k,α_k}(w^k; dw^k) = ∇_xf(x^k, y^k)^⊤dx^k + ∇_yf(x^k, y^k)^⊤dy^k − α_k‖Φ_{μ_k}(y^k, z^k)‖_1;

(b) θ′_{μ_k,α_k}(w^k; dw^k) ≤ −(dw^k)^⊤B_kdw^k.

Moreover, if Φ_{μ_k}(y^k, z^k) ≠ 0, then the inequality in (b) holds strictly.

Proof. We first show (a). Let J_+, J_0, and J_− ⊆ {1, 2, ..., m} be the index sets defined by

  J_+ := { j : Φ_{μ_k}(y^k, z^k)_j > 0 },  J_0 := { j : Φ_{μ_k}(y^k, z^k)_j = 0 },  J_− := { j : Φ_{μ_k}(y^k, z^k)_j < 0 },

where Φ_{μ_k}(y^k, z^k)_j ∈ R denotes the j-th component of Φ_{μ_k}(y^k, z^k) ∈ R^m. Then, we have

  θ′_{μ_k,α_k}(w^k; dw^k) = ∇f(x^k, y^k)^⊤(dx^k, dy^k) + α_k [ Σ_{j∈J_+} [∇Φ_{μ_k}(y^k, z^k)]_j^⊤(dy^k, dz^k) + Σ_{j∈J_0} | [∇Φ_{μ_k}(y^k, z^k)]_j^⊤(dy^k, dz^k) | − Σ_{j∈J_−} [∇Φ_{μ_k}(y^k, z^k)]_j^⊤(dy^k, dz^k) ],      (5.8)

where [∇Φ_{μ_k}(y^k, z^k)]_j denotes the j-th column vector of ∇Φ_{μ_k}(y^k, z^k) := (∇_yΦ_{μ_k}(y^k, z^k); ∇_zΦ_{μ_k}(y^k, z^k)) ∈ R^{2m×m}. Since

  [∇Φ_{μ_k}(y^k, z^k)]_j^⊤(dy^k, dz^k) = −Φ_{μ_k}(y^k, z^k)_j

from the constraints of QP subproblem (5.2), we have

  θ′_{μ_k,α_k}(w^k; dw^k) = ∇_xf(x^k, y^k)^⊤dx^k + ∇_yf(x^k, y^k)^⊤dy^k − α_k‖Φ_{μ_k}(y^k, z^k)‖_1.      (5.9)

We next show (b). Taking the inner product of dw^k = (dx^k, dy^k, dz^k) with both sides of the first equality in the KKT conditions (5.3) with (dw, η, u, v) = (dw^k, η^k, u^k, v^k), we obtain

  ∇f(x^k, y^k)^⊤(dx^k, dy^k) + (dw^k)^⊤B_kdw^k + (u^k)^⊤(Ndx^k + Mdy^k − dz^k) + (v^k)^⊤( ∇_yΦ_{μ_k}(y^k, z^k)^⊤dy^k + ∇_zΦ_{μ_k}(y^k, z^k)^⊤dz^k ) + (η^k)^⊤Adx^k = 0.      (5.10)

Moreover, from the constraints of subproblem (5.2) and the KKT conditions (5.3), we have

  Ndx^k + Mdy^k − dz^k = 0,      (5.11)
  ∇_yΦ_{μ_k}(y^k, z^k)^⊤dy^k + ∇_zΦ_{μ_k}(y^k, z^k)^⊤dz^k = −Φ_{μ_k}(y^k, z^k),      (5.12)

and

  0 = (η^k)^⊤(b − Ax^k − Adx^k) = −(η^k)^⊤Adx^k + (η^k)^⊤(b − Ax^k) ≥ −(η^k)^⊤Adx^k,      (5.13)

where the inequality is due to η^k ≥ 0 and b − Ax^k ≥ 0 from (5.3) and Proposition 5.1(i). Substituting (5.11)-(5.13) into (5.10), we have

  ∇f(x^k, y^k)^⊤(dx^k, dy^k) + (dw^k)^⊤B_kdw^k ≤ (v^k)^⊤Φ_{μ_k}(y^k, z^k).

Furthermore, from (5.9), we obtain

  θ′_{μ_k,α_k}(w^k; dw^k) ≤ −(dw^k)^⊤B_kdw^k + (v^k)^⊤Φ_{μ_k}(y^k, z^k) − α_k‖Φ_{μ_k}(y^k, z^k)‖_1
    = −(dw^k)^⊤B_kdw^k + Σ_{j∈J_+} (v_j^k − α_k) Φ_{μ_k}(y^k, z^k)_j + Σ_{j∈J_−} (v_j^k + α_k) Φ_{μ_k}(y^k, z^k)_j
    ≤ −(dw^k)^⊤B_kdw^k,

where the last inequality follows from α_k > ‖v^k‖_∞ and the definitions of J_+ and J_−. Moreover, if Φ_{μ_k}(y^k, z^k) ≠ 0, then the last inequality holds strictly since J_+ ∪ J_− ≠ ∅. This completes the proof of (b).

6 Convergence analysis

In this section, we study the convergence property of the proposed algorithm. To begin with, we make the following assumption.

Assumption 1. Let the sequences {w^k} and {B_k} be produced by Algorithm 1.

(a) {w^k} is bounded.
(b) There exist constants γ_1, γ_2 > 0 such that γ_1‖d‖² ≤ d^⊤B_kd ≤ γ_2‖d‖² for any d ∈ R^{n+2m} and k ≥ 0.
(c) There exists a constant c > 0 such that ‖D_k^{−1}‖ ≤ c for any k ≥ 0, where D_k is the matrix defined by (5.7).

Assumption 1(b) means that {B_k} is bounded and uniformly positive definite. Assumption 1(c) holds if and only if every accumulation point of {D_k} is nonsingular. The next proposition provides a sufficient condition under which Assumption 1(c) holds.

Proposition 6.1. Suppose that M is a Cartesian P matrix and Assumption 1(a) holds. Then, Assumption 1(c) holds.

Proof. Let (E_y, E_z) be an arbitrary accumulation point of {(∇_yΦ_{μ_k}(y^k, z^k), ∇_zΦ_{μ_k}(y^k, z^k))}. Then, it suffices to show that the matrix

  D := [ M  −I_m ; E_y  E_z ]

is nonsingular. By Proposition 4.3, E_y and E_z satisfy the following three properties:

(a) E_y + E_z = I_m;
(b) O ⪯ E_y ⪯ I_m and O ⪯ E_z ⪯ I_m;
(c) E_y and E_z are symmetric and have the block-diagonal structure conforming to the Cartesian structure of K = K^{m_1} × ⋯ × K^{m_l}.

From (a) and (b), there exists an orthogonal matrix H ∈ R^{m×m} such that

  HE_yH^⊤ = diag(α_i)_{i=1}^m,  HE_zH^⊤ = diag(1 − α_i)_{i=1}^m,  0 ≤ α_i ≤ 1 (i = 1, 2, ..., m),      (6.1)

where α_i (i = 1, 2, ..., m) are the eigenvalues of E_y. Moreover, from (c), H can be chosen to have the same block-diagonal structure as E_y and E_z. Hence, M̃ := HMH^⊤ is a Cartesian P matrix by Proposition 2.2. Now, let D̃ ∈ R^{2m×2m} be defined as

  D̃ := [ H  O ; O  H ] D [ H^⊤  O ; O  H^⊤ ] = [ M̃  −I_m ; diag(α_i)_{i=1}^m  diag(1 − α_i)_{i=1}^m ],      (6.2)

and let (ζ, η) ∈ R^m × R^m be an arbitrary vector such that D̃(ζ, η) = 0. Then, we have

  M̃ζ = η,      (6.3)
  α_iζ_i + (1 − α_i)η_i = 0  (i = 1, 2, ..., m).      (6.4)

If α_i = 0, then we have (M̃ζ)_i = η_i = 0 from (6.3) and (6.4). If α_i = 1, then we have ζ_i = 0 from (6.4). If 0 < α_i < 1, then we have ζ_i(M̃ζ)_i = ζ_iη_i = −α_i(1 − α_i)^{−1}ζ_i² ≤ 0. Thus, for all i, we have ζ_i(M̃ζ)_i ≤ 0. Since M̃ is a Cartesian P matrix and every Cartesian P matrix is a P matrix, we must have ζ = 0, and hence η = M̃ζ = 0. Hence, D̃ is nonsingular. From (6.2) and the nonsingularity of H, the matrix D is also nonsingular.

The following three lemmas play crucial roles in establishing the convergence theorem for the algorithm.

Lemma 6.1. Let {w^k} be a sequence generated by Algorithm 1, and let dw^k := (dx^k, dy^k, dz^k) ∈ R^n × R^m × R^m be the unique optimum of QP subproblem (5.2) for each k. Let {(μ_k, τ_k)} ⊆ R_{++} × R_{++} be a sequence converging to (0, 0), and let α > 0 be a fixed scalar. Suppose that {w^k} satisfies Assumption 1(a). In addition, assume that {dw^k} has an accumulation point, and let w* := (x*, y*, z*) and dw* := (dx*, dy*, dz*) be arbitrary accumulation points of {w^k} and {dw^k}, respectively. Then we have

  limsup_{k→∞} [ ( θ_{μ_k,α}(w^k + τ_kdw^k) − θ_{μ_k,α}(w^k) )/τ_k − θ′_{μ_k,α}(w^k; dw^k) ] ≤ 0,

provided y* − z* ∉ bd(K ∪ (−K)) and dw* ≠ 0.

Proof. Taking subsequences if necessary, we may suppose w^k → w* and dw^k → dw*. Moreover, we have

  θ′_{μ_k,α}(w^k; dw^k) = ∇_xf(x^k, y^k)^⊤dx^k + ∇_yf(x^k, y^k)^⊤dy^k − α‖Φ_{μ_k}(y^k, z^k)‖_1

as shown in (5.9), and the right-hand side converges to ∇f(x*, y*)^⊤(dx*, dy*) − α‖Φ(y*, z*)‖_1 by Proposition 4.4. Thus, we have only to show

  limsup_{k→∞} ( θ_{μ_k,α}(w^k + τ_kdw^k) − θ_{μ_k,α}(w^k) )/τ_k ≤ ∇f(x*, y*)^⊤(dx*, dy*) − α‖Φ(y*, z*)‖_1.

From the mean-value theorem and the continuity of ∇f, we have

  lim_{k→∞} ( f(x^k + τ_kdx^k, y^k + τ_kdy^k) − f(x^k, y^k) )/τ_k = ∇f(x*, y*)^⊤(dx*, dy*).

Therefore, it suffices to show that

  limsup_{k→∞} ( |Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k)_j| − |Φ_{μ_k}(y^k, z^k)_j| )/τ_k ≤ −|Φ(y*, z*)_j|      (6.5)

for each j. By Definitions 4.3-4.4 and Proposition 4.3, we have ∇_yΦ_{μ_k}(y^k, z^k) = I_m − ∇P_{μ_k}(y^k − z^k) and ∇_zΦ_{μ_k}(y^k, z^k) = ∇P_{μ_k}(y^k − z^k), which together with the constraints of QP subproblem (5.2) yield

  −Φ_{μ_k}(y^k, z^k) = ∇_yΦ_{μ_k}(y^k, z^k)^⊤dy^k + ∇_zΦ_{μ_k}(y^k, z^k)^⊤dz^k = dy^k − ∇P_{μ_k}(y^k − z^k)^⊤(dy^k − dz^k).      (6.6)

Hence, we have

  Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k) − Φ_{μ_k}(y^k, z^k)
    = τ_kdy^k − ( P_{μ_k}(y^k − z^k + τ_k(dy^k − dz^k)) − P_{μ_k}(y^k − z^k) )
    = −τ_kΦ_{μ_k}(y^k, z^k) + τ_k ( ∇P_{μ_k}(y^k − z^k)^⊤(dy^k − dz^k) − τ_k^{−1}( P_{μ_k}(y^k − z^k + τ_k(dy^k − dz^k)) − P_{μ_k}(y^k − z^k) ) )
    = −τ_kΦ_{μ_k}(y^k, z^k) + τ_kδ^k,      (6.7)

where the first equality is due to (4.3), the second equality follows from (6.6), and δ^k := (δ_1^k, δ_2^k, ..., δ_m^k) ∈ R^m is given by

  δ_j^k := [∇P_{μ_k}(y^k − z^k)]_j^⊤(dy^k − dz^k) − τ_k^{−1}( P_{μ_k}(y^k − z^k + τ_k(dy^k − dz^k))_j − P_{μ_k}(y^k − z^k)_j ).      (6.8)

To show (6.5), we consider three cases: (i) Φ(y*, z*)_j = 0, (ii) Φ(y*, z*)_j > 0 and (iii) Φ(y*, z*)_j < 0. In case (i), we first notice that

  ( |Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k)_j| − |Φ_{μ_k}(y^k, z^k)_j| )/τ_k ≤ |Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k)_j − Φ_{μ_k}(y^k, z^k)_j|/τ_k = |−Φ_{μ_k}(y^k, z^k)_j + δ_j^k|,      (6.9)

where the equality follows from (6.7). By applying the mean-value theorem in (6.8), we can find ζ_j^k ∈ (0, 1) for each k such that

  δ_j^k = ( [∇P_{μ_k}(y^k − z^k)]_j − [∇P_{μ_k}(y^k − z^k + ζ_j^kτ_k(dy^k − dz^k))]_j )^⊤(dy^k − dz^k).      (6.10)

Since Proposition 4.1 and the boundedness of {dw^k} imply

  lim_{k→∞} ‖∇P_{μ_k}(y^k − z^k) − ∇P_{μ_k}(y^k − z^k + ζ_j^kτ_k(dy^k − dz^k))‖ = 0,      (6.11)

we obtain lim_{k→∞} δ_j^k = 0. Moreover, by Proposition 4.4, we have lim_{k→∞} Φ_{μ_k}(y^k, z^k)_j = Φ(y*, z*)_j = 0. Then, by letting k → ∞ in (6.9), we obtain (6.5). In cases (ii) and (iii), Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k)_j and Φ_{μ_k}(y^k, z^k)_j have the same sign as Φ(y*, z*)_j for all sufficiently large k, and hence we have

  ( |Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k)_j| − |Φ_{μ_k}(y^k, z^k)_j| )/τ_k = −Φ_{μ_k}(y^k, z^k)_j + δ_j^k

and

  ( |Φ_{μ_k}(y^k + τ_kdy^k, z^k + τ_kdz^k)_j| − |Φ_{μ_k}(y^k, z^k)_j| )/τ_k = Φ_{μ_k}(y^k, z^k)_j − δ_j^k,

respectively. Then, a similar argument to that in case (i) leads to the desired inequality (6.5).

Lemma 6.2. Let {w^k} be a sequence generated by Algorithm 1. Suppose that Assumption 1 holds. Then, we have the following statements.

(i) {dw^k} and {(u^k, v^k)} are bounded.
(ii) There exist ᾱ > 0 and k̄ ≥ 0 such that α_k = ᾱ for all k ≥ k̄.
(iii) The sequences {θ_{μ_k,ᾱ}(w^k)} and {θ_{μ_k,ᾱ}(w^{k+1})} converge to the same limit.

Proof. We first prove (i). Let

  ( dỹ^k, dz̃^k ) := D_k^{−1} ( 0, −Φ_{μ_k}(y^k, z^k) )      (6.12)

and dw̃^k := (0, dỹ^k, dz̃^k), where D_k is defined by (5.7). Then, dw̃^k is a feasible point of QP subproblem (5.2). Note that the objective function of QP subproblem (5.2) can be rewritten as

  (1/2)( dw + B_k^{−1}g^k )^⊤ B_k ( dw + B_k^{−1}g^k ) + constant,

where g^k := (∇f(x^k, y^k), 0) ∈ R^{n+2m}. Since dw^k is the optimum of QP subproblem (5.2), we have

  ( dw^k + B_k^{−1}g^k )^⊤ B_k ( dw^k + B_k^{−1}g^k ) ≤ ( dw̃^k + B_k^{−1}g^k )^⊤ B_k ( dw̃^k + B_k^{−1}g^k ).

This together with Assumption 1(b) implies

  γ_1‖dw^k + B_k^{−1}g^k‖² ≤ γ_2‖dw̃^k + B_k^{−1}g^k‖².      (6.13)

Now, notice that {dw̃^k} is bounded from (6.12), Assumption 1(a), (c) and Proposition 4.4. In addition, {B_k^{−1}} and {g^k} are also bounded from Assumption 1(a), (b). Thus, we have the boundedness of {dw^k} from (6.13). On the other hand, from the first equality of the KKT conditions (5.3), we have

  ( u^k, v^k ) = −(D_k^⊤)^{−1} ( (∇_yf(x^k, y^k), 0) + B̂_kdw^k ),

where B̂_k is the 2m × (n + 2m) matrix consisting of the last 2m rows of B_k. This equation together with Assumption 1 and the boundedness of {dw^k} yields the boundedness of {(u^k, v^k)}.

We next prove (ii). From the update rule (5.5), we can easily see that {α_k} is nondecreasing. Moreover, if

  ‖v^k‖_∞ > α_{k−1} − δ,      (6.14)

then we have α_k = max{ ‖v^k‖_∞ + δ, α_{k−1} + 2δ } ≥ α_{k−1} + 2δ, that is, α_k increases at least by 2δ at a time. Let K̂ := { k : ‖v^k‖_∞ > α_{k−1} − δ }. If |K̂| = ∞, then α_k → ∞ as k → ∞ and hence {‖v^k‖_∞} is unbounded from (6.14). However, this contradicts (i). Thus we have (ii).

We finally show (iii). Since we have (ii), there exist ᾱ and k̄ such that α_k = ᾱ for all k ≥ k̄. In what follows, we suppose k ≥ k̄. Since μ_{k+1} ≤ μ_k, Proposition 4.5 together with (5.4) implies

  θ_{μ_{k+1},ᾱ}(w^{k+1}) + ᾱmρμ_{k+1} ≤ θ_{μ_k,ᾱ}(w^{k+1}) + ᾱmρμ_k      (6.15)
    ≤ θ_{μ_k,ᾱ}(w^k) + ᾱmρμ_k,      (6.16)

where the last inequality follows from the Armijo condition (5.6) and Proposition 5.2(b). From (6.15)-(6.16), {θ_{μ_k,ᾱ}(w^k) + ᾱmρμ_k} is a monotonically nonincreasing sequence. In addition, {θ_{μ_k,ᾱ}(w^k) + ᾱmρμ_k} is bounded, since {θ_{μ_k,ᾱ}(w^k)} is bounded from Assumption 1(a). Therefore, {θ_{μ_k,ᾱ}(w^k) + ᾱmρμ_k} is convergent. This fact together with lim_{k→∞} μ_k = 0 and (6.15)-(6.16) yields that {θ_{μ_k,ᾱ}(w^{k+1})} and {θ_{μ_k,ᾱ}(w^k)} converge to the same limit.

Finally, we show that a sequence generated by the algorithm globally converges to a B-stationary point of MPSOCC (3.1) under the assumption that any accumulation point w* = (x*, y*, z*) satisfies y* − z* ∉ bd(K ∪ (−K)); this condition is equivalent to the nondegeneracy condition given in Definition 3.1 when y* and z* satisfy the SOC complementarity condition.

Theorem 6.1. Let {w^k} be a sequence generated by Algorithm 1. Suppose that {w^k} satisfies Assumption 1. Let w* = (x*, y*, z*) be an arbitrary accumulation point of {w^k}. If (y*, z*) satisfies y* − z* ∉ bd(K ∪ (−K)), then w* is a B-stationary point of MPSOCC (1.1).

Proof. By Lemma 6.2(ii), there exist a constant ᾱ > 0 and k̄ such that α_k = ᾱ for all k ≥ k̄. In this proof, we suppose without loss of generality that α_k = ᾱ holds for all k. We first show that

  lim_{k→∞} dw^k = 0.      (6.17)

From Proposition 5.2(b) and Assumption 1(b), we have

  θ′_{μ_k,ᾱ}(w^k; dw^k) ≤ −(dw^k)^⊤B_kdw^k ≤ −γ_1‖dw^k‖²,      (6.18)

which together with the Armijo condition (5.6) yields

  θ_{μ_k,ᾱ}(w^{k+1}) ≤ θ_{μ_k,ᾱ}(w^k) + στ_kθ′_{μ_k,ᾱ}(w^k; dw^k) ≤ θ_{μ_k,ᾱ}(w^k) − γ_1στ_k‖dw^k‖².

Hence, from Lemma 6.2(iii), we obtain

  lim_{k→∞} τ_k‖dw^k‖² = 0.

Now, assume for contradiction that (6.17) does not hold. Then, there exists an infinite index set K ⊆ {0, 1, ...} such that

  lim_{k∈K, k→∞} ‖dw^k‖ > 0,      (6.19)

and hence

  lim_{k∈K, k→∞} τ_k = 0.

Let l_k be the smallest nonnegative integer L satisfying (5.6), i.e., ρ^{l_k} = τ_k. Then, from the definition of l_k, we have, for all sufficiently large k ∈ K,

  θ_{μ_k,ᾱ}(w^k + ρ^{l_k−1}dw^k) > θ_{μ_k,ᾱ}(w^k) + σρ^{l_k−1}θ′_{μ_k,ᾱ}(w^k; dw^k),

which implies

  ξ_k := ( θ_{μ_k,ᾱ}(w^k + ρ^{l_k−1}dw^k) − θ_{μ_k,ᾱ}(w^k) )/ρ^{l_k−1} − θ′_{μ_k,ᾱ}(w^k; dw^k) > −(1 − σ)θ′_{μ_k,ᾱ}(w^k; dw^k).      (6.20)

By Lemma 6.1 together with lim_{k∈K} ρ^{l_k−1} = 0 and y* − z* ∉ bd(K ∪ (−K)), we have limsup_{k∈K, k→∞} ξ_k ≤ 0. Moreover, we have from (6.18)

  −(1 − σ)θ′_{μ_k,ᾱ}(w^k; dw^k) ≥ (1 − σ)γ_1‖dw^k‖².      (6.21)

From (6.20) and (6.21), we must have lim_{k∈K, k→∞} dw^k = 0. However, this contradicts (6.19), and hence we have (6.17).

Next, we show that w* satisfies the KKT conditions (3.2) of MPSOCC (3.1). Let {(η^k, u^k, v^k)} be the sequence of multipliers corresponding to {dw^k}. Then, {(u^k, v^k)} is bounded from Lemma 6.2(i). Hence, there exist vectors u*, v* and an index set K′ such that lim_{k∈K′, k→∞}(w^k, u^k, v^k) = (w*, u*, v*). By (5.3), and letting X := {x ∈ R^n : Ax ≤ b}, we have

  ζ^k ∈ N_X(x^k + dx^k) × {0}^{2m},      (6.22)
  Ndx^k + Mdy^k − dz^k = 0,  ∇_yΦ_{μ_k}(y^k, z^k)^⊤dy^k + ∇_zΦ_{μ_k}(y^k, z^k)^⊤dz^k = −Φ_{μ_k}(y^k, z^k),      (6.23)
  b − Ax^k − Adx^k ≥ 0,      (6.24)

where

  ζ^k := −(∇_xf(x^k, y^k), ∇_yf(x^k, y^k), 0) − B_kdw^k − (N^⊤u^k, M^⊤u^k, −u^k) − (0, ∇_yΦ_{μ_k}(y^k, z^k)v^k, ∇_zΦ_{μ_k}(y^k, z^k)v^k),      (6.25)

and (6.22) follows from the first equation of (5.3) together with N_X(x) = { A^⊤η : η ≥ 0, η^⊤(Ax − b) = 0 }. By letting k ∈ K′ tend to ∞ in (6.23) and (6.24), and using (6.17) and Proposition 4.4, we have

  b − Ax* ≥ 0,  Φ(y*, z*) = 0.      (6.26)

Note that (6.26) implies x* ∈ X. In addition, note that {ζ^k}_{k∈K′} is a convergent sequence satisfying (6.22), since {B_k} is bounded from Assumption 1 and lim_{k∈K′} ∇Φ_{μ_k}(y^k, z^k) = ∇Φ(y*, z*) from the nondegeneracy condition and Proposition 4.1. Then, these facts together with (6.17) and the closedness of the point-to-set map N_X(·) yield

  −(∇_xf(x*, y*), ∇_yf(x*, y*), 0) − (N^⊤u*, M^⊤u*, −u*) − (0, ∇_yΦ(y*, z*)v*, ∇_zΦ(y*, z*)v*) = lim_{k∈K′, k→∞} ζ^k ∈ N_X(x*) × {0}^{2m}.

This together with (6.26) means that w* satisfies the KKT conditions (3.2) of MPSOCC (3.1). By Proposition 3.2, w* is a B-stationary point of MPSOCC (1.1).

7 Numerical experiments

In this section, we implement Algorithm 1 for solving problem (1.1) and report some numerical results. The program is coded in Matlab 2008a and run on a machine with an Intel Core 2 Duo E6850 3.0GHz CPU and 4GB RAM. In Step 0 of the algorithm, we set the parameters as

  δ := 1,  α_{−1} := 1,  σ := 10^{−3},  ρ := 0.9.

The choice of the smoothing parameters {μ_k}, i.e., μ_0 and β, and of a starting point w^0 varies with the experiment. We let B_0 be the identity matrix, and update B_k by the modified BFGS formula:

  B_{k+1} := B_k − ( B_ks^k(B_ks^k)^⊤ )/( (s^k)^⊤B_ks^k ) + ( ζ̂^k(ζ̂^k)^⊤ )/( (s^k)^⊤ζ̂^k )

with s^k := w^{k+1} − w^k and ζ̂^k := θ_kζ^k + (1 − θ_k)B_ks^k, where

  ζ^k := ∇_wL_{μ_{k+1}}(w^{k+1}, u^k, v^k, η^k) − ∇_wL_{μ_k}(w^k, u^k, v^k, η^k),

L_μ denotes the Lagrangian function defined by

  L_μ(w, u, v, η) := f(x, y) + Φ_μ(y, z)^⊤v + (Nx + My + q − z)^⊤u + (Ax − b)^⊤η,

and θ_k is determined by

  θ_k := 1  if (s^k)^⊤ζ^k ≥ 0.2 (s^k)^⊤B_ks^k,
  θ_k := 0.8 (s^k)^⊤B_ks^k / ( (s^k)^⊤(B_ks^k − ζ^k) )  otherwise.
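In code, the modified BFGS update reads as follows, a direct transcription of the formulas above; the damping rule keeps (s^k)^⊤ζ̂^k ≥ 0.2(s^k)^⊤B_ks^k > 0, so B_{k+1} stays positive definite.

```python
import numpy as np

def modified_bfgs(B, s, zeta):
    """Modified (damped) BFGS update of Section 7."""
    Bs = B @ s
    sBs = s @ Bs
    if s @ zeta >= 0.2 * sBs:
        theta = 1.0
    else:
        theta = 0.8 * sBs / (sBs - s @ zeta)
    zhat = theta * zeta + (1.0 - theta) * Bs    # damped secant vector
    return B - np.outer(Bs, Bs) / sBs + np.outer(zhat, zhat) / (s @ zhat)
```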

In Step 1, we use the quadprog solver in the Matlab Optimization Toolbox for solving the QP subproblems. In Step 3, we terminate the algorithm if the following condition is satisfied:

  ‖Φ(y^k, z^k)‖ + ‖dw^k‖ ≤ 10^{−7}.      (7.1)

The rationale for using (7.1) is as follows. If Φ(y^k, z^k) = 0, i.e., K ∋ y^k ⊥ z^k ∈ K holds, then w^k = (x^k, y^k, z^k) is feasible to MPSOCC (1.1), since the remaining constraints Ax^k ≤ b and z^k = Nx^k + My^k + q always hold from Proposition 5.1. Moreover, if dw^k = 0, then w^k satisfies the KKT conditions (3.2) of MPSOCC (3.1). Thus, by Theorem 6.1, ‖Φ(y^k, z^k)‖ + ‖dw^k‖ = 0 indicates that w^k is a B-stationary point under the assumption y^k − z^k ∉ bd(K ∪ (−K)), which is equivalent to the nondegeneracy condition in Definition 3.1 if K ∋ y^k ⊥ z^k ∈ K. Hence, (7.1) is appropriate as a stopping criterion of the algorithm. As the CM function, we choose

  ĝ(α) := ( √(α² + 4) + α )/2.

Experiment 1. In the first experiment, we solve the following test problem of the form (1.1):

  Minimize  ‖x‖² + ‖y‖²
  subject to  Ax ≤ b,  z = Nx + My + q,  K ∋ y ⊥ z ∈ K,      (7.2)

where (x, y, z) ∈ R^{10} × R^m × R^m, and each element of A ∈ R^{10×10} and N ∈ R^{m×10} is randomly chosen from [−1, 1]. Moreover, each element of b ∈ R^{10} is randomly chosen from [0, 1]. In addition, M ∈ R^{m×m} is a symmetric positive definite matrix generated by M = M_1M_1^⊤ + 0.1I, where M_1 ∈ R^{m×m} is a matrix whose entries are randomly chosen from [−1, 1]. The vector q ∈ R^m is set to be q := ξ^z − Mξ^y with ξ^y ∈ R^m and ξ^z ∈ R^m, whose components are randomly chosen from [−1, 1]. We choose different Cartesian structures for K, and generate 50 problems for each K. In applying Algorithm 1, we set the initial point to w^0 = (x^0, y^0, z^0) := (0, ξ^y, ξ^z) ∈ R^{10} × R^m × R^m, so that Ax^0 ≤ b and z^0 = Nx^0 + My^0 + q are satisfied. We choose the smoothing parameters {μ_k} as μ_k := 1.0 × 0.8^k.
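For reference, the random instances of Experiment 1 can be generated along the following lines; this is a sketch, and the sampling intervals follow the reconstruction stated in the text above.

```python
import numpy as np

def make_instance(m, rng):
    """Random instance of test problem (7.2) in Experiment 1 (a sketch)."""
    A = rng.uniform(-1.0, 1.0, (10, 10))
    b = rng.uniform(0.0, 1.0, 10)          # b >= 0, so x0 = 0 satisfies Ax <= b
    N = rng.uniform(-1.0, 1.0, (m, 10))
    M1 = rng.uniform(-1.0, 1.0, (m, m))
    M = M1 @ M1.T + 0.1 * np.eye(m)        # symmetric positive definite
    xi_y = rng.uniform(-1.0, 1.0, m)
    xi_z = rng.uniform(-1.0, 1.0, m)
    q = xi_z - M @ xi_y                    # makes w0 = (0, xi_y, xi_z) feasible
    return A, b, N, M, q, xi_y, xi_z
```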

The obtained results are shown in Tables 1 and 2, where (K^ν)^κ := K^ν × K^ν × ⋯ × K^ν ⊆ R^{νκ} (κ factors) and each column represents the following:

ite: the average number of iterations among the 50 test problems for each K;
cpu(s): the average cpu time in seconds among the 50 test problems for each K;
non(%): the percentage of test problems whose solutions obtained by Algorithm 1 satisfy the nondegeneracy condition in Definition 3.1.

Recall that convergence to a B-stationary point is proved under the nondegeneracy condition. Hence, the value of non represents the percentage of problems for which the algorithm successfully finds B-stationary points. From Table 1, we can observe that ite does not change very much, although the value of cpu(s) tends to become larger as m increases. From Table 2, we can see that non(%) tends to be less than 100 if K includes K^1 or K^2. Indeed, when K = (K^1)^{10} and (K^2)^5, the values of non(%) are 74 and 86, respectively, whereas non(%) is 100 whenever the dimension of every SOC in K is larger than 2.

Table 1: Results for problems with a single SOC complementarity constraint (Experiment 1): columns m, K, ite, cpu(s), non(%).

Table 2: Results for problems with multiple SOC complementarity constraints (Experiment 1): columns m, K, ite, cpu(s), non(%).

Experiment 2. In the second experiment, we apply Algorithm 1 to a bilevel programming problem with a robust optimization problem in the lower level. Bilevel programming has wide applications such as network design and production planning [2, 9]. On the other hand, robust optimization is known to be a powerful methodology to treat optimization problems with uncertain data [3, 4]. In this experiment, we solve the following problem:

  Minimize_{(x,y)∈R^4×R^4}  ‖x − Cy‖² − Σ_{i=1}^4 x_i
  subject to  0 ≤ x_i ≤ 5 (i = 1, 2, 3, 4),
              1 ≤ x_1 + 2x_2 + x_4 ≤ 3,  1 ≤ x_2 + x_3 − x_4 ≤ 2,
              y solves P(x),      (7.3)

with

  P(x):  Minimize_{y∈R^4}  max_{x̃∈U_r(x)} x̃^⊤y + (1/2)y^⊤My,

where r > 0 is an uncertainty parameter, U_r(x) ⊆ R^4 is the uncertainty set defined by U_r(x) := { x̃ ∈ R^4 : ‖x̃ − x‖ ≤ r }, and M ∈ R^{4×4} and C ∈ R^{4×4} are given matrices. For solving problem (7.3), we introduce an auxiliary variable γ ∈ R to reformulate the lower-level minimax problem P(x) as the following SOCP:

  Minimize_{(γ,y)∈R×R^4}  (1/2)y^⊤My + x^⊤y + rγ
  subject to  K^5 ∋ (γ, y).
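This reformulation rests on evaluating the inner maximum in closed form; a one-line derivation via the Cauchy-Schwarz inequality gives

  max_{x̃∈U_r(x)} x̃^⊤y = x^⊤y + max_{‖d‖≤r} d^⊤y = x^⊤y + r‖y‖,

so P(x) amounts to minimizing (1/2)y^⊤My + x^⊤y + r‖y‖ over y, and writing ‖y‖ ≤ γ as the SOC constraint (γ, y) ∈ K^5 yields the SOCP above; since the objective is increasing in γ, the constraint is tight at any optimum.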

Furthermore, the above SOCP can be rewritten as the following SOC complementarity problem:

  K^5 ∋ (γ, y) ⊥ (r, My + x) ∈ K^5.

Thus, we can convert problem (7.3) to the following problem:

  Minimize_{(x,y,z,γ)∈R^4×R^4×R^5×R}  ‖x − Cy‖² − Σ_{i=1}^4 x_i
  subject to  0 ≤ x_i ≤ 5 (i = 1, 2, 3, 4),
              1 ≤ x_1 + 2x_2 + x_4 ≤ 3,  1 ≤ x_2 + x_3 − x_4 ≤ 2,
              z = (r, My + x),
              K^5 ∋ (γ, y) ⊥ z ∈ K^5,      (7.4)

which is of the form (1.1). For the sake of comparison, problem (7.4) is solved not only by Algorithm 1 but also by the smoothing method [20], which is described as follows:

Smoothing method
Step 0. Choose a positive sequence {τ_l} such that τ_l ↓ 0. Set l := 0.
Step 1. Find a stationary point w^l = (x^l, y^l, z^l) of the smoothed problem (5.1) with μ = τ_l.
Step 2. If w^l is feasible for MPSOCC (1.1), then stop. Otherwise, set l := l + 1 and go to Step 1.

Each algorithm is implemented in the following way. In Step 0 of Algorithm 1, we set the smoothing parameters μ_k := 0.8^k (k ≥ 0) and the starting point x^0 := (1, 1, 1, 1), (γ^0, y^0) := (0, 0) ∈ R × R^4 and z^0 := (r, My^0 + x^0) ∈ R^5, so that the linear constraint in (7.4) holds at w^0. The choice of the other parameters is the same as in Experiment 1. In Step 0 of the smoothing method, we set τ_l := 0.8^l for l ≥ 0. In Step 1, for solving problem (5.1) with μ = τ_l, we apply Algorithm 1 with a slight modification, where the smoothing parameter μ_k is fixed to τ_l for all k and the termination criterion is replaced by ‖dw^k‖ ≤ 10^{−7}. In Step 2, we stop the smoothing method when ‖Φ(y^l, z^l)‖ ≤ 10^{−7} is satisfied.*2

We then apply both methods to problem (7.4) with r = 0.02, 0.04, 0.06, 0.08 and 0.1. The obtained results are shown in Tables 3 and 4, whose columns represent the following:

(x*, y*, γ*): the values of x, y, γ obtained by Algorithm 1;
(λ_1*, λ_2*): the spectral values of (γ*, y*) + z* with respect to K^5 defined as in Definition 2.1, where z* := (r, My* + x*);
ite_out: the number of outer iterations;
QP: the number of QP subproblems (5.2) solved in each trial.

From Table 3, for all r, we can observe (λ_1*, λ_2*) > 0, which means (γ*, y*) + z* ∈ int K^5, i.e., the nondegeneracy condition holds at the obtained solution. Hence, Algorithm 1 finds a B-stationary point of problem (7.4) successfully. From Table 4, we cannot find a significant difference between the values of ite_out for the two methods. However, it is observed that the value of QP in the smoothing method tends to be much larger than that in Algorithm 1. Indeed, when r = 0.02, the smoothing method requires QP = 218, which is almost five times larger than QP = 44 for Algorithm 1. This fact suggests that the smoothing method needs to solve a number of QP subproblems in Step 1 for solving each smoothed problem (5.1) with fixed μ, while Algorithm 1 solves only one QP subproblem (5.2) for each smoothed problem (5.1). As a result, the computational cost of the smoothing method tends to be larger than that of Algorithm 1.

*2 The remaining constraints Ax^l ≤ b and z^l = Nx^l + My^l + q are automatically satisfied since w^l is feasible to the smoothed problem (5.1).

Table 3: Results for Algorithm 1 (Experiment 2): columns r, x*, y*, γ*, (λ_1*, λ_2*).

Table 4: Comparison of Algorithm 1 and the smoothing method (Experiment 2): columns r, and cpu(s), ite_out, QP for each method.

8 Conclusion

In this paper, we have considered the mathematical program with second-order cone (SOC) complementarity constraints. We have proposed an algorithm based on the smoothing and sequential quadratic programming (SQP) methods, in which we replace the SOC complementarity constraints with smooth equality constraints by means of the natural residual and its smoothing function, and apply the SQP method while decreasing the smoothing parameter gradually. We have shown that the proposed algorithm possesses the global convergence property under the Cartesian P and nondegeneracy assumptions. We have further confirmed the efficiency of the algorithm through numerical experiments.

References

[1] F. Alizadeh and D. Goldfarb, Second-order cone programming, Mathematical Programming, 95 (2003), pp. 3-51.

[2] J. F. Bard, Practical Bilevel Optimization, Kluwer Academic Publishers, The Netherlands, 1998.

[3] A. Ben-Tal and A. Nemirovski, Robust convex optimization, Mathematics of Operations Research, 23 (1998), pp. 769-805.

[4] A. Ben-Tal and A. Nemirovski, Robust solutions of uncertain linear programs, Operations Research Letters, 25 (1999), pp. 1-13.

[5] J.-S. Chen, X. Chen and P. Tseng, Analysis of nonsmooth vector-valued functions associated with second-order cones, Mathematical Programming, 101 (2004), pp. 95-117.

[6] X. Chen and H. D. Qi, Cartesian P-property and its applications to the semidefinite linear complementarity problem, Mathematical Programming, 106 (2006), pp. 177-201.

[7] X. D. Chen, D. Sun and J. Sun, Complementarity functions and numerical experiments on some smoothing Newton methods for second-order-cone complementarity problems, Computational Optimization and Applications, 25 (2003), pp. 39-56.

[8] R. W. Cottle, J.-S. Pang and R. E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.

[9] S. Dempe, Foundations of Bilevel Programming, Kluwer Academic Publishers, The Netherlands, 2002.


More information

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization A New Sequential Optimality Condition for Constrained Nonsmooth Optimization Elias Salomão Helou Sandra A. Santos Lucas E. A. Simões November 23, 2018 Abstract We introduce a sequential optimality condition

More information

A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function

A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function Volume 30, N. 3, pp. 569 588, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation

More information

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Technische Universität Dresden Herausgeber: Der Rektor

Technische Universität Dresden Herausgeber: Der Rektor Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002

More information

Nonlinear Programming, Elastic Mode, SQP, MPEC, MPCC, complementarity

Nonlinear Programming, Elastic Mode, SQP, MPEC, MPCC, complementarity Preprint ANL/MCS-P864-1200 ON USING THE ELASTIC MODE IN NONLINEAR PROGRAMMING APPROACHES TO MATHEMATICAL PROGRAMS WITH COMPLEMENTARITY CONSTRAINTS MIHAI ANITESCU Abstract. We investigate the possibility

More information

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e

More information

A smoothing augmented Lagrangian method for solving simple bilevel programs

A smoothing augmented Lagrangian method for solving simple bilevel programs A smoothing augmented Lagrangian method for solving simple bilevel programs Mengwei Xu and Jane J. Ye Dedicated to Masao Fukushima in honor of his 65th birthday Abstract. In this paper, we design a numerical

More information

Robust Nash Equilibria and Second-Order Cone Complementarity Problems

Robust Nash Equilibria and Second-Order Cone Complementarity Problems Robust Nash Equilibria and Second-Order Cone Complementarit Problems Shunsuke Haashi, Nobuo Yamashita, Masao Fukushima Department of Applied Mathematics and Phsics, Graduate School of Informatics, Koto

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR P 0 LCPS YUN-BIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new non-interior-point

More information

Sparse Approximation via Penalty Decomposition Methods

Sparse Approximation via Penalty Decomposition Methods Sparse Approximation via Penalty Decomposition Methods Zhaosong Lu Yong Zhang February 19, 2012 Abstract In this paper we consider sparse approximation problems, that is, general l 0 minimization problems

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Differentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA

Differentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA Master s Thesis Differentiable exact penalty functions for nonlinear optimization with easy constraints Guidance Assistant Professor Ellen Hidemi FUKUDA Takuma NISHIMURA Department of Applied Mathematics

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

Constrained Nonlinear Optimization Algorithms

Constrained Nonlinear Optimization Algorithms Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu Institute for Mathematics and its Applications University of Minnesota August 4, 2016

More information

496 B.S. HE, S.L. WANG AND H. YANG where w = x y 0 A ; Q(w) f(x) AT g(y) B T Ax + By b A ; W = X Y R r : (5) Problem (4)-(5) is denoted as MVI

496 B.S. HE, S.L. WANG AND H. YANG where w = x y 0 A ; Q(w) f(x) AT g(y) B T Ax + By b A ; W = X Y R r : (5) Problem (4)-(5) is denoted as MVI Journal of Computational Mathematics, Vol., No.4, 003, 495504. A MODIFIED VARIABLE-PENALTY ALTERNATING DIRECTIONS METHOD FOR MONOTONE VARIATIONAL INEQUALITIES Λ) Bing-sheng He Sheng-li Wang (Department

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games

6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games 6.254 : Game Theory with Engineering Applications Lecture 7: Asu Ozdaglar MIT February 25, 2010 1 Introduction Outline Uniqueness of a Pure Nash Equilibrium for Continuous Games Reading: Rosen J.B., Existence

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained

More information

Exact penalty decomposition method for zero-norm minimization based on MPEC formulation 1

Exact penalty decomposition method for zero-norm minimization based on MPEC formulation 1 Exact penalty decomposition method for zero-norm minimization based on MPEC formulation Shujun Bi, Xiaolan Liu and Shaohua Pan November, 2 (First revised July 5, 22) (Second revised March 2, 23) (Final

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

SUCCESSIVE LINEARIZATION METHODS FOR NONLINEAR SEMIDEFINITE PROGRAMS 1. Preprint 252 August 2003

SUCCESSIVE LINEARIZATION METHODS FOR NONLINEAR SEMIDEFINITE PROGRAMS 1. Preprint 252 August 2003 SUCCESSIVE LINEARIZATION METHODS FOR NONLINEAR SEMIDEFINITE PROGRAMS 1 Christian Kanzow 2, Christian Nagel 2 and Masao Fukushima 3 Preprint 252 August 2003 2 University of Würzburg Institute of Applied

More information

Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints

Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints Comput Optim Appl (2009) 42: 231 264 DOI 10.1007/s10589-007-9074-4 Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints A.F. Izmailov M.V. Solodov Received:

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES Francisco Facchinei 1, Andreas Fischer 2, Christian Kanzow 3, and Ji-Ming Peng 4 1 Università di Roma

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

Interior Methods for Mathematical Programs with Complementarity Constraints

Interior Methods for Mathematical Programs with Complementarity Constraints Interior Methods for Mathematical Programs with Complementarity Constraints Sven Leyffer, Gabriel López-Calva and Jorge Nocedal July 14, 25 Abstract This paper studies theoretical and practical properties

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter

Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter Complementarity Formulations of l 0 -norm Optimization Problems 1 Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter Abstract: In a number of application areas, it is desirable to

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2

Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2 JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 94, No. 2, pp. 449-467, AUGUST 1997 Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2 M. V. SOLODOV3

More information

The Karush-Kuhn-Tucker (KKT) conditions

The Karush-Kuhn-Tucker (KKT) conditions The Karush-Kuhn-Tucker (KKT) conditions In this section, we will give a set of sufficient (and at most times necessary) conditions for a x to be the solution of a given convex optimization problem. These

More information

2 ReSNA. Matlab. ReSNA. Matlab. Wikipedia [7] ReSNA. (Regularized Smoothing Newton. ReSNA [8] Karush Kuhn Tucker (KKT) 2 [9]

2 ReSNA. Matlab. ReSNA. Matlab. Wikipedia [7] ReSNA. (Regularized Smoothing Newton. ReSNA [8] Karush Kuhn Tucker (KKT) 2 [9] c ReSNA 2 2 2 2 ReSNA (Regularized Smoothing Newton Algorithm) ReSNA 2 2 ReSNA 2 ReSNA ReSNA 2 2 Matlab 1. ReSNA [1] 2 1 Matlab Matlab Wikipedia [7] ReSNA (Regularized Smoothing Newton Algorithm) ReSNA

More information

A Local Convergence Analysis of Bilevel Decomposition Algorithms

A Local Convergence Analysis of Bilevel Decomposition Algorithms A Local Convergence Analysis of Bilevel Decomposition Algorithms Victor DeMiguel Decision Sciences London Business School avmiguel@london.edu Walter Murray Management Science and Engineering Stanford University

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

A proximal-like algorithm for a class of nonconvex programming

A proximal-like algorithm for a class of nonconvex programming Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

1. Introduction. We consider mathematical programs with equilibrium constraints in the form of complementarity constraints:

1. Introduction. We consider mathematical programs with equilibrium constraints in the form of complementarity constraints: SOME PROPERTIES OF REGULARIZATION AND PENALIZATION SCHEMES FOR MPECS DANIEL RALPH AND STEPHEN J. WRIGHT Abstract. Some properties of regularized and penalized nonlinear programming formulations of mathematical

More information

Gerd Wachsmuth. January 22, 2016

Gerd Wachsmuth. January 22, 2016 Strong stationarity for optimization problems with complementarity constraints in absence of polyhedricity With applications to optimization with semidefinite and second-order-cone complementarity constraints

More information

On smoothness properties of optimal value functions at the boundary of their domain under complete convexity

On smoothness properties of optimal value functions at the boundary of their domain under complete convexity On smoothness properties of optimal value functions at the boundary of their domain under complete convexity Oliver Stein # Nathan Sudermann-Merx June 14, 2013 Abstract This article studies continuity

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION. September 1997 (revised May 1998 and March 1999)

A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION. September 1997 (revised May 1998 and March 1999) A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION Bintong Chen 1 Xiaojun Chen 2 Christian Kanzow 3 September 1997 revised May 1998 and March 1999 Abstract: We introduce a new NCP-function in order to reformulate

More information

An unconstrained smooth minimization reformulation of the second-order cone complementarity problem

An unconstrained smooth minimization reformulation of the second-order cone complementarity problem Math. Program., Ser. B 104, 93 37 005 Digital Object Identifier DOI 10.1007/s10107-005-0617-0 Jein-Shan Chen Paul Tseng An unconstrained smooth minimization reformulation of the second-order cone complementarity

More information

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present

More information

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on Multi-Agent

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow

AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics and Statistics

More information

An improved generalized Newton method for absolute value equations

An improved generalized Newton method for absolute value equations DOI 10.1186/s40064-016-2720-5 RESEARCH Open Access An improved generalized Newton method for absolute value equations Jingmei Feng 1,2* and Sanyang Liu 1 *Correspondence: fengjingmeilq@hotmail.com 1 School

More information

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation

More information

New Infeasible Interior Point Algorithm Based on Monomial Method

New Infeasible Interior Point Algorithm Based on Monomial Method New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

Polynomial complementarity problems

Polynomial complementarity problems Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016

More information

Complementarity Formulations of l 0 -norm Optimization Problems

Complementarity Formulations of l 0 -norm Optimization Problems Complementarity Formulations of l 0 -norm Optimization Problems Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter May 17, 2016 Abstract In a number of application areas, it is desirable

More information

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0,

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0, AN INTERIOR-POINT ALGORITHM FOR LARGE-SCALE NONLINEAR OPTIMIZATION WITH INEXACT STEP COMPUTATIONS FRANK E. CURTIS, OLAF SCHENK, AND ANDREAS WÄCHTER Abstract. We present a line-search algorithm for large-scale

More information

An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1

An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1 An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1 Nobuo Yamashita 2, Christian Kanzow 3, Tomoyui Morimoto 2, and Masao Fuushima 2 2 Department of Applied

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS. 1. Introduction. Many practical optimization problems have the form (1.

ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS. 1. Introduction. Many practical optimization problems have the form (1. ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS R. ANDREANI, E. G. BIRGIN, J. M. MARTíNEZ, AND M. L. SCHUVERDT Abstract. Augmented Lagrangian methods with general lower-level constraints

More information

Department of Social Systems and Management. Discussion Paper Series

Department of Social Systems and Management. Discussion Paper Series Department of Social Systems and Management Discussion Paper Series No. 1262 Complementarity Problems over Symmetric Cones: A Survey of Recent Developments in Several Aspects by Akiko YOSHISE July 2010

More information

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE A STABILIZED SQP METHOD: GLOBAL CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-13-4 Revised July 18, 2014, June 23,

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties

A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties André L. Tits Andreas Wächter Sasan Bahtiari Thomas J. Urban Craig T. Lawrence ISR Technical

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Complementarity Formulations of l 0 -norm Optimization Problems

Complementarity Formulations of l 0 -norm Optimization Problems Complementarity Formulations of l 0 -norm Optimization Problems Mingbin Feng, John E. Mitchell,Jong-Shi Pang, Xin Shen, Andreas Wächter Original submission: September 23, 2013. Revised January 8, 2015

More information

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information

20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes

20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes 016 0 J.-S. CHEN, C.-H. KO AND X.-R. WU whereas the natural residual function ϕ : R R is given by ϕ (a, b) = a (a b) + = min{a, b}. Recently, the generalized Fischer-Burmeister function ϕ p : R R, which

More information