On the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem


On the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem

Guidance: Professor Masao Fukushima, Assistant Professor Nobuo Yamashita

Shunsuke Hayashi

Graduate Course in Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University

February 2002

Abstract

The second-order cone complementarity problem (SOCCP) is a wide class of problems which includes the nonlinear complementarity problem (NCP) and the second-order cone programming problem (SOCP). Recently, Fukushima, Luo and Tseng extended some merit functions and their smoothing functions for the NCP to the SOCCP. Moreover, they derived computable formulas for the Jacobians of the smoothing functions and gave conditions for the Jacobians to be invertible. In this paper, we focus on a merit function for the SOCCP, and show that it is coercive under the condition that the function involved in the SOCCP is strongly monotone. Furthermore, we propose a globally convergent algorithm, based on smoothing and regularization methods, for solving merely monotone SOCCPs, and examine its effectiveness by means of numerical experiments.

Contents

1 Introduction
2 Preliminaries
  2.1 Spectral Factorization
  2.2 Merit Function
3 Coerciveness of Merit Function with Natural Residual
4 Algorithm for Solving SOCCP
  4.1 Smoothing and Regularization Methods
  4.2 Stationary Point of Smoothing Function
  4.3 Globally Convergent Algorithm
5 Numerical Experiments
  5.1 Newton's Method
  5.2 Results of Experiments
6 Final Remarks

1 Introduction

In this paper, we consider the second-order cone complementarity problem (SOCCP) [8]:

    Find (x, y, ζ) ∈ R^n × R^n × R^l such that
    x ∈ K,  y ∈ K,  x^T y = 0,  F(x, y, ζ) = 0,                            (1.1)

where F : R^n × R^n × R^l → R^n × R^l is a continuously differentiable mapping, and K ⊆ R^n is the direct product of second-order cones, that is, K = K^{n_1} × · · · × K^{n_m} with n = n_1 + · · · + n_m and the n_i-dimensional second-order cone K^{n_i} ⊆ R^{n_i} defined by

    K^{n_i} = { (z_1, z_2) ∈ R × R^{n_i − 1} | z_1 ≥ ||z_2|| }.            (1.2)

Here and throughout, || · || denotes the Euclidean norm. The SOCCP includes a wide class of problems such as the nonlinear complementarity problem (NCP) and the second-order cone programming problem (SOCP) [10]. To see this, consider the SOCCP where n_1 = · · · = n_m = 1 and F(x, y, ζ) = f(x) − y with f : R^n → R^n. Then this SOCCP reduces to the NCP: Find x ∈ R^n such that

    x ≥ 0,  f(x) ≥ 0,  x^T f(x) = 0.

On the other hand, consider the SOCP:

    Minimize θ(z) subject to γ(z) ∈ K,                                     (1.3)

where θ : R^s → R and γ : R^s → R^t. The Karush-Kuhn-Tucker (KKT) conditions of SOCP (1.3) can be written as

    ∇θ(z) − ∇γ(z)λ = 0,  λ ∈ K,  γ(z) ∈ K,  λ^T γ(z) = 0,                  (1.4)

where λ ∈ R^t is the Lagrange multiplier vector. Then, by setting n = t, l = s, x = λ, y = µ, ζ = z and

    F(x, y, ζ) = ( µ − γ(z),  ∇θ(z) − ∇γ(z)λ ),

the KKT conditions (1.4) can be reduced to the form of SOCCP (1.1). For solving the SOCCP, Fukushima, Luo and Tseng [8] reformulated the SOCCP as an equivalent system of equations, and considered smoothing methods for that system. In this paper, instead of an equivalent system of equations, we focus on the unconstrained optimization problem equivalent to SOCCP (1.1):

    Minimize Θ(x, y, ζ),                                                   (1.5)

where Θ is a function from R^n × R^n × R^l into R. The objective function Θ is called a merit function for SOCCP (1.1). In order to construct a merit function for the SOCCP, it is convenient to introduce a function Φ : R^n × R^n → R^n satisfying

    Φ(x, y) = 0  ⟺  x ∈ K, y ∈ K, x^T y = 0.                               (1.6)

By using such a function, SOCCP (1.1) can be rewritten as the following equivalent system of equations:

    Φ(x, y) = 0,  F(x, y, ζ) = 0.

Now we define the function Ψ : R^n × R^n × R^l → R by

    Ψ(x, y, ζ) = (1/2) ||Φ(x, y)||² + (1/2) ||F(x, y, ζ)||².               (1.7)

Then, it is easy to see that Ψ(x, y, ζ) ≥ 0 for any (x, y, ζ) ∈ R^n × R^n × R^l, and that Ψ(x, y, ζ) = 0 if and only if (x, y, ζ) is a solution of SOCCP (1.1). Therefore, the function Ψ defined by (1.7) can serve as a merit function for SOCCP (1.1). Fukushima, Luo and Tseng [8] showed that the natural residual function satisfying (1.6) for the NCP can be extended to the SOCCP by means of Jordan algebra. However, they did not consider the merit function Ψ explicitly, and there remains much to study on it. One of the important questions is under what conditions Ψ is coercive. The function Ψ : R^n × R^n × R^l → R is said to be coercive if

    lim_{||(x, y, ζ)|| → ∞} Ψ(x, y, ζ) = ∞.

Coerciveness of the merit function is an important property from the viewpoint of optimization. For example, if the merit function Ψ is coercive, then the level sets L_α = {(x, y, ζ) | Ψ(x, y, ζ) ≤ α} are bounded for all α ∈ R, which guarantees that a sequence generated by an appropriate descent algorithm applied to problem (1.5) has an accumulation point. One of the main purposes of this paper is to give a condition for the merit function Ψ given by (1.7) with the natural residual to be coercive. More precisely, we will show that the merit function is coercive when F is strongly monotone. It is well known that the strong monotonicity and Lipschitz continuity of F together are sufficient for the coerciveness of such merit functions []. The result of this paper weakens these conditions. Another main purpose of this paper is to develop an algorithm for solving the SOCCP.

In order to solve the equivalent minimization problem efficiently, we propose to combine a smoothing method with a regularization method. Smoothing methods, which aim to handle nondifferentiability of functions, have been developed for solving various kinds of complementarity problems [1, 3, 4, 9, 13, 14]. On the other hand, regularization methods are used to deal with ill-posed problems [5, 6]. In this paper, we propose a hybrid method based on these two methods and show that it is globally convergent under a monotonicity assumption on the problem.

The paper is organized as follows. In Section 2, we introduce the spectral factorization for second-order cones, which plays a key role in analyzing the properties of merit functions for the SOCCP. Moreover, we construct a merit function Ψ by using the natural residual for the SOCCP. In Section 3, we show that the merit function Ψ defined in terms of the natural residual

is coercive, provided that the function involved in the SOCCP is strongly monotone. In Section 4, we propose an algorithm for solving the SOCCP and show that it has global convergence for monotone SOCCPs. In Section 5, we present some numerical results with the proposed algorithm. In Section 6, we conclude the paper with some remarks.

2 Preliminaries

2.1 Spectral Factorization

In this section, we briefly review some properties of the spectral factorization with respect to a second-order cone, which will be used in the subsequent analysis. Spectral factorization is one of the basic issues of Jordan algebra. For more detail, see [7, 8]. For any vector x = (x_1, x_2) ∈ R × R^{n−1} (n ≥ 2), its spectral factorization with respect to the second-order cone K^n is defined as

    x = κ_1 v_1 + κ_2 v_2,

where κ_1 and κ_2 are the spectral values given by

    κ_i = x_1 + (−1)^i ||x_2||,   i = 1, 2,                                (2.1)

and v_1 and v_2 are the spectral vectors given by

    v_i = (1/2) ( 1, (−1)^i x_2 / ||x_2|| )   if x_2 ≠ 0,
    v_i = (1/2) ( 1, (−1)^i ω )               if x_2 = 0,     i = 1, 2,    (2.2)

with any ω ∈ R^{n−1} such that ||ω|| = 1. The spectral values and vectors have the following properties. For any x ∈ R^n, the inequality κ_1 ≤ κ_2 holds, and

    κ_1 ≥ 0  ⟺  x ∈ K^n.                                                  (2.3)

Moreover, for any x ∈ R^n, we have ||v_i|| = 1/√2 for i = 1, 2, and v_1^T v_2 = 0. For any x ∈ R^n, let [x]_+^{K^n} denote the projection of x onto the second-order cone K^n, that is,

    [x]_+^{K^n} = arg min_{y ∈ K^n} ||y − x||.

We often write [x]_+ for [x]_+^{K^n} when K^n is clear from the context. In particular, when n = 1,

    [α]_+ = max(0, α)                                                     (2.4)

for any α ∈ R. For n ≥ 2, the projection [·]_+ can also be calculated easily, as shown in the following proposition.

Proposition 2.1 [8, Proposition 3.3] For any x ∈ R^n (n ≥ 2),

    [x]_+ = [κ_1]_+ v_1 + [κ_2]_+ v_2,

where κ_1 and κ_2 are the spectral values of x defined by (2.1), and v_1 and v_2 are the spectral vectors of x defined by (2.2).
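As a concrete illustration of the factorization and of Proposition 2.1, the following Python sketch (ours, not part of the original thesis; the function names are our own) computes the spectral decomposition and the projection onto K^n for n ≥ 2:

```python
import numpy as np

def spectral_factorization(x):
    """Spectral factorization of x = (x1, x2) in R x R^{n-1} w.r.t. K^n.

    Returns the spectral values (k1, k2) with k1 <= k2 and the spectral
    vectors (v1, v2), so that x = k1*v1 + k2*v2 (cf. (2.1), (2.2))."""
    x1, x2 = x[0], x[1:]
    r = np.linalg.norm(x2)
    if r > 0:
        w = x2 / r
    else:                       # any unit vector w works when x2 = 0
        w = np.zeros_like(x2)
        w[0] = 1.0
    k1, k2 = x1 - r, x1 + r
    v1 = 0.5 * np.concatenate(([1.0], -w))
    v2 = 0.5 * np.concatenate(([1.0], w))
    return (k1, k2), (v1, v2)

def project_soc(x):
    """Projection onto K^n (Proposition 2.1): [x]_+ = [k1]_+ v1 + [k2]_+ v2."""
    (k1, k2), (v1, v2) = spectral_factorization(x)
    return max(k1, 0.0) * v1 + max(k2, 0.0) * v2
```

For example, x = (1, 3, 0) lies outside K³ ∪ (−K³) and projects to (2, 2, 0) = [κ_2]_+ v_2, in accordance with (2.7) below.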

7 From Proposition.,.,.3 and.4, we can easily verify that x K n [x] + = x,.5 x K n [x] + = 0,.6 x / K n K n = [x] + = κ v..7 These relation will be useful in showing the coerciveness of a merit function based on the natural residual.. Merit Function In this section, we aim to construct a merit function for SOCCP. by using the natural residual ϕ NR : R n R n R n defined in [8]. Firstly, we note that the complementarity conditions on K can be decomposed into those conditions on K n i. Proposition. Let K = K n K nm, x = x,..., x m R n R nm and y = y,..., y m R n R nm. Then the following relation holds : x K, y K, x T y = 0 if and only if x i K n i, y i K n i, x it y i = 0 i =,..., m. Proof. Since if part is evident, we only show only if part. Noticing that x i, y i K n i implies x i xi and yi yi, we have xit y i = x i yi + xi T y i x i yi + xi T y i 0 for each i, where the last inequality follows from Cauchy-Schwarz inequality. On the other hand, x T y = 0 yields m i= x it y i = 0. Thus x it y i = 0 holds for each i. This proposition naturally leads us to construct a function Φ : R n R n R n satisfying.6 as ϕ x, y Φx, y =. ϕ m x m, y m, where ϕ i : R n i R n i R n i is a function satisfying ϕ i x i, y i = 0 x i K n i, y i K n i, x i T y i = 0.8 for each i =,..., m. Fukushima, Luo and Tseng [8] showed that.8 holds for the natural residual function ϕ NR : R n i R n i R n i defined by ϕ NR x i, y i = x i [x i y i ] Kn i +. This function is a natural extension of the corresponding function for the NCP. Using this function, we define the function Φ NR : R n R n R n by ϕ NR x, y Φ NR x, y =., ϕ NR x m, y m and then, we can construct a merit function for SOCCP. as ΨNRx, y, ζ = Φ x, NR y + F x, y, ζ = m ϕ NR x i, y i + F x, y, ζ..9 i= 4

3 Coerciveness of Merit Function with Natural Residual

In this section, we focus on the merit function Ψ_NR defined by (2.9), and study conditions for Ψ_NR to be coercive. In the remainder of the paper, we restrict ourselves to the SOCCP in which (i) F is given by F(x, y, ζ) = f(x) − y with a function f : R^n → R^n, and (ii) K = K^n. Then, we can rewrite SOCCP (1.1) as follows: Find (x, y) ∈ R^n × R^n such that

    x ∈ K^n,  y ∈ K^n,  x^T y = 0,  y = f(x).                             (3.1)

Note that assumption (i) implies that F does not involve the additional variable ζ. This assumption may seem rather restrictive. However, under assumption (i), the KKT conditions for SOCP (1.3) can still be written in the form of the SOCCP (see Section 6 of this paper). Assumption (ii) implies Φ_NR(x, y) = ϕ_NR(x, y). This assumption is only for simplicity of presentation, and the results obtained in this section can be extended to the general K in a straightforward manner. The merit function Ψ_NR for SOCCP (3.1) can be constructed as

    Ψ_NR(x, y) = (1/2) ||ϕ_NR(x, y)||² + (1/2) ||f(x) − y||².              (3.2)

Now we investigate the coerciveness of Ψ_NR defined by (3.2). The next proposition says that, to show the coerciveness of Ψ_NR(x, y), it is sufficient to show the coerciveness of Ψ̂_NR : R^n → R defined by

    Ψ̂_NR(x) := ||ϕ_NR(x, f(x))||.                                         (3.3)

Proposition 3.1 Ψ_NR is coercive if Ψ̂_NR is coercive.

Proof. Noticing that √2 √(||ξ||² + ||η||²) ≥ ||ξ|| + ||η|| for any ξ, η ∈ R^n, we have

    2 √(Ψ_NR(x, y)) ≥ ||x − [x − y]_+|| + ||f(x) − y||
                    ≥ ||x − [x − f(x)]_+|| − ||[x − f(x)]_+ − [x − y]_+|| + ||f(x) − y||
                    ≥ ||x − [x − f(x)]_+|| − ||f(x) − y|| + ||f(x) − y||
                    = Ψ̂_NR(x),

where the second inequality follows from the triangle inequality and the third inequality follows from the nonexpansiveness of the projection operator. Hence, Ψ_NR is coercive if Ψ̂_NR is coercive. ∎

In order to investigate conditions for Ψ̂_NR to be coercive, we need the following five lemmas.

Lemma 3.1 (a) If x − y ∈ K^n, then ϕ_NR(x, y) = y. (b) If x − y ∈ −K^n, then ϕ_NR(x, y) = x.

Proof. (a) From (2.5) we have ϕ_NR(x, y) = x − [x − y]_+ = x − (x − y) = y. (b) From (2.6) we have ϕ_NR(x, y) = x − [x − y]_+ = x − 0 = x. ∎

Lemma 3.2 For any x = (x_1, x_2) ∈ R × R^{n−1} with x_1 ≤ 0 and any y ∈ R^n,

    ||ϕ_NR(x, y)|| ≥ ||x|| / √2.                                           (3.4)

Proof. Let κ_1 and κ_2 be the spectral values of x defined by (2.1), and let v_1 and v_2 be the spectral vectors of x defined by (2.2). Since κ_1 = x_1 − ||x_2|| ≤ 0 from x_1 ≤ 0, we have [x]_+ = [κ_2]_+ v_2. Furthermore, since [x]_+ is the nearest point to x in K^n and [x − y]_+ ∈ K^n, we have ||x − [x − y]_+|| ≥ ||x − [x]_+||. Hence, we obtain

    ||ϕ_NR(x, y)|| ≥ ||x − [κ_2]_+ v_2||.                                  (3.5)

When κ_2 < 0, we have ||x − [κ_2]_+ v_2|| = ||x|| ≥ ||x||/√2. When κ_2 ≥ 0, we have ||x − [κ_2]_+ v_2|| = ||x − κ_2 v_2|| = ||κ_1 v_1|| = (1/√2) |x_1 − ||x_2||| = (1/√2) ( |x_1| + ||x_2|| ) ≥ ||x||/√2, where the second-to-last equality holds from x_1 ≤ 0. In either case, we have ||x − [κ_2]_+ v_2|| ≥ ||x||/√2. It then follows from (3.5) that (3.4) holds. ∎

The next lemma gives some properties of unbounded sequences {x^k} and {y^k} for which {ϕ_NR(x^k, y^k)} remains bounded.

Lemma 3.3 Assume that the sequences {x^k} ⊆ R^n and {y^k} ⊆ R^n satisfy (i) lim_{k→∞} ||x^k|| = ∞ and lim_{k→∞} ||y^k|| = ∞; (ii) x^k − y^k ∉ K^n ∪ (−K^n) for all k; (iii) {ϕ_NR(x^k, y^k)} is bounded. Then there exist a constant M > 1 and an infinite subsequence K ⊆ {0, 1, ...} such that:

(a) −M < |x_1^k| − ||x_2^k|| < M and −M < |y_1^k| − ||y_2^k|| < M for all k;

(b-1) {|x_1^k|}_{k∈K}, {||x_2^k||}_{k∈K}, {|y_1^k|}_{k∈K} and {||y_2^k||}_{k∈K} are unbounded;

(b-2) |x_1^k| > 10M, ||x_2^k|| > 10M, |y_1^k| > 10M and ||y_2^k|| > 10M for all k ∈ K;

(c) 1 + (x_2^k)^T y_2^k / ( ||x_2^k|| ||y_2^k|| ) < (10/9) M² / ( ||x_2^k|| ||y_2^k|| ) for all k ∈ K;

(d) 1 − 0.1 < |x_1^k| / ||x_2^k|| < 1 + 0.1 and 1 − 0.1 < |y_1^k| / ||y_2^k|| < 1 + 0.1 for all k ∈ K.

Proof. Let λ_1^k and λ_2^k be the spectral values of x^k − y^k, and let u_1^k and u_2^k be the spectral vectors of x^k − y^k. Then, from assumption (ii) and (2.7) we have

    ϕ_NR(x^k, y^k) = x^k − [x^k − y^k]_+ = x^k − λ_2^k u_2^k.

Moreover, from the definitions (2.1), (2.2) of spectral values and vectors, we have

    x^k − λ_2^k u_2^k = ( x_1^k − (1/2){ x_1^k − y_1^k + ||x_2^k − y_2^k|| },
                          x_2^k − (1/2){ x_1^k − y_1^k + ||x_2^k − y_2^k|| } (x_2^k − y_2^k) / ||x_2^k − y_2^k|| ).

From these equalities, squaring ||ϕ_NR(x^k, y^k)|| and rearranging terms (a lengthy but elementary calculation using (2.1) and (2.2)), we can write

    ||ϕ_NR(x^k, y^k)||² = P_k + Q_k + R_k,                                 (3.6)

where P_k, Q_k and R_k are explicit expressions in x^k and y^k with the following properties: P_k is a nonnegative term controlling the quantities |x_1^k| − ||x_2^k|| and |y_1^k| − ||y_2^k||; Q_k is nonnegative by the triangle inequality; and R_k is nonnegative by the Cauchy-Schwarz inequality, with Q_k and R_k together controlling the alignment of x_2^k and y_2^k. Since {||ϕ_NR(x^k, y^k)||} is bounded, there exists a sufficiently large constant M > 1 such that P_k + Q_k + R_k < M for all k. Since P_k, Q_k and R_k are all nonnegative, we thus have

    P_k < M,  Q_k < M  and  R_k < M  for all k.

We first show (a). P_k < M implies | |x_1^k| − ||x_2^k|| | < M and | |y_1^k| − ||y_2^k|| | < M, that is, (a). By using (a), we show (b). From (a), we have ||x_2^k|| − M < |x_1^k| < ||x_2^k|| + M, and similarly for y^k; hence |x_1^k| → ∞ exactly when ||x_2^k|| → ∞, and likewise for y^k. Moreover, since lim_{k→∞} ||x^k|| = ∞ and lim_{k→∞} ||y^k|| = ∞, there exists an infinite subsequence K ⊆ {0, 1, ...} such that {|x_1^k|}_{k∈K}, {||x_2^k||}_{k∈K}, {|y_1^k|}_{k∈K} and {||y_2^k||}_{k∈K} are unbounded, and |x_1^k| > 10M, ||x_2^k|| > 10M, |y_1^k| > 10M and ||y_2^k|| > 10M for all k ∈ K, that is, (b). Next we show that (c) holds for all k ∈ K. From Q_k < M and R_k < M together with the definitions of Q_k and R_k, a direct calculation, after dividing by ||x_2^k|| ||y_2^k|| > 0 and invoking (a) and (b-2), yields the bound (c). Finally, we show that (d) holds for all k ∈ K. (a) and (b-2) yield, for any k ∈ K,

    |x_1^k| / ||x_2^k|| < ( ||x_2^k|| + M ) / ||x_2^k|| = 1 + M / ||x_2^k|| < 1 + 0.1

and, similarly, |x_1^k| / ||x_2^k|| > 1 − 0.1. The same bounds hold for |y_1^k| / ||y_2^k||. Hence (d) holds for all k ∈ K. ∎

Before we state the fourth lemma, we recall the concepts of monotonicity and strong monotonicity of vector-valued functions.

Definition 3.1 The function f : R^n → R^n is called (a) monotone if, for any x, z ∈ R^n, (x − z)^T ( f(x) − f(z) ) ≥ 0; (b) strongly monotone with modulus ε > 0 if, for any x, z ∈ R^n, (x − z)^T ( f(x) − f(z) ) ≥ ε ||x − z||².

It is obvious that a strongly monotone function is monotone. Besides, we note that f_ε is strongly monotone if f is monotone, where f_ε : R^n → R^n is defined by f_ε(x) = f(x) + εx for ε > 0. Lemma 3.3 assumed that ||y^k|| → ∞. The next lemma shows that this assumption holds when f is strongly monotone, ||x^k|| → ∞ and y^k = f(x^k).

Lemma 3.4 Let {x^k} be an arbitrary sequence satisfying lim_{k→∞} ||x^k|| = ∞. If f is strongly monotone with modulus ε > 0, then there exists k̄ > 0 such that ||f(x^k)|| > (ε/2) ||x^k|| holds for all k > k̄. Moreover, we have lim_{k→∞} ||y^k|| = ∞ when y^k = f(x^k) for all k.

Proof. Setting x = x^k and z = 0 in Definition 3.1 (b), we have

    (x^k)^T ( f(x^k) − f(0) ) ≥ ε ||x^k||².

Then, from the Cauchy-Schwarz inequality we have

    ε ||x^k||² ≤ (x^k)^T ( f(x^k) − f(0) ) ≤ ||x^k|| ( ||f(x^k)|| + ||f(0)|| ).

Dividing both sides by ||x^k||, we have

    ||f(x^k)|| ≥ ε ||x^k|| − ||f(0)||.

Since ||x^k|| → ∞, the last inequality implies that there exists k̄ such that ||f(x^k)|| > (ε/2) ||x^k|| for all k > k̄, and hence also that lim_{k→∞} ||y^k|| = ∞ when y^k = f(x^k). ∎

We finally present the fifth lemma, which gives two properties of infinite sequences. Since it can easily be shown, we omit the proof.

Lemma 3.5 Let N be the set of nonnegative integers and let N_1, ..., N_m be subsets of N such that (i) N_i ∩ N_j = ∅ for any i ≠ j ∈ {1, ..., m}, and (ii) N = ∪_{i=1}^m N_i. Then the following statements hold.

(a) Suppose that {α_k} is an arbitrary real sequence such that lim_{k→∞} α_k = +∞. Then, |N_i| = ∞ ⟹ lim_{k→∞, k∈N_i} α_k = +∞.

(b) Suppose that {β_k} is an arbitrary real sequence such that lim_{k→∞, k∈N_i} β_k = +∞ holds for every N_i with |N_i| = ∞. Then, lim_{k→∞} β_k = +∞.

We now give the following main theorem of this section by using the five lemmas shown above.

Theorem 3.1 Let Ψ̂_NR : R^n → R be defined by (3.3). If f is strongly monotone, then Ψ̂_NR is coercive.

Proof. Let {x^k} be an arbitrary sequence such that lim_{k→∞} ||x^k|| = ∞. Then our goal is to prove

    lim_{k→∞} Ψ̂_NR(x^k) = ∞.                                             (3.7)

We note from Lemma 3.4 that

    lim_{k→∞} ||f(x^k)|| = ∞.                                             (3.8)

We write x^k = (x_1^k, x_2^k) ∈ R × R^{n−1} and f(x^k) = (f_1(x^k), f_2(x^k)) ∈ R × R^{n−1}. Now we define the index sets N_1, N_2, N_3, N_4 and N_5 by

    N_1 = { k | x^k − f(x^k) ∈ K^n },
    N_2 = { k | x^k − f(x^k) ∈ −K^n \ {0} },
    N_3 = { k | x^k − f(x^k) ∉ K^n ∪ (−K^n),  x_1^k ≤ 0 },
    N_4 = { k | x^k − f(x^k) ∉ K^n ∪ (−K^n),  x_1^k > 0,  f_1(x^k) < 0 },
    N_5 = { k | x^k − f(x^k) ∉ K^n ∪ (−K^n),  x_1^k > 0,  f_1(x^k) ≥ 0 }.

Note that ∪_{i=1}^5 N_i = {0, 1, ...} and that N_i ∩ N_j = ∅ for any i ≠ j ∈ {1, 2, 3, 4, 5}. By setting α_k = ||x^k|| or α_k = ||f(x^k)|| in Lemma 3.5 (a), we obtain the following Property A.

Property A. If |N_i| = ∞, then lim_{k→∞, k∈N_i} ||x^k|| = ∞ and lim_{k→∞, k∈N_i} ||f(x^k)|| = ∞.

By setting β_k = Ψ̂_NR(x^k) in Lemma 3.5 (b), we obtain the following Property B.

Property B. If lim_{k→∞, k∈N_i} Ψ̂_NR(x^k) = ∞ for every N_i such that |N_i| = ∞, then lim_{k→∞} Ψ̂_NR(x^k) = ∞.

Property B implies that, for proving (3.7), it is sufficient to show

    |N_i| = ∞  ⟹  lim_{k→∞, k∈N_i} Ψ̂_NR(x^k) = ∞                          (3.9)

for each i ∈ {1, 2, 3, 4, 5}. Let y^k := f(x^k) for all k. We note that lim_{k→∞} ||y^k|| = ∞ holds from (3.8).

Case 1. |N_1| = ∞: Lemma 3.1 (a) implies that ϕ_NR(x^k, y^k) = y^k for all k ∈ N_1. It then follows from Property A that

    lim_{k→∞, k∈N_1} Ψ̂_NR(x^k) = lim_{k→∞, k∈N_1} ||ϕ_NR(x^k, y^k)|| = lim_{k→∞, k∈N_1} ||y^k|| = ∞.

15 Case. N = : Lemma 3. b implies that ϕ NR x k, y k = x k for all k N. It then follows from Property A that lim ΨNR x k ϕnr = lim x k, y k = k, k N k, k N lim k, k N x k =. Case 3. N 3 = : Lemma 3. implies that ϕ NR x k, y k x k for all k N 3. It then follows from Property A that lim ΨNR x k ϕnr = lim x k, y k k, k N 3 k, k N 3 lim x k =. k, k N 3 Case 4. N 4 = : Assuming lim inf k, k N4 ϕ NR x k, y k < +, we will derive contradiction. Under this assumption, there exists an infinite subsequence N 4 N 4 such that {ϕ NR x k, y k } k N 4 is bounded. Note that lim k, k N 4 x k = and lim k, k N 4 y k = from Property A and Lemma 3.5 a. Thus, since {x k } k N 4 and {y k } k N 4 satisfy assumptions i iii of Lemma 3.3, there exist K N 4 and M, satisfying a d in Lemma 3.3. Let us choose k K arbitrarily. From Lemma 3.3 c and x k yk / xk yk =, we have x k T k y > x k yk 45M > 00M 45M > 90M, 3.0 where the second inequality follows from Lemma 3.3 b-, and the third inequality follows from the fact that M > > /9. Since x k y k / K n K n, we have x k y k < xk y k x k y k from the definition. of second-order cone, that is, x k y k x k < y k. We then have x k T k y < x k yk + < x k yk + { x k x k } + { y k { x k } + M k x + = x k yk + M xk + M yk + M = x k M y k M + M y k } { y k + M y k } < < 79M, 3.

where the inequalities leading to (3.11) use Lemma 3.3 (a), the signs x_1^k > 0 and y_1^k < 0, and Lemma 3.3 (b-2). Since (3.10) and (3.11) contradict each other, we obtain

    lim_{k→∞, k∈N_4} Ψ̂_NR(x^k) = lim_{k→∞, k∈N_4} ||ϕ_NR(x^k, y^k)|| = ∞.

Case 5. |N_5| = ∞: Assuming lim inf_{k→∞, k∈N_5} ||ϕ_NR(x^k, y^k)|| < +∞, we will derive a contradiction. Under this assumption, there exists an infinite subsequence N_5' ⊆ N_5 such that {ϕ_NR(x^k, y^k)}_{k∈N_5'} is bounded. Note that lim_{k→∞, k∈N_5'} ||x^k|| = ∞ and lim_{k→∞, k∈N_5'} ||y^k|| = ∞ from Property A and Lemma 3.5 (a). Thus, since {x^k}_{k∈N_5'} and {y^k}_{k∈N_5'} satisfy assumptions (i)-(iii) of Lemma 3.3, there exist K ⊆ N_5' and M satisfying (a)-(d) in Lemma 3.3. Let us choose k ∈ K arbitrarily. From Lemma 3.3 and the fact that x_1^k > 0 and y_1^k > 0 for any k ∈ K, we have

    −M < x_1^k − ||x_2^k|| < M,   −M < y_1^k − ||y_2^k|| < M,              (3.12)
    {x_1^k}, {||x_2^k||}, {y_1^k} and {||y_2^k||} are unbounded on K,      (3.13)
    x_1^k > 10M,  ||x_2^k|| > 10M,  y_1^k > 10M,  ||y_2^k|| > 10M,         (3.14)
    1 + (x_2^k)^T y_2^k / ( ||x_2^k|| ||y_2^k|| ) < (10/9) M² / ( ||x_2^k|| ||y_2^k|| ),   (3.15)
    0.9 < x_1^k / ||x_2^k|| < 1.1  and  0.9 < y_1^k / ||y_2^k|| < 1.1.     (3.16)

Now let the spectral values of x^k be κ_1^k and κ_2^k with spectral vectors v_1^k and v_2^k, and let the spectral values of y^k be ν_1^k and ν_2^k with spectral vectors w_1^k and w_2^k. Note that, from (3.12) and the definition (2.1) of spectral values, we have

    −M < κ_1^k < M,   −M < ν_1^k < M.                                     (3.17)

Since f is strongly monotone, there exists ε > 0 such that

    (x − z)^T ( f(x) − f(z) ) ≥ ε ||x − z||²                               (3.18)

for any x, z ∈ R^n. Let the sequences {ξ^k} and {η^k} be defined by

    ξ^k := x^k − κ_2^k w_1^k,                                              (3.19)
    η^k := f(ξ^k).                                                         (3.20)

Substituting x = x^k and z = ξ^k in (3.18), we obtain

    ε ||κ_2^k w_1^k||² ≤ (x^k − ξ^k)^T (y^k − η^k)
                       = κ_2^k (w_1^k)^T ( ν_1^k w_1^k + ν_2^k w_2^k − η^k )
                       = (1/2) κ_2^k ν_1^k − κ_2^k (w_1^k)^T η^k
                       < (1/2) M κ_2^k + (1/√2) κ_2^k ||η^k||,             (3.21)

where the second equality follows from ||w_1^k||² = 1/2 and (w_1^k)^T w_2^k = 0, and the last inequality follows from κ_2^k > 0, −M < ν_1^k < M, the Cauchy-Schwarz inequality and ||w_1^k|| = 1/√2. Since the left-hand side of (3.21) equals (ε/2)(κ_2^k)² and κ_2^k is positive, dividing both sides of (3.21) by κ_2^k yields

    ||η^k|| > ( ε κ_2^k − M ) / √2.

Since {κ_2^k}_{k∈K} = {x_1^k + ||x_2^k||}_{k∈K} is unbounded, {||η^k||}_{k∈K} is also unbounded. Next, we derive a contradiction by showing the boundedness of {η^k}_{k∈K}. To do this, since η^k = f(ξ^k) and f is continuous, it is sufficient to show the boundedness of {ξ^k}_{k∈K}. Substituting x^k = κ_1^k v_1^k + κ_2^k v_2^k into (3.19), we have

    ξ^k = κ_1^k v_1^k + κ_2^k ( v_2^k − w_1^k ).

Since ||v_1^k|| = 1/√2 and (3.17) holds, {κ_1^k v_1^k}_{k∈K} is bounded. Then we can show the boundedness of {κ_2^k (v_2^k − w_1^k)}_{k∈K} as follows:

    || κ_2^k (v_2^k − w_1^k) ||² = (κ_2^k)² || (1/2) ( 0,  x_2^k/||x_2^k|| + y_2^k/||y_2^k|| ) ||²
                                 = ( (κ_2^k)² / 2 ) ( 1 + (x_2^k)^T y_2^k / ( ||x_2^k|| ||y_2^k|| ) )
                                 < ( (κ_2^k)² / 2 ) · (10/9) M² / ( ||x_2^k|| ||y_2^k|| )
                                 < ( (2.1)² / 2 ) (10/9) M² ||x_2^k|| / ||y_2^k||,

where the first equality follows from the definition (2.2) of spectral vectors, the first inequality follows from (3.15), and the last inequality follows from κ_2^k = x_1^k + ||x_2^k|| < 2.1 ||x_2^k||, which is a consequence of (3.16). Finally, by Lemma 3.4 and (3.16), the ratio ||x_2^k|| / ||y_2^k|| is bounded on K. Hence {ξ^k}_{k∈K} is bounded, and so {η^k}_{k∈K} = {f(ξ^k)}_{k∈K} is also bounded. However, this contradicts the unboundedness of {||η^k||}_{k∈K}. Hence, we obtain

    lim_{k→∞, k∈N_5} Ψ̂_NR(x^k) = lim_{k→∞, k∈N_5} ||ϕ_NR(x^k, y^k)|| = ∞.

Consequently, we have shown that (3.9) holds for all i ∈ {1, 2, 3, 4, 5}. This completes the proof. ∎

4 Algorithm for Solving SOCCP

In Section 3, we have shown that, if f is strongly monotone, then the merit function Ψ_NR given by (3.2) is coercive. Therefore, we can solve the strongly monotone SOCCP (3.1) by minimizing Ψ_NR with an appropriate descent algorithm. However, the merit function Ψ_NR is not differentiable in general. Therefore, methods that use the gradient of the objective function, such as the steepest descent method and Newton's method, are not directly applicable. Furthermore, the assumption that f be strongly monotone is quite restrictive from a practical standpoint. In order to overcome these shortcomings, we consider a smoothing method and a regularization method. In this section, we propose a globally convergent algorithm for the SOCCP based on these methods.

4.1 Smoothing and Regularization Methods

In this section, we introduce the smoothing and regularization methods. Firstly, we describe the smoothing method. For a nondifferentiable function h : R^n → R^m, we consider a function h_µ : R^n → R^m with a parameter µ > 0 which has the following properties:

(a) h_µ is differentiable for any µ > 0;
(b) lim_{µ→+0} h_µ(x) = h(x) for any x ∈ R^n.

Such a function h_µ is called a smoothing function of h. Instead of solving the original problem h(x) = 0, the smoothing method solves the subproblems h_µ(x) = 0 for µ > 0, and obtains a solution of the original problem by letting µ → +0. Fukushima, Luo and Tseng [8] extended Chen and Mangasarian's class [] of smoothing functions for the NCP to the SOCCP; the extension may be regarded as a smoothing function ϕ_µ of the natural residual ϕ_NR. In order to define ϕ_µ, we consider a continuously differentiable convex function ĝ : R → R satisfying

    lim_{α→−∞} ĝ(α) = 0,   lim_{α→+∞} ( ĝ(α) − α ) = 0,   0 < ĝ′(α) < 1.   (4.1)

For example, ĝ(α) = (√(α² + 4) + α)/2 and ĝ(α) = ln(e^α + 1) satisfy these conditions. Furthermore, by using ĝ, we define g : R^n → R^n by

    g(z) = ĝ(λ_1) u_1 + ĝ(λ_2) u_2,                                       (4.2)

where λ_1 and λ_2 are the spectral values of z, and u_1 and u_2 are the spectral vectors of z. Then, for µ > 0, the function ϕ_µ is given by

    ϕ_µ(x, y) = x − µ g( (x − y)/µ ).                                     (4.3)

Fukushima, Luo and Tseng [8] showed that ϕ_µ with µ > 0 is a smoothing function of ϕ_NR. For µ = 0, we write ϕ_0(x, y) := ϕ_NR(x, y) for convenience. By using ϕ_µ, we define Ψ_µ : R^n × R^n → R by

    Ψ_µ(x, y) = (1/2) ||ϕ_µ(x, y)||² + (1/2) ||f(x) − y||².               (4.4)

Since ϕ_µ is differentiable for µ > 0, Ψ_µ is also differentiable for µ > 0. Next, by applying Theorem 3.1, we give a sufficient condition for the coerciveness of Ψ_µ.
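As an illustration of (4.1)-(4.3), here is a Python sketch (our own; we take the first example kernel ĝ(α) = (√(α²+4)+α)/2 from the text, and the helper functions are repeated so that the snippet runs standalone):

```python
import numpy as np

def g_hat(a):
    """Smoothing kernel g_hat(a) = (sqrt(a^2 + 4) + a)/2, satisfying (4.1)."""
    return 0.5 * (np.sqrt(a * a + 4.0) + a)

def spectral_factorization(z):
    """Spectral values and vectors of z w.r.t. K^n (n >= 2)."""
    z1, z2 = z[0], z[1:]
    r = np.linalg.norm(z2)
    w = z2 / r if r > 0 else np.eye(z2.size)[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return (z1 - r, z1 + r), (u1, u2)

def g_soc(z):
    """g(z) = g_hat(l1) u1 + g_hat(l2) u2, cf. (4.2)."""
    (l1, l2), (u1, u2) = spectral_factorization(z)
    return g_hat(l1) * u1 + g_hat(l2) * u2

def phi_mu(x, y, mu):
    """Smoothed natural residual (4.3): phi_mu(x, y) = x - mu * g((x - y)/mu)."""
    return x - mu * g_soc((x - y) / mu)

def phi_nr(x, y):
    """Natural residual, the mu -> +0 limit of phi_mu."""
    (l1, l2), (u1, u2) = spectral_factorization(x - y)
    return x - (max(l1, 0.0) * u1 + max(l2, 0.0) * u2)
```

Numerically one can observe that ||ϕ_µ(x, y) − ϕ_NR(x, y)|| shrinks proportionally to µ, in line with the uniform bound ρµ quoted from [8, Proposition 5.1] in the proof of Theorem 4.1 below.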

Theorem 4.1 If f is strongly monotone, then, for any µ ≥ 0, the function Ψ_µ(x, y) defined by (4.4) is coercive.

Proof. From [8, Proposition 5.1], for any µ ≥ 0, there exists a constant ρ > 0 such that

    || ϕ_µ(x, y) − ϕ_NR(x, y) || ≤ ρµ,   ∀(x, y) ∈ R^n × R^n.

It then follows from Proposition 3.1 and Theorem 3.1 that Ψ_µ is coercive if f is strongly monotone. ∎

When f is strongly monotone, this theorem enables us to obtain a solution of SOCCP (3.1) by the smoothing method combined with a suitable descent method. However, strong monotonicity of f is quite a severe condition. As a remedy for this inconvenience, we employ the regularization method. In the regularization method, we consider the function f_ε : R^n → R^n defined by f_ε(x) := f(x) + εx. This method solves the SOCCP with f_ε for each ε > 0 as a subproblem, and obtains a solution of the SOCCP by letting ε → +0. If f is monotone, then f_ε is strongly monotone for any ε > 0. Therefore, the coerciveness of the function

    Ψ_{µ,ε}(x, y) = (1/2) ||ϕ_µ(x, y)||² + (1/2) ||f_ε(x) − y||²           (4.5)

is guaranteed by Theorem 4.1 and Proposition 3.1.

4.2 Stationary Point of Smoothing Function

In this section, we show that any stationary point of Ψ_{µ,ε} is a global minimum of Ψ_{µ,ε}. To this end, we first give an explicit representation of ∇Ψ_{µ,ε} under the assumption that f is differentiable. Let us define H_{µ,ε} : R^n × R^n → R^n × R^n by

    H_{µ,ε}(x, y) := ( ϕ_µ(x, y),  f_ε(x) − y ).                           (4.6)

Then, noticing Ψ_{µ,ε}(x, y) = (1/2) ||H_{µ,ε}(x, y)||², we have

    ∇Ψ_{µ,ε}(x, y) = ∇H_{µ,ε}(x, y) H_{µ,ε}(x, y)                         (4.7)

for µ > 0. From the definitions of H_{µ,ε}, ϕ_µ and f_ε, the transposed Jacobian ∇H_{µ,ε}(x, y) can be written as

    ∇H_{µ,ε}(x, y) = [ ∇_x ϕ_µ(x, y)    ∇f(x) + εI ]
                     [ ∇_y ϕ_µ(x, y)    −I         ]
                   = [ I − ∇g(z)    ∇f(x) + εI ]
                     [ ∇g(z)        −I         ],                          (4.8)

where z = (x − y)/µ and g is the function defined by (4.2). Moreover, from [8, Proposition 5.2], ∇g(z) is written as follows:

    ∇g(z) = ĝ′(z_1) I                                        if z_2 = 0,

    ∇g(z) = [ b                  c z_2^T / ||z_2||                   ]
            [ c z_2 / ||z_2||    aI + (b − a) z_2 z_2^T / ||z_2||²   ]   if z_2 ≠ 0,   (4.9)

where

    a = ( ĝ(λ_2) − ĝ(λ_1) ) / ( λ_2 − λ_1 ),   b = ( ĝ′(λ_2) + ĝ′(λ_1) ) / 2,   c = ( ĝ′(λ_2) − ĝ′(λ_1) ) / 2,   (4.10)

and λ_1, λ_2 are the spectral values of z = (x − y)/µ. It is important to see when ∇H_{µ,ε}(x, y) is nonsingular, since (4.7) implies that, if ∇H_{µ,ε}(x, y) is nonsingular everywhere, then every stationary point of Ψ_{µ,ε} is a global minimum of Ψ_{µ,ε}. The following proposition gives a sufficient condition for ∇H_{µ,ε}(x, y) to be nonsingular.

Proposition 4.1 If f : R^n → R^n is monotone, then the Jacobian ∇H_{µ,ε}(x, y) is nonsingular for any µ > 0, ε ≥ 0 and (x, y) ∈ R^n × R^n.

Proof. Firstly, we show that O ≺ ∇g(z) ≺ I for any z ∈ R^n. When z_2 = 0, it is clear that O ≺ ∇g(z) ≺ I, since ∇g(z) = ĝ′(z_1) I from (4.9) and 0 < ĝ′(z_1) < 1 from (4.1). So we only consider the case where z_2 ≠ 0. Noticing that ∇g(z) is given by (4.9), in order to show ∇g(z) ≻ O, it is sufficient to show that b is positive and that the Schur complement of b in ∇g(z) is positive definite. Since b = (1/2)( ĝ′(λ_1) + ĝ′(λ_2) ), we have b > 0 from (4.1). On the other hand, the Schur complement of b in ∇g(z) is given by

    aI + (b − a) z_2 z_2^T / ||z_2||² − (c²/b) z_2 z_2^T / ||z_2||²
        = a ( I − z_2 z_2^T / ||z_2||² ) + ( (b² − c²)/b ) z_2 z_2^T / ||z_2||².

If a = ( ĝ(λ_2) − ĝ(λ_1) ) / ( λ_2 − λ_1 ) ≤ 0, then the continuous differentiability of ĝ and the mean value theorem would guarantee the existence of τ ∈ [λ_1, λ_2] such that ĝ′(τ) ≤ 0. Since this contradicts (4.1), we have a > 0. Moreover, we have (b² − c²)/b = 2 ĝ′(λ_1) ĝ′(λ_2) / ( ĝ′(λ_1) + ĝ′(λ_2) ) > 0 from (4.1). Furthermore, both z_2 z_2^T / ||z_2||² and I − z_2 z_2^T / ||z_2||² are positive semidefinite and their sum is the identity matrix; hence any combination of them with positive coefficients is positive definite. Therefore, the Schur complement of b in ∇g(z) is positive definite, and we obtain ∇g(z) ≻ O. In a similar way, we can show

    I − ∇g(z) = [ 1 − b                (−c) z_2^T / ||z_2||                        ]
                [ (−c) z_2 / ||z_2||   (1 − a) I − (b − a) z_2 z_2^T / ||z_2||²    ] ≻ O,

by showing 1 − b > 0 and the positive definiteness of the Schur complement of 1 − b in I − ∇g(z).

Secondly, we show the nonsingularity of ∇H_{µ,ε}(x, y)^T instead of ∇H_{µ,ε}(x, y). Let us write G := ∇g( (x − y)/µ ) and F := ∇f(x) for convenience. Then ∇H_{µ,ε}(x, y)^T can be written as

    ∇H_{µ,ε}(x, y)^T = [ I − G     G  ]
                       [ F + εI   −I  ].
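The formula (4.9) can be verified numerically. The sketch below (ours; it assumes the CHKS-type kernel ĝ(α) = (√(α²+4)+α)/2 as a concrete choice, and repeats the helpers for standalone use) builds ∇g(z) and compares it with central finite differences; it also checks that the eigenvalues of ∇g(z) lie in (0, 1), as used in the proof of Proposition 4.1:

```python
import numpy as np

def g_hat(a):   return 0.5 * (np.sqrt(a * a + 4.0) + a)
def g_hat_d(a): return 0.5 * (a / np.sqrt(a * a + 4.0) + 1.0)  # g_hat'

def g_soc(z):
    """g(z) = g_hat(l1) u1 + g_hat(l2) u2, cf. (4.2)."""
    z1, z2 = z[0], z[1:]
    r = np.linalg.norm(z2)
    w = z2 / r if r > 0 else np.eye(z2.size)[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return g_hat(z1 - r) * u1 + g_hat(z1 + r) * u2

def grad_g(z):
    """Jacobian of g at z, formula (4.9)-(4.10)."""
    n = z.size
    z1, z2 = z[0], z[1:]
    r = np.linalg.norm(z2)
    if r == 0.0:
        return g_hat_d(z1) * np.eye(n)
    l1, l2 = z1 - r, z1 + r
    a = (g_hat(l2) - g_hat(l1)) / (l2 - l1)
    b = 0.5 * (g_hat_d(l2) + g_hat_d(l1))
    c = 0.5 * (g_hat_d(l2) - g_hat_d(l1))
    w = z2 / r
    J = np.zeros((n, n))
    J[0, 0] = b
    J[0, 1:] = c * w
    J[1:, 0] = c * w
    J[1:, 1:] = a * np.eye(n - 1) + (b - a) * np.outer(w, w)
    return J
```

A finite-difference check confirms the block structure of (4.9), and O ≺ ∇g(z) ≺ I holds at generic points, consistent with the proof above.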

Let ξ, η ∈ R^n satisfy ∇H_{µ,ε}(x, y)^T (ξ, η) = 0, that is,

    (I − G) ξ + G η = 0,                                                   (4.11)
    (F + εI) ξ − η = 0.                                                    (4.12)

Multiplying (4.11) by G^{-1} and combining it with (4.12), we have

    ( G^{-1} − I + F + εI ) ξ = 0.

Since O ≺ G ≺ I implies G^{-1} − I ≻ O, and monotonicity of f implies F ⪰ O, the matrix G^{-1} − I + F + εI is positive definite. So we have ξ = 0, and then η = 0 from (4.12). Hence, ∇H_{µ,ε}(x, y)^T is nonsingular, that is, ∇H_{µ,ε}(x, y) is nonsingular. ∎

Finally, by using the result of Proposition 4.1, we give the main result of this section.

Proposition 4.2 If f : R^n → R^n is monotone, then, for any µ > 0 and ε ≥ 0, every stationary point (x, y) of the function Ψ_{µ,ε} defined by (4.5) satisfies Ψ_{µ,ε}(x, y) = 0.

Proof. Note that ∇Ψ_{µ,ε}(x, y) = ∇H_{µ,ε}(x, y) H_{µ,ε}(x, y) = 0 at a stationary point. By Proposition 4.1, ∇H_{µ,ε}(x, y) is nonsingular. Hence, we have H_{µ,ε}(x, y) = 0, that is, Ψ_{µ,ε}(x, y) = (1/2) ||H_{µ,ε}(x, y)||² = 0. ∎

4.3 Globally Convergent Algorithm

In Section 4.1, we showed that, for µ > 0 and ε > 0, the function Ψ_{µ,ε} defined by (4.5) is coercive if f is monotone. Hence, by applying an appropriate descent method, we can obtain a minimum (x(µ, ε), y(µ, ε)) of the function Ψ_{µ,ε}. Moreover, letting (µ, ε) converge to (0, 0), we may expect that (x(µ, ε), y(µ, ε)) converges to a solution of the SOCCP. However, in practice, it is usually impossible to find an exact minimum of Ψ_{µ,ε}. So, we consider the following algorithm, in which the function Ψ_{µ,ε} is minimized only approximately at each iteration.

Algorithm 4.1 Let {ε_k}, {µ_k} and {α_k} be sequences of positive numbers converging to 0.

Step 0  Choose (x^0, y^0) ∈ R^n × R^n and set k := 0.

Step 1  Terminate the iteration if an appropriate stopping criterion is satisfied.

Step 2  Find a pair (x^{k+1}, y^{k+1}) such that

    Ψ_{µ_{k+1}, ε_{k+1}}(x^{k+1}, y^{k+1}) ≤ α_{k+1}.

Set k := k + 1 and go back to Step 1.

To obtain (x^{k+1}, y^{k+1}) in Step 2, we can use any unconstrained minimization technique such as the steepest descent method or Newton's method. We note that this algorithm is well-defined for the following reasons. Since Ψ_{µ,ε} is differentiable and coercive from Theorem 4.1, it has a stationary point. Moreover, Proposition 4.2 implies that the value of Ψ_{µ,ε} at a stationary point is 0. Hence, there exists (x^{k+1}, y^{k+1}) satisfying Step 2 for any α_{k+1} > 0.
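To make the scheme concrete, the following self-contained Python sketch implements a simplified variant of Algorithm 4.1 (our own illustration, not the thesis implementation: the inner approximate minimization of Ψ_{µ_k,ε_k} is replaced by a few undamped Newton steps on H_{µ_k,ε_k}(x, y) = 0, and the parameter schedules µ_k = ε_k = 0.1 · 2^{-k} and the inner tolerance are illustrative assumptions). It solves a small monotone linear SOCCP with f(x) = Mx + q:

```python
import numpy as np

def g_hat(a):   return 0.5 * (np.sqrt(a * a + 4.0) + a)   # CHKS-type kernel
def g_hat_d(a): return 0.5 * (a / np.sqrt(a * a + 4.0) + 1.0)

def spectral(z):
    z1, z2 = z[0], z[1:]
    r = np.linalg.norm(z2)
    w = z2 / r if r > 0 else np.eye(z2.size)[0]
    return z1 - r, z1 + r, w

def g_soc(z):
    l1, l2, w = spectral(z)
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return g_hat(l1) * u1 + g_hat(l2) * u2

def grad_g(z):
    """Formula (4.9)-(4.10) for the kernel above."""
    n = z.size
    l1, l2, w = spectral(z)
    if l2 == l1:
        return g_hat_d(z[0]) * np.eye(n)
    a = (g_hat(l2) - g_hat(l1)) / (l2 - l1)
    b = 0.5 * (g_hat_d(l2) + g_hat_d(l1))
    c = 0.5 * (g_hat_d(l2) - g_hat_d(l1))
    J = np.zeros((n, n))
    J[0, 0] = b; J[0, 1:] = c * w; J[1:, 0] = c * w
    J[1:, 1:] = a * np.eye(n - 1) + (b - a) * np.outer(w, w)
    return J

def solve_soccp(f, fjac, n, outer=15, inner=30):
    """Smoothing/regularization loop in the spirit of Algorithm 4.1."""
    x, y = np.zeros(n), np.zeros(n)
    mu = eps = 0.1
    for _ in range(outer):
        for _ in range(inner):
            z = (x - y) / mu
            # H_{mu,eps}(x, y) = (phi_mu(x, y), f(x) + eps*x - y), cf. (4.6)
            H = np.concatenate([x - mu * g_soc(z), f(x) + eps * x - y])
            if np.linalg.norm(H) < 0.1 * mu:        # inexact subproblem solve
                break
            G = grad_g(z)
            J = np.block([[np.eye(n) - G, G],
                          [fjac(x) + eps * np.eye(n), -np.eye(n)]])
            d = np.linalg.solve(J, -H)              # Newton step on H = 0
            x, y = x + d[:n], y + d[n:]
        mu *= 0.5
        eps *= 0.5                                   # drive (mu, eps) -> (0, 0)
    return x, y
```

With M symmetric positive semidefinite (so that f is monotone), the computed pair satisfies y ≈ f(x), x and y approximately in K^n, and x^T y ≈ 0 up to the final smoothing tolerance.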

22 We next show the global convergent property of Algorithm 4., by extending the result of [6] for NCP to SOCCP. To this end, we give two lemmas. The following lemma implies that Ψ µ,ε is uniformly continuous on a compact set not only in x and y but also in µ and ε Lemma 4. Let C R n R n be a compact set. Then, for any δ > 0, there exists ε > 0 and µ > 0 such that for any x, y C, ε [ 0, ε ] and µ [ 0, µ ]. Ψ µ,ε x, y ΨNR x, y δ Proof. Define the function Ω : R n R n R R R by Ωx, y, µ, ε := Ψ µ,ε x, y. Then, Ω is continuous and satisfies Ωx, y, 0, 0 = Ψ NR x, y. Since any continuous function is uniformly continuous on a compact set, Ω is uniformly continuous on C [ 0, ε ] [ 0, µ ]. The next lemma is known as the mountain pass theorem, which is useful for our analysis. For more detail, see Theorem 9..7 in []. Lemma 4. Mountain Pass Theorem Let θ : R n R be a continuously differentiable and coercive function. Let C R n be a nonempty and compact set and let m be the minimum value of θ on the boundary of C, that is, m := min x C θx. Assume moreover that there exist points a C and b / C such that θa < m and θb < m. Then, there exists a point c R n such that θc = 0 and θc m. Finally, by using the above lemmas. we establish the global convergence property of Algorithm 4.. Theorem 4. Let f : R n R n be a monotone function, and assume that the solution set S of SOCCP 3. is nonempty and bounded. Let {x k, y k } be a sequence generated by Algorithm 4.. Then, {x k, y k } is bounded, and every accumulation point is a solution of 3.. Proof. From a simple continuity argument, we can easily show that every accumulation point of {x k, y k } is a solution of SOCCP 3.. So we only show the boundedness of {x k, y k }. For a contradiction purpose, we assume that {x k, y k } is not bounded. Then, there exists a subsequence {x k, y k } k K such that lim k, k K x k, y k =. From the boundedness of S, there exists a compact set C R n R n such that S int C and for all k K sufficiently large. 
Moreover, we have

m := min_{(x,y) ∈ ∂C} Ψ_NR(x, y) > 0,    (4.4)

since any (x, y) ∈ ∂C does not belong to S and ∂C is compact. Now, applying Lemma 4.1 with δ := m/4 > 0, we have

Ψ_{µ_k,ε_k}(x, y) ≤ Ψ_NR(x, y) + m/4    (4.5)

and

Ψ_{µ_k,ε_k}(x, y) ≥ Ψ_NR(x, y) − m/4    (4.6)

for any (x, y) ∈ C and all k ∈ K sufficiently large. Let (x*, y*) ∈ S be a solution of SOCCP (3.1). Then, from (4.5), we have

Ψ_{µ_k,ε_k}(x*, y*) = Ψ_{µ_k,ε_k}(x*, y*) − Ψ_NR(x*, y*) ≤ m/4    (4.7)

for all k ∈ K sufficiently large. On the other hand, letting (x̃^k, ỹ^k) be a solution of min_{(x,y) ∈ ∂C} Ψ_{µ_k,ε_k}(x, y), we have, for all k ∈ K sufficiently large,

min_{(x,y) ∈ ∂C} Ψ_{µ_k,ε_k}(x, y) = Ψ_{µ_k,ε_k}(x̃^k, ỹ^k) ≥ −m/4 + Ψ_NR(x̃^k, ỹ^k) ≥ −m/4 + m = (3/4) m,    (4.8)

where the first inequality follows from (4.6) and the second inequality follows from (4.4) and (x̃^k, ỹ^k) ∈ ∂C. Furthermore, since Ψ_{µ_k,ε_k}(x^k, y^k) ≤ α_k in Algorithm 4.1 and α_k → 0, we have

Ψ_{µ_k,ε_k}(x^k, y^k) ≤ m/4    (4.9)

for all k ∈ K sufficiently large. Now, let k ∈ K be a sufficiently large integer satisfying (4.3), (4.7), (4.8) and (4.9). Then, applying Lemma 4.2 to Ψ_{µ_k,ε_k} with a := (x*, y*), b := (x^k, y^k) and (3/4) m in place of m, we obtain the existence of (x̂^k, ŷ^k) ∈ R^n × R^n such that ∇Ψ_{µ_k,ε_k}(x̂^k, ŷ^k) = 0 and Ψ_{µ_k,ε_k}(x̂^k, ŷ^k) ≥ (3/4) m > 0. However, this result contradicts Proposition 4.1. Hence, {(x^k, y^k)} is bounded.

5 Numerical Experiments

In this section, we present some numerical results for Algorithm 4.1.

5.1 Newton's Method

To obtain the next iterate in Algorithm 4.1, we use Newton's method with Armijo's step size rule [1]. The specific algorithm is described as follows. In the algorithm, we denote w^k := (x^k, y^k) for convenience.

Algorithm 5.1. Choose ρ, β, η ∈ (0, 1), σ ∈ (0, β) and ν ∈ (0, 1).

Step 0. Choose w^0 ∈ R^{2n}, µ_0 > 0 and ε_0 > 0. Set α_0 := η Ψ_{µ_0,ε_0}(w^0) and k := 0.

Step 1. If Ψ_NR(w^k) = 0, terminate the iteration.

Step 2.
  Step 2.0. Set v^0 := w^k and j := 0.
  Step 2.1. Find the solution d̂^j of the system of linear equations H_{µ_k,ε_k}(v^j) + ∇H_{µ_k,ε_k}(v^j)^T d̂^j = 0.
  Step 2.2. If Ψ_{µ_k,ε_k}(v^j + d̂^j) ≤ α_k, let w^{k+1} := v^j + d̂^j and go to Step 3. Otherwise, go to Step 2.3.
  Step 2.3. Let m_j be the smallest nonnegative integer such that Ψ_{µ_k,ε_k}(v^j) − Ψ_{µ_k,ε_k}(v^j + ρ^{m_j} d̂^j) ≥ −σ ρ^{m_j} ∇Ψ_{µ_k,ε_k}(v^j)^T d̂^j. Let v^{j+1} := v^j + ρ^{m_j} d̂^j.
  Step 2.4. If Ψ_{µ_k,ε_k}(v^{j+1}) ≤ α_k, set w^{k+1} := v^{j+1} and go to Step 3. Otherwise, set j := j + 1 and go back to Step 2.1.

Step 3. Let α_{k+1} := η Ψ_{µ_k,ε_k}(w^{k+1}), µ_{k+1} := min{ν α_{k+1}, µ_k/2} and ε_{k+1} := µ_{k+1}. Set k := k + 1 and go back to Step 1.

Note that {α_k} converges to 0, since α_{k+1} = η Ψ_{µ_k,ε_k}(w^{k+1}) ≤ η α_k and η ∈ (0, 1). The sequences {µ_k} and {ε_k} also converge to 0, since µ_{k+1} ≤ µ_k/2 and ε_{k+1} ≤ ε_k/2. Hence, this algorithm has global convergence from Theorem 4.1.

5.2 Results of Experiments

In order to evaluate the efficiency of Algorithm 5.1, we have conducted some numerical experiments. In our experiments, we chose ĝ(α) = (α + √(α² + 4))/2, β = 0.5, ρ = 0.5, η = 0.99 and σ = 0.5. Moreover, we let µ_0 := ||w^0|| and ε_0 := ||w^0||. We employed Ψ_NR(w^k) < 1.0e-10 as the termination criterion. Our algorithm was coded in MATLAB 5.3 and run on a SUN ULTRA 60.
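For concreteness, the control flow of Algorithm 5.1 can be sketched in code. The following is a hypothetical miniature, not the MATLAB program used in the experiments: it collapses w = (x, y) to the single variable of a one-dimensional NCP with f(x) = x − 1 (whose solution is x* = 1), smooths the plus function with a CHKS-type function as above, replaces the Jacobian formulas of [8] by a forward difference, and regularizes f by f(x) + εx. The parameter values are likewise illustrative.

```python
import numpy as np

def plus_mu(a, mu):
    # Smoothed plus function mu * g_hat(a/mu) with g_hat(t) = (t + sqrt(t**2 + 4))/2;
    # it tends to max(a, 0) as mu -> 0.
    return (a + np.sqrt(a * a + 4.0 * mu * mu)) / 2.0 if mu > 0 else max(a, 0.0)

def algorithm_5_1(f, x0, mu0=1.0, rho=0.5, eta=0.9, sigma=1e-4, nu=0.5,
                  tol=1e-16, max_outer=60, max_inner=20):
    """Regularized-smoothing Newton scheme in the spirit of Algorithm 5.1, on a 1-D NCP.

    H(x) = x - plus_mu(x - (f(x) + eps*x), mu) is the smoothed natural residual,
    Psi(x) = 0.5*H(x)**2, and Psi_NR corresponds to mu = eps = 0."""
    H = lambda x, mu, eps: x - plus_mu(x - (f(x) + eps * x), mu)
    psi = lambda x, mu, eps: 0.5 * H(x, mu, eps) ** 2
    x, mu = x0, mu0
    eps = mu
    alpha = eta * psi(x, mu, eps)
    for _ in range(max_outer):
        if psi(x, 0.0, 0.0) < tol:                     # Step 1: Psi_NR(w^k) small
            break
        v = x
        for _ in range(max_inner):                     # Step 2: damped Newton
            h = H(v, mu, eps)
            dh = (H(v + 1e-7, mu, eps) - h) / 1e-7     # forward-difference H'
            d = -h / dh                                # Newton direction (Step 2.1)
            t = 1.0                                    # Armijo rule (Step 2.3)
            while psi(v + t * d, mu, eps) > psi(v, mu, eps) + sigma * t * (h * dh) * d:
                t *= rho
            v += t * d
            if psi(v, mu, eps) <= alpha:               # inexact stopping (Steps 2.2/2.4)
                break
        x = v
        alpha = eta * psi(x, mu, eps)                  # Step 3: shrink the parameters
        mu = min(nu * alpha, mu / 2.0)
        eps = mu

    return x

print(round(algorithm_5_1(lambda x: x - 1.0, 5.0), 6))   # 1.0
```

The inner loop plays the role of Step 2 (Newton with Armijo line search), while the outer loop mirrors Step 3: the inexactness level α_k, the smoothing parameter µ_k and the regularization parameter ε_k are all driven to zero together.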

We solved three SOCCPs by Algorithm 5.1. The functions f and the second-order cone constraints K of the problems are given as follows. In Problem 1 (n = 3), f : R³ → R³ is an affine function whose coefficient matrix is symmetric and positive semidefinite but not positive definite, and K = K³. In Problem 2 (n = 3), f : R³ → R³ is a strictly monotone function comprised of cubic and constant terms only, and K = K³; noticing that its Jacobian is of the form ∇f(x) = diag(c₁x₁², c₂x₂², c₃x₃²) with positive constants cᵢ, it can easily be seen that f is monotone but not strongly monotone. In Problem 3 (n = 5), f : R⁵ → R⁵ is the function which appears in the KKT conditions for a convex SOCP whose objective function involves an exponential term exp(z₁ − z₃) and a convex quadratic term, and K = K³ × K²; from the convexity of the objective function, it can easily be shown that f is monotone. We note that all three functions are monotone but not strongly monotone.

Table 1 shows the behavior of Algorithm 5.1 starting from the initial point x⁰ := (100, ..., 100)^T and y⁰ := (100, ..., 100)^T. The table contains the following items for each k: the number of inner iterations j, the smoothing parameter µ_k, the regularization parameter ε_k, the inexact minimization parameter α_k, the functional value Ψ_{µ_k,ε_k}(w^k), and the value of the merit function Ψ_NR(w^k). We can see from Table 1 that the generated sequences converge very quickly; especially near the solution, the behavior exhibits superlinear convergence.

Next, in Table 2, we give the results for Algorithm 5.1 from several different starting points randomly chosen from a sphere centered at the origin. For Problems 1, 2 and 3, the radius of the sphere is set to 1e+02, 1e+03 and 50, respectively. Table 2 contains the following items: the norm ||(x⁰, y⁰)|| of the starting point, the total number of iterations (iter.), and the final value of the merit function Ψ_NR. From Table 2, we confirm that our algorithm has global convergence. Moreover, we see that the choice of an initial point does not substantially influence the number of iterations.
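The merit function Ψ_NR monitored in Tables 1 and 2 is built from the natural residual x − P_K(x − y), where P_K denotes the projection onto the second-order cone; by the spectral factorization of Section 2, P_K has a closed form. The sketch below is an illustration of that formula for a single cone K^n, not the code used in the experiments:

```python
import numpy as np

def proj_soc(z):
    """Projection of z = (z1, z2) onto the second-order cone K^n = {(z1, z2) : ||z2|| <= z1},
    computed via the spectral factorization: keep only the nonnegative spectral values."""
    z1, z2 = z[0], z[1:]
    nz2 = np.linalg.norm(z2)
    if nz2 <= z1:                      # z already lies in the cone
        return z.copy()
    if nz2 <= -z1:                     # z lies in the polar cone: project to the origin
        return np.zeros_like(z)
    alpha = (z1 + nz2) / 2.0           # positive spectral value of z
    return alpha * np.concatenate(([1.0], z2 / nz2))

def psi_nr(x, y):
    """Natural-residual merit function Psi_NR(x, y) = 0.5*||x - P_K(x - y)||^2."""
    r = x - proj_soc(x - y)
    return 0.5 * np.dot(r, r)

# A complementary pair in K^3: x and y both lie on the boundary of the cone
# and x^T y = 0, so the merit function vanishes (up to roundoff).
x = np.array([1.0, 0.6, 0.8])
y = np.array([1.0, -0.6, -0.8])
print(psi_nr(x, y) < 1e-15)            # True
```

A solution of the SOCCP is exactly a pair with Ψ_NR(x, y) = 0, which is why the tables report Ψ_NR(w^k) as the accuracy measure.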

Table 1: Convergent behavior of Algorithm 5.1 on Problems 1-3. For each outer iteration k, the table lists the number of inner iterations j, the parameters µ_k, ε_k and α_k, and the values Ψ_{µ_k,ε_k}(w^k) and Ψ_NR(w^k).

Table 2: Global convergence of Algorithm 5.1 from randomly chosen starting points. For each problem, the table lists the norm ||(x⁰, y⁰)|| of the starting point, the total number of iterations (iter.), and the final value of Ψ_NR.
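As noted for Problem 2, a map with cubic components is monotone but not strongly monotone, because its Jacobian vanishes at the origin. The check below uses a hypothetical cubic map of the same shape (the coefficients are illustrative, not those of Problem 2):

```python
import numpy as np

# Hypothetical cubic map: the Jacobian diag(3*c_i*x_i**2) is positive
# semidefinite everywhere but vanishes at x = 0.
c = np.array([0.1, 0.1, 0.03])          # illustrative coefficients
f = lambda x: c * x**3

rng = np.random.default_rng(0)

# Monotonicity: (x - z)^T (f(x) - f(z)) >= 0 on random pairs.
gaps = [np.dot(x - z, f(x) - f(z))
        for x, z in (rng.standard_normal((2, 3)) for _ in range(1000))]
print(min(gaps) >= -1e-12)              # True

# Not strongly monotone: the modulus (x - z)^T (f(x) - f(z)) / ||x - z||^2
# decays like t**2 along x = t*(1,1,1), z = 0, so no uniform eps > 0 exists.
for t in (1.0, 1e-2, 1e-4):
    x, z = t * np.ones(3), np.zeros(3)
    print(np.dot(x - z, f(x) - f(z)) / np.dot(x - z, x - z))  # about 7.7e-2, 7.7e-6, 7.7e-10
```

The vanishing modulus is precisely why strong monotonicity, and hence the coerciveness result for Ψ_NR, cannot be invoked directly for such test problems; this is where the regularization term earns its keep.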

6 Final Remarks

In this paper, we have shown that the merit function Ψ_NR defined in Section 3 for SOCCP (3.1) is coercive if f is strongly monotone, and that the smoothing function Ψ_µ defined by (4.4) is also coercive under the same condition. Moreover, based on the idea of the smoothing method and the regularization method, we have proposed a globally convergent algorithm for solving monotone SOCCP (3.1).

In Sections 3 and 4, the function F in SOCCP (1.1) is assumed to be of the form F(x, y, ζ) = f(x) − y. This assumption may seem rather restrictive. However, the KKT conditions for SOCP (1.3) can be written as an SOCCP with F(x, y, ζ) = f(x) − y as follows. In SOCP (1.3), let z = z⁺ − z⁻ with z⁺ ∈ R^s₊ and z⁻ ∈ R^s₊, and denote ẑ := (z⁺, z⁻) ∈ R^{2s}₊, where R^n₊ is the n-dimensional nonnegative orthant. Moreover, define θ̂ : R^{2s} → R by θ̂(ẑ) := θ(z⁺ − z⁻), and γ̂ : R^{2s} → R^t by γ̂(ẑ) := γ(z⁺ − z⁻). Then SOCP (1.3) can be reformulated as

Minimize θ̂(ẑ) subject to (γ̂(ẑ), ẑ) ∈ K × R^{2s}₊,    (6.1)

and the KKT conditions for (6.1) are written as

∇θ̂(ẑ) − ∇γ̂(ẑ) λ̂ − ŝ = 0,
λ̂ ∈ K, γ̂(ẑ) ∈ K, λ̂^T γ̂(ẑ) = 0,
ŝ ∈ R^{2s}₊, ẑ ∈ R^{2s}₊, ŝ^T ẑ = 0,    (6.2)

where λ̂ ∈ R^t and ŝ ∈ R^{2s} are the Lagrange multiplier vectors. Now, let µ̂ := γ̂(ẑ), and notice that Proposition 2.1 holds. Then (6.2) can be rewritten as

γ̂(ẑ) − µ̂ = 0, ∇θ̂(ẑ) − ∇γ̂(ẑ) λ̂ − ŝ = 0,
(λ̂, ẑ) ∈ K × R^{2s}₊, (µ̂, ŝ) ∈ K × R^{2s}₊, (λ̂, ẑ)^T (µ̂, ŝ) = 0.    (6.3)

Setting

x = (λ̂, ẑ), y = (µ̂, ŝ), f(x) = (γ̂(ẑ), ∇θ̂(ẑ) − ∇γ̂(ẑ) λ̂),    (6.4)

the KKT conditions (6.3) for SOCP (1.3) can be reduced to the SOCCP with F(x, y, ζ) = f(x) − y. Note that the KKT conditions (6.3) for SOCP (1.3) contain more variables than the original KKT conditions (1.4). Furthermore, some desirable properties of the functions involved in SOCP (1.3) may be lost. For example, even if γ in SOCP (1.3) is strictly convex, γ̂ in (6.3) is convex but not strictly convex in general. Hence, it will be useful to develop a method that can directly deal with the KKT conditions (1.4) or, more generally, with SOCCP involving a function F(x, y, ζ) that is not restricted to be of the form F(x, y, ζ) = f(x) − y. In Sections 3 and 4, it is also assumed that K = K^n.
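The loss of strict convexity under the splitting z = z⁺ − z⁻ can be seen already in one variable. In the toy check below, θ(z) = z² is an illustrative stand-in (not a function from the thesis): θ̂ is constant along the direction (1, 1), because (z⁺ + t, z⁻ + t) represents the same z for every t.

```python
theta = lambda z: z * z                        # strictly convex in z
theta_hat = lambda zp, zm: theta(zp - zm)      # composed with the splitting z = z+ - z-

# Shifting both halves by the same t leaves z, and hence theta_hat, unchanged,
# so theta_hat cannot be strictly convex.
vals = [theta_hat(2.0 + t, 1.0 + t) for t in (0.0, 1.0, 5.0)]
print(vals)   # [1.0, 1.0, 1.0]
```

The same flat direction appears for γ̂, which is why strict convexity of γ is not inherited by the reformulation (6.1).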
For the general case where K = K^{n_1} × ⋯ × K^{n_m}, it can be shown that the merit function Ψ_NR defined by (2.9) with F(x, y, ζ) = f(x) − y is coercive when f has the following property.

Property 1. For K = K^{n_1} × ⋯ × K^{n_m}, let x = (x_1, ..., x_m) ∈ R^{n_1} × ⋯ × R^{n_m} and z = (z_1, ..., z_m) ∈ R^{n_1} × ⋯ × R^{n_m}. Moreover, let f : R^n → R^{n_1} × ⋯ × R^{n_m} be represented as f(x) = (f_1(x), ..., f_m(x)) with f_i : R^n → R^{n_i}, i = 1, ..., m. Then there exists ε > 0 such that

max_{1 ≤ i ≤ m} (x_i − z_i)^T (f_i(x) − f_i(z)) ≥ ε ||x − z||²

holds for any x, z ∈ R^n.

Note that strongly monotone functions have Property 1. When n_1 = ⋯ = n_m = 1, a function satisfying Property 1 reduces to a uniform P-function. As a future research issue, it is interesting to see whether the condition for coerciveness of the merit function Ψ_NR can be weakened. In the case of NCP, it has been shown that some merit functions are coercive under a condition weaker than the uniform P property [6].

Acknowledgments

First of all, I would like to express my sincere thanks and appreciation to Professor Masao Fukushima and Assistant Professor Nobuo Yamashita. Although I often troubled them due to my inexperience, they kindly looked after me and gave me plenty of precise advice. Without their help and advice, I could never have written up this paper. I would also like to tender my acknowledgments to Associate Professor Tetsuya Takine, who gave me a lot of comments at research conferences beyond the difference of our research fields. Finally, I would like to thank all the members of Fukushima's Laboratory.

References

[1] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, Massachusetts, 1995.

[2] C. Chen and O. L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Computational Optimization and Applications, Vol. 5, 1996.

[3] X. Chen, Smoothing methods for complementarity problems and their applications: A survey, Journal of the Operations Research Society of Japan, Vol. 43, 2000.

[4] X. Chen and L. Qi, Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities, Mathematics of Computation, Vol. 67, 1998.

[5] A. L. Dontchev and T. Zolezzi, Well-Posed Optimization Problems, Lecture Notes in Mathematics 1543, Springer-Verlag, New York, 1993.

[6] F. Facchinei and C. Kanzow, Beyond monotonicity in regularization methods for nonlinear complementarity problems, SIAM Journal on Control and Optimization, Vol. 37, 1999.

[7] J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford Mathematical Monographs, Oxford University Press, New York, 1994.

[8] M. Fukushima, Z.-Q. Luo and P. Tseng, Smoothing functions for second-order cone complementarity problems, SIAM Journal on Optimization, Vol. 12, 2002.

[9] C. Kanzow and H. Pieper, Jacobian smoothing methods for general complementarity problems, SIAM Journal on Optimization, Vol. 9, 1999.

[10] M. S. Lobo, L. Vandenberghe, S. Boyd and H. Lebret, Applications of second-order cone programming, Linear Algebra and Its Applications, Vol. 284, 1998.

[11] R. S. Palais and C.-L. Terng, Critical Point Theory and Submanifold Geometry, Lecture Notes in Mathematics 1353, Springer-Verlag, Berlin, 1988.

[12] M. Patriksson, Merit functions and descent algorithms for a class of variational inequality problems, Optimization, Vol. 41, 1997.

[13] L. Qi and X. Chen, A globally convergent successive approximation method for severely nonsmooth equations, SIAM Journal on Control and Optimization, Vol. 33, 1995.

[14] L. Qi and D. Sun, Smoothing functions and a smoothing Newton method for complementarity problems and variational inequality problems, School of Mathematics, University of New South Wales.


A Regularization Newton Method for Solving Nonlinear Complementarity Problems Appl Math Optim 40:315 339 (1999) 1999 Springer-Verlag New York Inc. A Regularization Newton Method for Solving Nonlinear Complementarity Problems D. Sun School of Mathematics, University of New South

More information

Affine scaling interior Levenberg-Marquardt method for KKT systems. C S:Levenberg-Marquardt{)KKTXÚ

Affine scaling interior Levenberg-Marquardt method for KKT systems. C S:Levenberg-Marquardt{)KKTXÚ 2013c6 $ Ê Æ Æ 117ò 12Ï June, 2013 Operations Research Transactions Vol.17 No.2 Affine scaling interior Levenberg-Marquardt method for KKT systems WANG Yunjuan 1, ZHU Detong 2 Abstract We develop and analyze

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean

On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean Renato D.C. Monteiro B. F. Svaiter March 17, 2009 Abstract In this paper we analyze the iteration-complexity

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method Advances in Operations Research Volume 01, Article ID 357954, 15 pages doi:10.1155/01/357954 Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction

More information

A smoothing augmented Lagrangian method for solving simple bilevel programs

A smoothing augmented Lagrangian method for solving simple bilevel programs A smoothing augmented Lagrangian method for solving simple bilevel programs Mengwei Xu and Jane J. Ye Dedicated to Masao Fukushima in honor of his 65th birthday Abstract. In this paper, we design a numerical

More information

FIXED POINT ITERATIONS

FIXED POINT ITERATIONS FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

Smoothed Fischer-Burmeister Equation Methods for the. Houyuan Jiang. CSIRO Mathematical and Information Sciences

Smoothed Fischer-Burmeister Equation Methods for the. Houyuan Jiang. CSIRO Mathematical and Information Sciences Smoothed Fischer-Burmeister Equation Methods for the Complementarity Problem 1 Houyuan Jiang CSIRO Mathematical and Information Sciences GPO Box 664, Canberra, ACT 2601, Australia Email: Houyuan.Jiang@cmis.csiro.au

More information

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties

More information

Radius Theorems for Monotone Mappings

Radius Theorems for Monotone Mappings Radius Theorems for Monotone Mappings A. L. Dontchev, A. Eberhard and R. T. Rockafellar Abstract. For a Hilbert space X and a mapping F : X X (potentially set-valued) that is maximal monotone locally around

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

Acceleration Method for Convex Optimization over the Fixed Point Set of a Nonexpansive Mapping

Acceleration Method for Convex Optimization over the Fixed Point Set of a Nonexpansive Mapping Noname manuscript No. will be inserted by the editor) Acceleration Method for Convex Optimization over the Fixed Point Set of a Nonexpansive Mapping Hideaki Iiduka Received: date / Accepted: date Abstract

More information

School of Mathematics, the University of New South Wales, Sydney 2052, Australia

School of Mathematics, the University of New South Wales, Sydney 2052, Australia Computational Optimization and Applications, 13, 201 220 (1999) c 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. On NCP-Functions * DEFENG SUN LIQUN QI School of Mathematics,

More information

Affine covariant Semi-smooth Newton in function space

Affine covariant Semi-smooth Newton in function space Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held

More information

c 1999 Society for Industrial and Applied Mathematics

c 1999 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 9, No. 3, pp. 729 754 c 1999 Society for Industrial and Applied Mathematics A POTENTIAL REDUCTION NEWTON METHOD FOR CONSTRAINED EQUATIONS RENATO D. C. MONTEIRO AND JONG-SHI PANG Abstract.

More information

Polynomial complementarity problems

Polynomial complementarity problems Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016

More information

Sparse Approximation via Penalty Decomposition Methods

Sparse Approximation via Penalty Decomposition Methods Sparse Approximation via Penalty Decomposition Methods Zhaosong Lu Yong Zhang February 19, 2012 Abstract In this paper we consider sparse approximation problems, that is, general l 0 minimization problems

More information

Symmetric Tridiagonal Inverse Quadratic Eigenvalue Problems with Partial Eigendata

Symmetric Tridiagonal Inverse Quadratic Eigenvalue Problems with Partial Eigendata Symmetric Tridiagonal Inverse Quadratic Eigenvalue Problems with Partial Eigendata Zheng-Jian Bai Revised: October 18, 2007 Abstract In this paper we concern the inverse problem of constructing the n-by-n

More information

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems On Penalty and Gap Function Methods for Bilevel Equilibrium Problems Bui Van Dinh 1 and Le Dung Muu 2 1 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam 2 Institute of

More information

Expected Residual Minimization Method for Stochastic Linear Complementarity Problems 1

Expected Residual Minimization Method for Stochastic Linear Complementarity Problems 1 Expected Residual Minimization Method for Stochastic Linear Complementarity Problems 1 Xiaojun Chen and Masao Fukushima 3 January 13, 004; Revised November 5, 004 Abstract. This paper presents a new formulation

More information

A Power Penalty Method for a Bounded Nonlinear Complementarity Problem

A Power Penalty Method for a Bounded Nonlinear Complementarity Problem A Power Penalty Method for a Bounded Nonlinear Complementarity Problem Song Wang 1 and Xiaoqi Yang 2 1 Department of Mathematics & Statistics Curtin University, GPO Box U1987, Perth WA 6845, Australia

More information

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present

More information

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F.

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F. Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM M. V. Solodov and B. F. Svaiter January 27, 1997 (Revised August 24, 1998) ABSTRACT We propose a modification

More information

An inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces

An inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces An inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces J.Y. Bello-Cruz G. Bouza Allende November, 018 Abstract The variable order structures

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information