Unified smoothing functions for absolute value equation associated with second-order cone


Applied Numerical Mathematics, vol. 135, January, pp. 206-227, 2019

Unified smoothing functions for absolute value equation associated with second-order cone

Chieu Thanh Nguyen [a], Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

B. Saheya [b], College of Mathematical Science, Inner Mongolia Normal University, Hohhot, Inner Mongolia, P. R. China

Yu-Lin Chang [c], Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Jein-Shan Chen [d], Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

February 19, 2018 (revised on July 1, 2018)

Abstract. In this paper, we explore a unified way to construct smoothing functions for solving the absolute value equation associated with second-order cone (SOCAVE). Numerical comparisons are presented, which illustrate what kinds of smoothing functions work well along with the smoothing Newton algorithm. In particular, the numerical experiments show that the well-known loss function widely used in the engineering community is the worst one among the constructed smoothing functions, which indicates that the other proposed smoothing functions can be employed for solving engineering problems.

Keywords. Second-order cone, absolute value equations, smoothing Newton algorithm.

[a] thanhchieu90@gmail.com.
[b] saheya@imnu.edu.cn. The author's work is supported by the Natural Science Foundation of Inner Mongolia (Award Number: 2017MS015).
[c] ylchang@math.ntnu.edu.tw.
[d] Corresponding author. jschen@math.ntnu.edu.tw. The author's work is supported by the Ministry of Science and Technology, Taiwan.

1 Introduction

Recently, the paper [36] investigated a family of smoothing functions along with a smoothing-type algorithm to tackle the absolute value equation associated with second-order cone (SOCAVE) and showed the efficiency of such an approach. Motivated by this article, we continue to ask two natural questions. (i) Are there other suitable smoothing functions that can be employed for solving the SOCAVE? (ii) Is there a unified way to construct smoothing functions for solving the SOCAVE? In this paper, we provide affirmative answers to these two queries. In order to smoothly convey the story of how we figured out the answers, we begin by recalling where the SOCAVE comes from.

The standard absolute value equation (AVE) is of the form
\[ Ax + B|x| = b, \tag{1} \]
where $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times n}$, $B \neq 0$, and $b \in \mathbb{R}^n$. Here $|x|$ means the componentwise absolute value of the vector $x \in \mathbb{R}^n$. When $B = -I$, where $I$ is the identity matrix, the AVE (1) reduces to the special form $Ax - |x| = b$.

It is known that the AVE (1) was first introduced by Rohn in [41], but was so termed by Mangasarian [34]. During the past decade, many researchers have paid attention to this equation, for example, Caccetta, Qu and Zhou [2], Hu and Huang [12], Jiang and Zhang [20], Ketabchi and Moosaei [21], Mangasarian [26, 27, 28, 29, 30, 31, 32, 33], Mangasarian and Meyer [34], Prokopyev [37], and Rohn [43]. We elaborate more on the developments of the AVE. Mangasarian and Meyer [34] show that the AVE (1) is equivalent to the bilinear program, to the generalized LCP (linear complementarity problem), and to the standard LCP provided 1 is not an eigenvalue of A. With these equivalent reformulations, they also show that the AVE (1) is NP-hard in its general form and provide existence results. Prokopyev [37] further improves the above equivalence, showing that the AVE (1) can be equivalently recast as an LCP without any assumption on A and B, and also provides a relationship with mixed integer programming. In general, if solvable, the AVE (1) can have either a unique solution or

multiple (e.g., exponentially many) solutions. Indeed, various sufficient conditions on solvability and non-solvability of the AVE (1) with unique and multiple solutions are discussed in [34, 37, 42]. Some variants of the AVE, like the absolute value equation associated with second-order cone and the absolute value programs, are investigated in [14] and [46], respectively.

Recently, another type of absolute value equation, a natural extension of the standard AVE (1), was considered in [14, 35, 36]. More specifically, the following absolute value equation associated with second-order cones, abbreviated as SOCAVE, is studied:
\[ Ax + B|x| = b, \tag{2} \]
where $A, B \in \mathbb{R}^{n\times n}$ and $b \in \mathbb{R}^n$ are the same as those in (1), and $|x|$ denotes the absolute value of $x$ coming from the square root of the Jordan product of $x$ and $x$.

What is the difference between the standard AVE (1) and the SOCAVE (2)? Their mathematical formats look the same. In fact, the main difference is that $|x|$ in the standard AVE (1) means the componentwise absolute value $|x_i|$ of each $x_i \in \mathbb{R}$, i.e., $|x| = (|x_1|, |x_2|, \dots, |x_n|)^T \in \mathbb{R}^n$; however, $|x|$ in the SOCAVE (2) denotes the vector satisfying $|x| := \sqrt{x \circ x}$ associated with second-order cone under the Jordan product. To understand its meaning, we need to introduce the definition of the second-order cone (SOC). The second-order cone in $\mathbb{R}^n$ ($n \geq 1$), also called the Lorentz cone, is defined as
\[ \mathcal{K}^n := \left\{ (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1} \ \middle|\ \|x_2\| \leq x_1 \right\}, \]
where $\|\cdot\|$ denotes the Euclidean norm. If $n = 1$, then $\mathcal{K}^n$ is the set of nonnegative reals $\mathbb{R}_+$. In general, a general second-order cone $\mathcal{K}$ could be the Cartesian product of SOCs, i.e.,
\[ \mathcal{K} := \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r}. \]
For simplicity, we focus on the single SOC $\mathcal{K}^n$ because all the analysis can be carried over to the setting of Cartesian products. The SOC is a special case of symmetric cones and can be analyzed under the Jordan product; see [8]. In particular, for any two vectors $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, the Jordan product of $x$ and $y$ associated with $\mathcal{K}^n$ is defined as
\[ x \circ y := \begin{bmatrix} x^T y \\ y_1 x_2 + x_1 y_2 \end{bmatrix}. \]
The Jordan product, unlike scalar or matrix multiplication, is not associative, which is a main source of complication in the analysis of optimization problems involving SOC; see [4, 6, 9] and references therein for more details. The identity element under this Jordan product is $e = (1, 0, \dots, 0)^T \in \mathbb{R}^n$. With these definitions, $x^2$ means the Jordan product of $x$ with itself, i.e., $x^2 := x \circ x$; and $\sqrt{x}$ with $x \in \mathcal{K}^n$ denotes the unique vector in $\mathcal{K}^n$ such that $\sqrt{x} \circ \sqrt{x} = x$. In other words, the vector $|x|$ in the SOCAVE (2) is computed by $|x| := \sqrt{x \circ x}$.
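To make the algebra above concrete, here is a minimal NumPy sketch (not from the paper; the function names and test setup are ours) of the Jordan product associated with $\mathcal{K}^n$. It checks that $e$ acts as the identity and illustrates the non-associativity just mentioned:

```python
import numpy as np

def jordan_product(x, y):
    """Jordan product x o y = (x^T y, x1*y2 + y1*x2) associated with K^n."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
e = np.zeros(5); e[0] = 1.0
assert np.allclose(jordan_product(e, x), x)   # e is the identity element

# the product is commutative but generally not associative:
y, z = rng.standard_normal(5), rng.standard_normal(5)
lhs = jordan_product(jordan_product(x, y), z)
rhs = jordan_product(x, jordan_product(y, z))
print(np.allclose(lhs, rhs))                  # typically False
```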

As remarked in the literature, the significance of the AVE (1) arises from the fact that the AVE is capable of formulating many optimization problems, such as linear programs, quadratic programs, bimatrix games, and so on. Likewise, the SOCAVE (2) plays a similar role in various optimization problems involving second-order cones. Many numerical methods have been proposed for solving the standard AVE (1) and the SOCAVE (2); please refer to [36] for a quick review. Basically, we follow the smoothing Newton algorithm employed in [36] to deal with the SOCAVE (2). This kind of algorithm has been a powerful tool for solving many other optimization problems, including symmetric cone complementarity problems [11, 23, 24], systems of inequalities under the order induced by symmetric cones [18, 25, 47], and so on. It is also employed for the standard AVE (1) in [19, 44]. The new upshot of this paper lies in discovering more suitable smoothing functions and exploring a unified way to construct smoothing functions. Of course, the numerical performance of the different smoothing functions is compared. These are totally new to the literature and are the main contribution of this paper.

To close this section, we recall some basic concepts and background materials regarding the second-order cone, which will be used in the subsequent analysis. More details can be found in [4, 6, 8, 9, 14]. First, we recall the expression of the spectral decomposition of $x$ with respect to SOC. For $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, the spectral decomposition of $x$ with respect to SOC is given by
\[ x = \lambda_1(x)\, u_x^{(1)} + \lambda_2(x)\, u_x^{(2)}, \tag{3} \]
where $\lambda_i(x) = x_1 + (-1)^i \|x_2\|$ for $i = 1, 2$, and
\[ u_x^{(i)} = \begin{cases} \dfrac{1}{2}\left(1,\ (-1)^i \dfrac{x_2^T}{\|x_2\|}\right)^T & \text{if } x_2 \neq 0, \\[6pt] \dfrac{1}{2}\left(1,\ (-1)^i \omega^T\right)^T & \text{if } x_2 = 0, \end{cases} \tag{4} \]
with $\omega \in \mathbb{R}^{n-1}$ being any vector satisfying $\|\omega\| = 1$. The two scalars $\lambda_1(x)$ and $\lambda_2(x)$ are called the spectral values of $x$, while the two vectors $u_x^{(1)}$ and $u_x^{(2)}$ are called the spectral vectors of $x$. Moreover, it is obvious that the spectral decomposition of $x \in \mathbb{R}^n$ is unique if $x_2 \neq 0$. It is known that the spectral values and spectral vectors possess the following properties:

(i) $u_x^{(1)} \circ u_x^{(2)} = 0$ and $u_x^{(i)} \circ u_x^{(i)} = u_x^{(i)}$ for $i = 1, 2$;

(ii) $\|u_x^{(1)}\|^2 = \|u_x^{(2)}\|^2 = \frac{1}{2}$ and $\|x\|^2 = \frac{1}{2}\left(\lambda_1(x)^2 + \lambda_2(x)^2\right)$.
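The spectral decomposition (3)-(4) is straightforward to compute. The following sketch (our own illustration, assuming NumPy; not part of the paper) builds $\lambda_i(x)$ and $u_x^{(i)}$ and verifies (3) together with property (ii):

```python
import numpy as np

def spectral_decomposition(x):
    """Spectral values/vectors of x = (x1, x2) w.r.t. K^n, cf. (3)-(4)."""
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    # any unit vector works when x2 = 0; we pick the first coordinate vector
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(len(x2) - 1)]
    lam1, lam2 = x1 - nrm, x1 + nrm
    u1 = 0.5 * np.r_[1.0, -w]
    u2 = 0.5 * np.r_[1.0,  w]
    return lam1, lam2, u1, u2

x = np.array([0.5, 1.0, -2.0])
lam1, lam2, u1, u2 = spectral_decomposition(x)
assert np.allclose(lam1 * u1 + lam2 * u2, x)               # identity (3)
assert np.isclose(x @ x, 0.5 * (lam1**2 + lam2**2))        # property (ii)
```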

Next is the concept of the projection onto the second-order cone. Let $x_+$ denote the projection of $x$ onto $\mathcal{K}^n$, where the dual cone $(\mathcal{K}^n)^*$ of $\mathcal{K}^n$ is defined by $(\mathcal{K}^n)^* := \{ y \in \mathbb{R}^n \mid \langle x, y \rangle \geq 0,\ \forall x \in \mathcal{K}^n \}$. In fact, the dual cone of $\mathcal{K}^n$ is itself, i.e., $(\mathcal{K}^n)^* = \mathcal{K}^n$. Due to the special structure of $\mathcal{K}^n$, the explicit formula of the projection of $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ onto $\mathcal{K}^n$ is obtained in [4, 6, 8, 9, 10] as below:
\[ x_+ = \begin{cases} x & \text{if } x \in \mathcal{K}^n, \\ 0 & \text{if } x \in -\mathcal{K}^n, \\ u & \text{otherwise}, \end{cases} \qquad \text{where } u = \begin{bmatrix} \frac{x_1 + \|x_2\|}{2} \\[2pt] \frac{x_1 + \|x_2\|}{2} \cdot \frac{x_2}{\|x_2\|} \end{bmatrix}. \]
Similarly, the expression of $x_- := (-x)_+$ can be written out as
\[ x_- = \begin{cases} 0 & \text{if } x \in \mathcal{K}^n, \\ -x & \text{if } x \in -\mathcal{K}^n, \\ w & \text{otherwise}, \end{cases} \qquad \text{where } w = \begin{bmatrix} -\frac{x_1 - \|x_2\|}{2} \\[2pt] \frac{x_1 - \|x_2\|}{2} \cdot \frac{x_2}{\|x_2\|} \end{bmatrix}. \]
It is easy to verify that $x = x_+ - x_-$ and
\[ x_+ = (\lambda_1(x))_+\, u_x^{(1)} + (\lambda_2(x))_+\, u_x^{(2)}, \qquad x_- = (-\lambda_1(x))_+\, u_x^{(1)} + (-\lambda_2(x))_+\, u_x^{(2)}, \]
where $(\alpha)_+ = \max\{0, \alpha\}$ for $\alpha \in \mathbb{R}$. As for the expression of $|x|$ associated with SOC, there is an alternative way via the so-called SOC-function, which can be found in [3, 5]. In any case, it comes out that
\[ |x| = \left[(\lambda_1(x))_+ + (-\lambda_1(x))_+\right] u_x^{(1)} + \left[(\lambda_2(x))_+ + (-\lambda_2(x))_+\right] u_x^{(2)} = |\lambda_1(x)|\, u_x^{(1)} + |\lambda_2(x)|\, u_x^{(2)}. \]
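As a quick numerical check of the last identity, the sketch below (again our own, with hypothetical helper names) computes $|x|$ from the spectral values and compares it with $x_+ + (-x)_+$:

```python
import numpy as np

def _spec(x):
    nrm = np.linalg.norm(x[1:])
    w = x[1:] / nrm if nrm > 0 else np.r_[1.0, np.zeros(len(x) - 2)]
    return x[0] - nrm, x[0] + nrm, 0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w]

def soc_abs(x):
    """|x| = |lam1| u1 + |lam2| u2."""
    l1, l2, u1, u2 = _spec(x)
    return abs(l1) * u1 + abs(l2) * u2

def soc_proj(x):
    """Projection onto K^n: x_+ = (lam1)_+ u1 + (lam2)_+ u2."""
    l1, l2, u1, u2 = _spec(x)
    return max(l1, 0.0) * u1 + max(l2, 0.0) * u2

x = np.array([0.5, 1.0, -2.0])
assert np.allclose(soc_abs(x), soc_proj(x) + soc_proj(-x))  # |x| = x_+ + (-x)_+
```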

2 Unified smoothing functions for SOCAVE

As mentioned in Section 1, we employ the smoothing Newton method for solving the SOCAVE (2), which needs a smoothing function to work with. Indeed, a family of smoothing functions was already considered in [36]. In this section, we look into what kinds of smoothing functions can be employed to work with the smoothing Newton algorithm for solving the SOCAVE (2).

Definition 2.1. A function $\varphi : \mathbb{R}_{++} \times \mathbb{R} \to \mathbb{R}$ is called a smoothing function of $|t|$ if it satisfies the following:
(i) $\varphi$ is continuously differentiable at $(\mu, t) \in \mathbb{R}_{++} \times \mathbb{R}$;
(ii) $\lim_{\mu \downarrow 0} \varphi(\mu, t) = |t|$ for any $t \in \mathbb{R}$.

Given a smoothing function $\varphi$, we further define a vector-valued function $\Phi : \mathbb{R}_{++} \times \mathbb{R}^n \to \mathbb{R}^n$ as
\[ \Phi(\mu, x) = \varphi(\mu, \lambda_1(x))\, u_x^{(1)} + \varphi(\mu, \lambda_2(x))\, u_x^{(2)}, \tag{5} \]
where $\mu \in \mathbb{R}_{++}$ is a parameter, $\lambda_1(x), \lambda_2(x)$ are the spectral values of $x$, and $u_x^{(1)}, u_x^{(2)}$ are the spectral vectors of $x$. Consequently, $\Phi$ is also smooth on $\mathbb{R}_{++} \times \mathbb{R}^n$. Moreover, it is easy to verify that
\[ \lim_{\mu \downarrow 0} \Phi(\mu, x) = |\lambda_1(x)|\, u_x^{(1)} + |\lambda_2(x)|\, u_x^{(2)} = |x|, \]
which means each function $\Phi(\mu, \cdot)$ serves as a smoothing function of $|x|$ associated with SOC. With this observation, for the SOCAVE (2), we further define the function $H : \mathbb{R}_{++} \times \mathbb{R}^n \to \mathbb{R} \times \mathbb{R}^n$ by
\[ H(\mu, x) = \begin{bmatrix} \mu \\ Ax + B\Phi(\mu, x) - b \end{bmatrix}, \qquad \mu \in \mathbb{R}_{++},\ x \in \mathbb{R}^n. \tag{6} \]
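A compact sketch of (5)-(6) follows. It is illustrative only (our own code, assuming NumPy), and it uses $\varphi_3(\mu,t) = \sqrt{4\mu^2 + t^2}$ from (17) below as the scalar smoothing function; as $\mu \downarrow 0$, $\Phi(\mu,x)$ indeed approaches $|x|$:

```python
import numpy as np

def spectral(x):
    nrm = np.linalg.norm(x[1:])
    w = x[1:] / nrm if nrm > 0 else np.r_[1.0, np.zeros(len(x) - 2)]
    return x[0] - nrm, x[0] + nrm, 0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w]

def Phi(phi, mu, x):
    """Phi(mu, x) = phi(mu, lam1) u1 + phi(mu, lam2) u2, cf. (5)."""
    l1, l2, u1, u2 = spectral(x)
    return phi(mu, l1) * u1 + phi(mu, l2) * u2

def H(phi, mu, x, A, B, b):
    """H(mu, x) = (mu, A x + B Phi(mu, x) - b), cf. (6)."""
    return np.r_[mu, A @ x + B @ Phi(phi, mu, x) - b]

phi3 = lambda mu, t: np.sqrt(4 * mu**2 + t**2)   # eq. (17) below

x = np.array([0.5, 1.0, -2.0])
l1, l2, u1, u2 = spectral(x)
abs_x = abs(l1) * u1 + abs(l2) * u2
assert np.allclose(Phi(phi3, 1e-9, x), abs_x, atol=1e-6)  # Phi -> |x| as mu -> 0
print(H(phi3, 1e-9, x, np.eye(3), 0.5 * np.eye(3), np.ones(3)))  # first entry is mu
```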

Proposition 2.1. Suppose that $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ has the spectral decomposition given as in (3)-(4). Let $H : \mathbb{R}_{++} \times \mathbb{R}^n \to \mathbb{R} \times \mathbb{R}^n$ be defined as in (6). Then,

(a) $H(\mu, x) = 0$ if and only if $x$ solves the SOCAVE (2);

(b) $H$ is continuously differentiable at $(\mu, x) \in \mathbb{R}_{++} \times \mathbb{R}^n$ with the Jacobian matrix given by
\[ H'(\mu, x) = \begin{bmatrix} 1 & 0 \\ B\,\frac{\partial \Phi(\mu,x)}{\partial \mu} & A + B\,\frac{\partial \Phi(\mu,x)}{\partial x} \end{bmatrix}, \tag{7} \]
where
\[ \frac{\partial \Phi(\mu,x)}{\partial \mu} = \frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \mu}\, u_x^{(1)} + \frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \mu}\, u_x^{(2)}, \]
\[ \frac{\partial \Phi(\mu,x)}{\partial x} = \begin{cases} \dfrac{\partial \varphi(\mu,x_1)}{\partial x_1}\, I & \text{if } x_2 = 0, \\[6pt] \begin{bmatrix} b & c\,\frac{x_2^T}{\|x_2\|} \\[2pt] c\,\frac{x_2}{\|x_2\|} & a I + (b-a)\,\frac{x_2 x_2^T}{\|x_2\|^2} \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]
with
\[ a = \frac{\varphi(\mu,\lambda_2(x)) - \varphi(\mu,\lambda_1(x))}{\lambda_2(x) - \lambda_1(x)}, \quad b = \frac{1}{2}\left( \frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)} + \frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)} \right), \quad c = \frac{1}{2}\left( \frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)} - \frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)} \right). \tag{8} \]

Proof. (a) First, we observe that
\[ H(\mu,x) = 0 \iff \mu = 0 \ \text{and}\ Ax + B\Phi(\mu,x) - b = 0 \iff Ax + B|x| - b = 0 \ \text{and}\ \mu = 0. \]
This indicates that $x$ is a solution to the SOCAVE (2) if and only if $(\mu, x) = (0, x)$ is a solution to $H(\mu, x) = 0$.

(b) Since $\Phi(\mu,x)$ is continuously differentiable on $\mathbb{R}_{++} \times \mathbb{R}^n$, it is clear that $H(\mu,x)$ is continuously differentiable there as well. Thus, it remains to compute the Jacobian matrix of $H(\mu,x)$. Note that, for $x_2 \neq 0$,
\[ \Phi(\mu,x) = \varphi(\mu,\lambda_1(x))\, u_x^{(1)} + \varphi(\mu,\lambda_2(x))\, u_x^{(2)} = \frac{1}{2}\begin{bmatrix} \varphi(\mu,\lambda_1(x)) + \varphi(\mu,\lambda_2(x)) \\ \left[\varphi(\mu,\lambda_2(x)) - \varphi(\mu,\lambda_1(x))\right] \frac{x_2}{\|x_2\|} \end{bmatrix}, \]
with $x_2/\|x_2\|$ replaced by $\omega$ when $x_2 = 0$ (in which case $\lambda_1(x) = \lambda_2(x) = x_1$ and the second block vanishes). From the chain rule, it is immediate that
\[ \frac{\partial \Phi(\mu,x)}{\partial \mu} = \frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \mu}\, u_x^{(1)} + \frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \mu}\, u_x^{(2)}. \]
In order to compute $\frac{\partial \Phi(\mu,x)}{\partial x}$, for simplicity, we denote
\[ \Phi(\mu,x) =: \frac{1}{2}\begin{bmatrix} \tau_1(\mu,x) \\ \vdots \\ \tau_n(\mu,x) \end{bmatrix}. \]
To proceed, we discuss two cases.

(i) For $x_2 \neq 0$, using $\frac{\partial \lambda_1(x)}{\partial x_1} = \frac{\partial \lambda_2(x)}{\partial x_1} = 1$, we compute
\[ \frac{\partial \tau_1(\mu,x)}{\partial x_1} = \frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)} + \frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)} =: 2b. \]

Similarly, using $\frac{\partial \lambda_1(x)}{\partial x_i} = -\frac{x_i}{\|x_2\|}$ and $\frac{\partial \lambda_2(x)}{\partial x_i} = \frac{x_i}{\|x_2\|}$ for $i = 2, \dots, n$, we get
\[ \frac{\partial \tau_1(\mu,x)}{\partial x_i} = \left( \frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)} - \frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)} \right)\frac{x_i}{\|x_2\|} =: 2c\,\frac{x_i}{\|x_2\|}, \qquad i = 2, \dots, n, \]
and, by the symmetry of the expression $\tau_i(\mu,x) = \left[\varphi(\mu,\lambda_2(x)) - \varphi(\mu,\lambda_1(x))\right]\frac{x_i}{\|x_2\|}$, also $\frac{\partial \tau_i(\mu,x)}{\partial x_1} = 2c\,\frac{x_i}{\|x_2\|}$ for $i = 2, \dots, n$. Moreover, a short computation using $\lambda_2(x) - \lambda_1(x) = 2\|x_2\|$ yields
\[ \frac{\partial \tau_i(\mu,x)}{\partial x_j} = 2a\,\delta_{ij} + 2(b-a)\,\frac{x_i x_j}{\|x_2\|^2}, \qquad i, j = 2, \dots, n, \]
where $\delta_{ij}$ is the Kronecker delta. To sum up, we obtain
\[ \frac{\partial \Phi(\mu,x)}{\partial x} = \begin{bmatrix} b & c\,\frac{x_2^T}{\|x_2\|} \\[2pt] c\,\frac{x_2}{\|x_2\|} & a I + (b-a)\,\frac{x_2 x_2^T}{\|x_2\|^2} \end{bmatrix}, \]
which is the desired result.

(ii) For $x_2 = 0$, it is clear that
\[ \frac{\partial \tau_1(\mu,x)}{\partial x_1} = 2\,\frac{\partial \varphi(\mu,x_1)}{\partial x_1} \quad \text{and} \quad \frac{\partial \tau_1(\mu,x)}{\partial x_i} = 0 \ \text{ for } i = 2, \dots, n. \]
Since $\tau_i(\mu,x) = 0$ for $i = 2, \dots, n$ whenever $x_2 = 0$, it also gives $\frac{\partial \tau_i(\mu,x)}{\partial x_1} = 0$. Moreover, for $i = 2, \dots, n$,
\[ \frac{\partial \tau_i(\mu,x)}{\partial x_i} = \lim_{t \to 0} \frac{\tau_i(\mu, x_1, 0, \dots, t, \dots, 0) - \tau_i(\mu, x_1, 0, \dots, 0)}{t} = \lim_{t \to 0} \frac{\varphi(\mu, x_1 + |t|) - \varphi(\mu, x_1 - |t|)}{|t|} = 2\,\frac{\partial \varphi(\mu,x_1)}{\partial x_1} \]
(by L'Hopital's rule), while $\frac{\partial \tau_i(\mu,x)}{\partial x_j} = 0$ for $i \neq j$. Thus, we obtain $\frac{\partial \Phi(\mu,x)}{\partial x} = \frac{\partial \varphi(\mu,x_1)}{\partial x_1}\, I$.

From all the above, we conclude that
\[ \frac{\partial \Phi(\mu,x)}{\partial x} = \begin{cases} \frac{\partial \varphi(\mu,x_1)}{\partial x_1}\, I & \text{if } x_2 = 0, \\[4pt] \begin{bmatrix} b & c\,\frac{x_2^T}{\|x_2\|} \\ c\,\frac{x_2}{\|x_2\|} & a I + (b-a)\,\frac{x_2 x_2^T}{\|x_2\|^2} \end{bmatrix} & \text{if } x_2 \neq 0. \end{cases} \]
Thus, the proof is complete.
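The piecewise formula for $\partial\Phi/\partial x$ can be checked against finite differences. The sketch below is our own (NumPy assumed, with $\varphi_3$ from (17) again as the scalar function); it assembles the Jacobian blocks from $a$, $b$, $c$ in (8):

```python
import numpy as np

phi   = lambda mu, t: np.sqrt(4 * mu**2 + t**2)        # phi_3 of (17)
phi_t = lambda mu, t: t / np.sqrt(4 * mu**2 + t**2)    # d phi / d t

def spectral(x):
    nrm = np.linalg.norm(x[1:])
    w = x[1:] / nrm if nrm > 0 else np.r_[1.0, np.zeros(len(x) - 2)]
    return x[0] - nrm, x[0] + nrm, 0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w], w, nrm

def Phi(mu, x):
    l1, l2, u1, u2, w, nrm = spectral(x)
    return phi(mu, l1) * u1 + phi(mu, l2) * u2

def dPhi_dx(mu, x):
    """Jacobian of Phi w.r.t. x, cf. Proposition 2.1(b) and (8)."""
    n = len(x)
    l1, l2, u1, u2, w, nrm = spectral(x)
    if nrm == 0.0:
        return phi_t(mu, x[0]) * np.eye(n)
    a = (phi(mu, l2) - phi(mu, l1)) / (l2 - l1)
    b = 0.5 * (phi_t(mu, l2) + phi_t(mu, l1))
    c = 0.5 * (phi_t(mu, l2) - phi_t(mu, l1))
    J = np.zeros((n, n))
    J[0, 0], J[0, 1:], J[1:, 0] = b, c * w, c * w
    J[1:, 1:] = a * np.eye(n - 1) + (b - a) * np.outer(w, w)
    return J

# finite-difference check at a random point
rng = np.random.default_rng(0)
mu, x, h = 0.3, rng.standard_normal(4), 1e-6
J_fd = np.column_stack([(Phi(mu, x + h * e) - Phi(mu, x - h * e)) / (2 * h)
                        for e in np.eye(4)])
assert np.allclose(dPhi_dx(mu, x), J_fd, atol=1e-6)
```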

Now, we are ready to answer the question about what kind of smoothing functions can be adopted in the smoothing-type algorithm. Two technical lemmas are needed towards the answer.

Lemma 2.1. Suppose that $M, N \in \mathbb{R}^{n\times n}$. Let $\sigma_{\min}(M)$ denote the minimum singular value of $M$, and $\sigma_{\max}(N)$ denote the maximum singular value of $N$. Then, the following hold.
(a) $\sigma_{\min}(M) > \sigma_{\max}(N)$ if and only if $\lambda_{\min}(M^T M) > \lambda_{\max}(N^T N)$.
(b) If $\lambda_{\min}(M^T M) > \lambda_{\max}(N^T N)$, then $M^T M - N^T N$ is positive definite.

Proof. The proof is straightforward or can be found in usual textbooks of matrix analysis, so we omit it here.

Lemma 2.2. Let $A, S \in \mathbb{R}^{n\times n}$ and $A$ be symmetric. Suppose that the eigenvalues of $A$ and $SS^T$ are arranged in non-increasing order. Then, for each $k = 1, 2, \dots, n$, there exists a nonnegative real number $\theta_k$ such that
\[ \lambda_{\min}(SS^T) \leq \theta_k \leq \lambda_{\max}(SS^T) \quad \text{and} \quad \lambda_k(SAS^T) = \theta_k\, \lambda_k(A). \]

Proof. Please see [15] for a proof.

We point out that the crucial key, which guarantees that a smoothing function can be employed in the smoothing-type algorithm, is the nonsingularity of the Jacobian matrix $H'(\mu,x)$ given in (7). Below, we provide a condition under which the Jacobian matrix $H'(\mu,x)$ is nonsingular.

Theorem 2.1. Consider the SOCAVE (2) with $\sigma_{\min}(A) > \sigma_{\max}(B)$. Let $H$ be defined as in (6). Suppose that $\varphi : \mathbb{R}_{++} \times \mathbb{R} \to \mathbb{R}$ is a smoothing function of $|t|$. If
\[ -1 \leq \frac{d \varphi(\mu,t)}{dt} \leq 1 \]
is satisfied, then the Jacobian matrix $H'(\mu,x)$ is nonsingular for any $\mu > 0$.

Proof. From the expression of $H'(\mu,x)$ given as in (7), we know that $H'(\mu,x)$ is nonsingular if and only if the matrix $A + B\,\frac{\partial \Phi(\mu,x)}{\partial x}$ is nonsingular. Thus, it suffices to show that this matrix is nonsingular under the stated conditions. Suppose not; then there exists a vector $0 \neq v \in \mathbb{R}^n$ such that
\[ \left(A + B\,\frac{\partial \Phi(\mu,x)}{\partial x}\right) v = 0, \]
which implies that
\[ v^T A^T A v = v^T \left(\frac{\partial \Phi(\mu,x)}{\partial x}\right)^T B^T B\, \frac{\partial \Phi(\mu,x)}{\partial x}\, v. \tag{9} \]
For convenience, we denote $C := \frac{\partial \Phi(\mu,x)}{\partial x}$. Then, it follows that $v^T A^T A v = v^T C^T B^T B C v$. Applying Lemma 2.2, there exists a constant $\hat\theta$ such that
\[ \lambda_{\min}(C^T C) \leq \hat\theta \leq \lambda_{\max}(C^T C) \quad \text{and} \quad \lambda_{\max}(C^T B^T B C) = \hat\theta\, \lambda_{\max}(B^T B). \]
Note that if we can prove that $0 \leq \lambda_{\min}(C^T C) \leq \lambda_{\max}(C^T C) \leq 1$, we will have $\lambda_{\max}(C^T B^T B C) \leq \lambda_{\max}(B^T B)$. Then, by the assumption that the minimum singular value of $A$ strictly exceeds the maximum singular value of $B$ (i.e., $\sigma_{\min}(A) >$

$\sigma_{\max}(B)$) and applying Lemma 2.1, we obtain $v^T A^T A v > v^T C^T B^T B C v$. This contradicts the identity (9), which shows that the Jacobian matrix $H'(\mu,x)$ is nonsingular for $\mu > 0$. Thus, in light of the above discussion, it suffices to claim that $0 \leq \lambda_{\min}(C^T C) \leq \lambda_{\max}(C^T C) \leq 1$. To this end, we discuss two cases.

Case 1: For $x_2 = 0$, we have $C = \frac{\partial \varphi(\mu,x_1)}{\partial x_1} I$. Since $-1 \leq \frac{\partial \varphi(\mu,x_1)}{\partial x_1} \leq 1$, it is clear that $0 \leq \lambda(C^T C) \leq 1$ for $\mu > 0$. Then, the claim is done.

Case 2: For $x_2 \neq 0$, using the fact that the matrix $M^T M$ is always positive semidefinite for any matrix $M \in \mathbb{R}^{m\times n}$, we see that the inequality $\lambda_{\min}(C^T C) \geq 0$ always holds. In order to prove $\lambda_{\max}(C^T C) \leq 1$, we need to further argue that the matrix $I - C^T C$ is positive semidefinite. Writing $\bar x_2 := x_2/\|x_2\|$, we first write out
\[ I - C^T C = \begin{bmatrix} 1 - b^2 - c^2 & -2bc\, \bar x_2^T \\ -2bc\, \bar x_2 & (1 - a^2) I + (a^2 - b^2 - c^2)\, \bar x_2 \bar x_2^T \end{bmatrix}. \]
If $-1 < \frac{\partial \varphi(\mu,\lambda_i(x))}{\partial \lambda_i(x)} < 1$ for $i = 1, 2$, then we obtain
\[ b^2 + c^2 = \frac{1}{2}\left[ \left(\frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)}\right)^2 + \left(\frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)}\right)^2 \right] < 1. \]
This indicates that $1 - b^2 - c^2 > 0$. By considering $1 - b^2 - c^2$ as a $1\times 1$ matrix, this says $1 - b^2 - c^2$ is positive definite. Hence, its Schur complement in $I - C^T C$ can be computed as below:
\[ (1-a^2) I + (a^2 - b^2 - c^2)\bar x_2\bar x_2^T - \frac{4b^2c^2}{1-b^2-c^2}\,\bar x_2\bar x_2^T = (1-a^2)\left(I - \bar x_2\bar x_2^T\right) + \left(1 - b^2 - c^2 - \frac{4b^2c^2}{1-b^2-c^2}\right)\bar x_2\bar x_2^T. \tag{10} \]
On the other hand, by the Mean Value Theorem, we have
\[ \varphi(\mu,\lambda_2(x)) - \varphi(\mu,\lambda_1(x)) = \frac{\partial \varphi(\mu,\xi)}{\partial \xi}\left(\lambda_2(x) - \lambda_1(x)\right), \]
where $\xi \in (\lambda_1(x), \lambda_2(x))$. To proceed, we need to further discuss two subcases.

(1) When $-1 < \frac{\partial \varphi(\mu,\xi)}{\partial \xi} < 1$, we know $|\varphi(\mu,\lambda_2(x)) - \varphi(\mu,\lambda_1(x))| < \lambda_2(x) - \lambda_1(x)$. This together with (8) implies that $1 - a^2 > 0$ for any $\mu > 0$. In addition, for any $\mu > 0$, we observe that
\[ (1-b^2-c^2)^2 - 4b^2c^2 = \left(1-(b-c)^2\right)\left(1-(b+c)^2\right) = \left(1 - \left(\frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)}\right)^2\right)\left(1 - \left(\frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)}\right)^2\right) > 0. \]

With all of these, we verify that the Schur complement of $1 - b^2 - c^2$ given in (10) is a positive linear combination of the matrices $I - \bar x_2\bar x_2^T$ and $\bar x_2\bar x_2^T$, which yields that the Schur complement (10) is positive semidefinite. Hence, the matrix $I - C^T C$ is also positive semidefinite, which is equivalent to saying $0 \leq \lambda_{\min}(C^T C) \leq \lambda_{\max}(C^T C) \leq 1$.

(2) When $\frac{\partial \varphi(\mu,\xi)}{\partial \xi} = \pm 1$, we have $1 - a^2 = 0$ and $(1-b^2-c^2)^2 - 4b^2c^2 > 0$. Since the matrix $\bar x_2\bar x_2^T$ is positive semidefinite, the matrix $I - C^T C$ is positive semidefinite. Hence, $0 \leq \lambda_{\min}(C^T C) \leq \lambda_{\max}(C^T C) \leq 1$.

If either $\frac{\partial \varphi(\mu,\lambda_1(x))}{\partial \lambda_1(x)} = \pm 1$ or $\frac{\partial \varphi(\mu,\lambda_2(x))}{\partial \lambda_2(x)} = \pm 1$, then we have $b = \pm 1, c = 0$ or $b = 0, c = \pm 1$, which yields $b^2 + c^2 = 1$. Again, two subcases are needed.

(1) When $-1 < \frac{\partial \varphi(\mu,\xi)}{\partial \xi} < 1$, we have $|\varphi(\mu,\lambda_2(x)) - \varphi(\mu,\lambda_1(x))| < \lambda_2(x) - \lambda_1(x)$, which implies $1 - a^2 > 0$ for any $\mu > 0$. Therefore,
\[ I - C^T C = (1-a^2)\left(I - \bar x_2\bar x_2^T\right) \]
on the lower block. Since the matrix $I - \bar x_2\bar x_2^T$ is positive semidefinite, the matrix $I - C^T C$ is positive semidefinite. Hence, $0 \leq \lambda_{\min}(C^T C) \leq \lambda_{\max}(C^T C) \leq 1$.

(2) When $\frac{\partial \varphi(\mu,\xi)}{\partial \xi} = \pm 1$, we have $I - C^T C = 0$, which leads to $\lambda(C^T C) = 1$.

From all the above, the proof is complete.

We point out that the condition $\sigma_{\min}(A) > \sigma_{\max}(B)$ in Theorem 2.1 guarantees the unique solvability of the SOCAVE (2), according to [35, Theorem 4.1]. From Theorem 2.1, we realize that for a SOCAVE (2) with $\sigma_{\min}(A) > \sigma_{\max}(B)$, any smoothing function of $|t|$ with $-1 \leq \frac{d\varphi(\mu,t)}{dt} \leq 1$ is suitable for the smoothing Newton algorithm when solving the SOCAVE. With this, it is easy to find or construct smoothing functions of $|t|$ satisfying the above condition. One popular approach is a smoothing approximation via convolution for the absolute value function [1, 22, 38, 45], which is described below. First, we construct a smoothing approximation for the plus function $(t)_+ = \max\{0, t\}$. To this end, we consider a piecewise continuous function $d(t)$ with a finite number of pieces, which is a density (kernel) function; in other words, it satisfies
\[ d(t) \geq 0 \quad \text{and} \quad \int_{-\infty}^{+\infty} d(t)\, dt = 1. \]

With this $d(t)$, we further define $\hat s(t, \mu) := \frac{1}{\mu} d\!\left(\frac{t}{\mu}\right)$, where $\mu$ is a positive parameter. If $\int_{-\infty}^{+\infty} |t|\, d(t)\, dt < +\infty$, then a smoothing approximation for $(t)_+$ is formed. In particular,
\[ \hat p(t, \mu) = \int_{-\infty}^{+\infty} (t-s)_+\, \hat s(s,\mu)\, ds = \int_{-\infty}^{t} (t-s)\, \hat s(s,\mu)\, ds \ \approx\ (t)_+. \]
The following are four well-known smoothing functions for the plus function [1, 38]:
\[ \hat\varphi_1(\mu,t) = t + \mu \ln\left(1 + e^{-t/\mu}\right), \tag{11} \]
\[ \hat\varphi_2(\mu,t) = \begin{cases} t & \text{if } t \geq \frac{\mu}{2}, \\ \frac{1}{2\mu}\left(t + \frac{\mu}{2}\right)^2 & \text{if } -\frac{\mu}{2} < t < \frac{\mu}{2}, \\ 0 & \text{if } t \leq -\frac{\mu}{2}, \end{cases} \tag{12} \]
\[ \hat\varphi_3(\mu,t) = \frac{\sqrt{4\mu^2 + t^2} + t}{2}, \tag{13} \]
\[ \hat\varphi_4(\mu,t) = \begin{cases} t - \frac{\mu}{2} & \text{if } t \geq \mu, \\ \frac{t^2}{2\mu} & \text{if } 0 < t < \mu, \\ 0 & \text{if } t \leq 0, \end{cases} \tag{14} \]
where the corresponding kernel functions are
\[ d_1(t) = \frac{e^{-t}}{(1+e^{-t})^2}, \qquad d_2(t) = \begin{cases} 1 & \text{if } -\frac{1}{2} \leq t \leq \frac{1}{2}, \\ 0 & \text{otherwise}, \end{cases} \qquad d_3(t) = \frac{2}{(t^2+4)^{3/2}}, \qquad d_4(t) = \begin{cases} 1 & \text{if } 0 \leq t \leq 1, \\ 0 & \text{otherwise}. \end{cases} \]
Next, in light of $|t| = (t)_+ + (-t)_+$, the smoothing function of $|t|$ via convolution can be written as
\[ \varphi(\mu, t) := \hat p(t,\mu) + \hat p(-t,\mu) = \int_{-\infty}^{+\infty} |t - s|\, \hat s(s,\mu)\, ds. \]
Analogous to (11)-(14), we achieve the following smoothing functions for $|t|$:
\[ \varphi_1(\mu,t) = \mu\left[\ln\left(1 + e^{t/\mu}\right) + \ln\left(1 + e^{-t/\mu}\right)\right], \tag{15} \]
\[ \varphi_2(\mu,t) = \begin{cases} t & \text{if } t \geq \frac{\mu}{2}, \\ \frac{t^2}{\mu} + \frac{\mu}{4} & \text{if } -\frac{\mu}{2} < t < \frac{\mu}{2}, \\ -t & \text{if } t \leq -\frac{\mu}{2}, \end{cases} \tag{16} \]
\[ \varphi_3(\mu,t) = \sqrt{4\mu^2 + t^2}, \tag{17} \]
\[ \varphi_4(\mu,t) = \begin{cases} \frac{t^2}{2\mu} & \text{if } |t| \leq \mu, \\ |t| - \frac{\mu}{2} & \text{if } |t| > \mu. \end{cases} \tag{18} \]
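As a sanity check on the convolution recipe (our own numerical illustration, assuming NumPy), one can integrate $|t - s|\,\hat s(s,\mu)$ for the uniform kernel $d_2$ and compare against the closed form $\varphi_2$ in (16):

```python
import numpy as np

mu, t = 0.7, 0.2                       # test point with |t| < mu/2

# s_hat(s, mu) = d_2(s/mu)/mu is the uniform density on [-mu/2, mu/2]
s = np.linspace(-mu / 2, mu / 2, 200001)
vals = np.abs(t - s) / mu              # integrand |t - s| * s_hat(s, mu)
numeric = np.sum((vals[1:] + vals[:-1]) / 2) * (s[1] - s[0])  # trapezoid rule

closed = t**2 / mu + mu / 4 if abs(t) < mu / 2 else abs(t)    # phi_2 of (16)
print(abs(numeric - closed))           # tiny (~1e-10): the two agree
```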

If we take the Epanechnikov kernel function
\[ K(t) = \begin{cases} \frac{3}{4}\left(1 - t^2\right) & \text{if } |t| \leq 1, \\ 0 & \text{otherwise}, \end{cases} \]
then we obtain the following smoothing function for $|t|$:
\[ \varphi_5(\mu,t) = \begin{cases} t & \text{if } t > \mu, \\ -\frac{t^4}{8\mu^3} + \frac{3t^2}{4\mu} + \frac{3\mu}{8} & \text{if } |t| \leq \mu, \\ -t & \text{if } t < -\mu. \end{cases} \tag{19} \]
Moreover, taking the Gaussian kernel function $K(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2/2}$ for all $t \in \mathbb{R}$ yields
\[ \hat s(t,\mu) := \frac{1}{\mu} K\!\left(\frac{t}{\mu}\right) = \frac{1}{\sqrt{2\pi}\,\mu}\, e^{-\frac{t^2}{2\mu^2}}, \]
and it leads to the smoothing function [45] for $|t|$:
\[ \varphi_6(\mu,t) = t \operatorname{erf}\!\left(\frac{t}{\sqrt{2}\,\mu}\right) + \sqrt{\frac{2}{\pi}}\, \mu\, e^{-\frac{t^2}{2\mu^2}}, \tag{20} \]
where the error function is defined by
\[ \operatorname{erf}(t) = \frac{2}{\sqrt{\pi}} \int_0^t e^{-u^2}\, du, \qquad t \in \mathbb{R}. \]
In summary, we have constructed six smoothing functions from the above discussions. Can all the above functions serve as smoothing functions for solving the SOCAVE? The answer is affirmative because it is not hard to verify that each $\varphi_i$ possesses $-1 \leq \frac{d\varphi_i(\mu,t)}{dt} \leq 1$. Thus, these six functions will be adopted for our numerical implementations. Accordingly, we need to define $\Phi_i(\mu,x)$ and $H_i(\mu,x)$ based on each $\varphi_i$. For subsequent needs, we only present the expression of each Jacobian matrix $H_i'(\mu,x)$ without detailed derivations. Based on each $\varphi_i$, let $\Phi_i : \mathbb{R}_{++} \times \mathbb{R}^n \to \mathbb{R}^n$ for $i = 1, 2, \dots, 6$ be defined as in (5), i.e.,
\[ \Phi_i(\mu,x) = \varphi_i(\mu,\lambda_1(x))\, u_x^{(1)} + \varphi_i(\mu,\lambda_2(x))\, u_x^{(2)}, \tag{21} \]
and $H_i : \mathbb{R}_{++} \times \mathbb{R}^n \to \mathbb{R} \times \mathbb{R}^n$ for $i = 1, 2, \dots, 6$ be defined as in (6), i.e.,
\[ H_i(\mu,x) = \begin{bmatrix} \mu \\ Ax + B\Phi_i(\mu,x) - b \end{bmatrix}, \qquad \mu \in \mathbb{R}_{++},\ x \in \mathbb{R}^n. \tag{22} \]
Then, each $H_i$ is continuously differentiable on $\mathbb{R}_{++} \times \mathbb{R}^n$ with the Jacobian matrix given by
\[ H_i'(\mu,x) = \begin{bmatrix} 1 & 0 \\ B\,\frac{\partial \Phi_i(\mu,x)}{\partial \mu} & A + B\,\frac{\partial \Phi_i(\mu,x)}{\partial x} \end{bmatrix} \tag{23} \]
for all $(\mu,x) \in \mathbb{R}_{++} \times \mathbb{R}^n$ with $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$.
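To see that all six functions qualify, the sketch below (ours; plain NumPy/math, function names chosen by us) implements (15)-(20) and numerically confirms both $\varphi_i(\mu,t) \to |t|$ and the derivative bound $|\partial\varphi_i/\partial t| \leq 1$ on a grid:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def phi1(mu, t):  # eq. (15); logaddexp(0, s) = log(1 + e^s), overflow-safe
    return mu * (np.logaddexp(0, t / mu) + np.logaddexp(0, -t / mu))
def phi2(mu, t):  # eq. (16)
    return abs(t) if abs(t) >= mu / 2 else t**2 / mu + mu / 4
def phi3(mu, t):  # eq. (17)
    return sqrt(4 * mu**2 + t**2)
def phi4(mu, t):  # eq. (18)
    return t**2 / (2 * mu) if abs(t) <= mu else abs(t) - mu / 2
def phi5(mu, t):  # eq. (19)
    return abs(t) if abs(t) > mu else -t**4/(8*mu**3) + 3*t**2/(4*mu) + 3*mu/8
def phi6(mu, t):  # eq. (20)
    return t * erf(t / (sqrt(2) * mu)) + sqrt(2 / pi) * mu * exp(-t**2 / (2 * mu**2))

for phi in (phi1, phi2, phi3, phi4, phi5, phi6):
    for t in np.linspace(-3.0, 3.0, 121):
        assert abs(phi(1e-6, t) - abs(t)) < 1e-4          # phi -> |t| as mu -> 0
        slope = (phi(0.5, t + 1e-7) - phi(0.5, t - 1e-7)) / 2e-7
        assert abs(slope) <= 1 + 1e-5                     # |d phi / d t| <= 1
```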

Moreover, the differentiation of each $\Phi_i$ is expressed as below. Throughout, $\bar x_2 := x_2/\|x_2\|$ and, analogously to (8),
\[ a_i = \frac{\varphi_i(\mu,\lambda_2(x)) - \varphi_i(\mu,\lambda_1(x))}{\lambda_2(x) - \lambda_1(x)}, \quad b_i = \frac{1}{2}\left(\frac{\partial \varphi_i(\mu,\lambda_2(x))}{\partial \lambda_2(x)} + \frac{\partial \varphi_i(\mu,\lambda_1(x))}{\partial \lambda_1(x)}\right), \quad c_i = \frac{1}{2}\left(\frac{\partial \varphi_i(\mu,\lambda_2(x))}{\partial \lambda_2(x)} - \frac{\partial \varphi_i(\mu,\lambda_1(x))}{\partial \lambda_1(x)}\right). \]

(1) The Jacobian of $\Phi_1$ is characterized as below:
\[ \frac{\partial \Phi_1(\mu,x)}{\partial \mu} = \frac{\partial \varphi_1(\mu,\lambda_1(x))}{\partial \mu}\, u_x^{(1)} + \frac{\partial \varphi_1(\mu,\lambda_2(x))}{\partial \mu}\, u_x^{(2)} \]
with
\[ \frac{\partial \varphi_1(\mu,\lambda_i(x))}{\partial \mu} = \frac{\varphi_1(\mu,\lambda_i(x))}{\mu} + \frac{\lambda_i(x)}{\mu}\cdot\frac{1 - e^{\lambda_i(x)/\mu}}{1 + e^{\lambda_i(x)/\mu}}, \qquad i = 1, 2, \]
and
\[ \frac{\partial \Phi_1(\mu,x)}{\partial x} = \begin{cases} \dfrac{e^{x_1/\mu} - 1}{e^{x_1/\mu} + 1}\, I & \text{if } x_2 = 0, \\[6pt] \begin{bmatrix} b_1 & c_1 \bar x_2^T \\ c_1 \bar x_2 & a_1 I + (b_1 - a_1)\bar x_2\bar x_2^T \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]
with
\[ b_1 = \frac{1}{2}\left( \frac{e^{\lambda_2(x)/\mu} - 1}{e^{\lambda_2(x)/\mu} + 1} + \frac{e^{\lambda_1(x)/\mu} - 1}{e^{\lambda_1(x)/\mu} + 1} \right), \qquad c_1 = \frac{1}{2}\left( \frac{e^{\lambda_2(x)/\mu} - 1}{e^{\lambda_2(x)/\mu} + 1} - \frac{e^{\lambda_1(x)/\mu} - 1}{e^{\lambda_1(x)/\mu} + 1} \right). \]

(2) The Jacobian of $\Phi_2$ is characterized as below:
\[ \frac{\partial \Phi_2(\mu,x)}{\partial \mu} = \frac{\partial \varphi_2(\mu,\lambda_1(x))}{\partial \mu}\, u_x^{(1)} + \frac{\partial \varphi_2(\mu,\lambda_2(x))}{\partial \mu}\, u_x^{(2)} \]
with
\[ \frac{\partial \varphi_2(\mu,\lambda_i(x))}{\partial \mu} = \begin{cases} 0 & \text{if } |\lambda_i(x)| \geq \frac{\mu}{2}, \\ -\frac{\lambda_i(x)^2}{\mu^2} + \frac{1}{4} & \text{if } -\frac{\mu}{2} < \lambda_i(x) < \frac{\mu}{2}, \end{cases} \]
and
\[ \frac{\partial \Phi_2(\mu,x)}{\partial x} = \begin{cases} d_2\, I & \text{if } x_2 = 0, \\ \begin{bmatrix} b_2 & c_2 \bar x_2^T \\ c_2 \bar x_2 & a_2 I + (b_2 - a_2)\bar x_2\bar x_2^T \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]

with
\[ b_2 = \begin{cases} 0 & \text{if } \lambda_2(x) \geq \frac{\mu}{2} > -\frac{\mu}{2} \geq \lambda_1(x), \\ 1 & \text{if } \lambda_2(x) \geq \lambda_1(x) \geq \frac{\mu}{2}, \\ \frac{\lambda_1(x)}{\mu} + \frac{1}{2} & \text{if } \lambda_2(x) \geq \frac{\mu}{2} > \lambda_1(x) > -\frac{\mu}{2}, \\ \frac{\lambda_1(x) + \lambda_2(x)}{\mu} & \text{if } \frac{\mu}{2} > \lambda_2(x) \geq \lambda_1(x) > -\frac{\mu}{2}, \\ \frac{\lambda_2(x)}{\mu} - \frac{1}{2} & \text{if } \frac{\mu}{2} > \lambda_2(x) > -\frac{\mu}{2} \geq \lambda_1(x), \\ -1 & \text{if } \lambda_1(x) \leq \lambda_2(x) \leq -\frac{\mu}{2}, \end{cases} \qquad c_2 = \begin{cases} 1 & \text{if } \lambda_2(x) \geq \frac{\mu}{2} > -\frac{\mu}{2} \geq \lambda_1(x), \\ 0 & \text{if } \lambda_2(x) \geq \lambda_1(x) \geq \frac{\mu}{2}, \\ \frac{1}{2} - \frac{\lambda_1(x)}{\mu} & \text{if } \lambda_2(x) \geq \frac{\mu}{2} > \lambda_1(x) > -\frac{\mu}{2}, \\ \frac{\lambda_2(x) - \lambda_1(x)}{\mu} & \text{if } \frac{\mu}{2} > \lambda_2(x) \geq \lambda_1(x) > -\frac{\mu}{2}, \\ \frac{\lambda_2(x)}{\mu} + \frac{1}{2} & \text{if } \frac{\mu}{2} > \lambda_2(x) > -\frac{\mu}{2} \geq \lambda_1(x), \\ 0 & \text{if } \lambda_1(x) \leq \lambda_2(x) \leq -\frac{\mu}{2}, \end{cases} \]
\[ d_2 = \begin{cases} 1 & \text{if } x_1 \geq \frac{\mu}{2}, \\ \frac{2x_1}{\mu} & \text{if } -\frac{\mu}{2} < x_1 < \frac{\mu}{2}, \\ -1 & \text{if } x_1 \leq -\frac{\mu}{2}. \end{cases} \]

(3) The Jacobian of $\Phi_3$ is characterized as below:
\[ \frac{\partial \Phi_3(\mu,x)}{\partial \mu} = \frac{4\mu}{\sqrt{4\mu^2 + \lambda_1(x)^2}}\, u_x^{(1)} + \frac{4\mu}{\sqrt{4\mu^2 + \lambda_2(x)^2}}\, u_x^{(2)} \]
and
\[ \frac{\partial \Phi_3(\mu,x)}{\partial x} = \begin{cases} \dfrac{x_1}{\sqrt{4\mu^2 + x_1^2}}\, I & \text{if } x_2 = 0, \\[6pt] \begin{bmatrix} b_3 & c_3 \bar x_2^T \\ c_3 \bar x_2 & a_3 I + (b_3 - a_3)\bar x_2\bar x_2^T \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]
with
\[ b_3 = \frac{1}{2}\left( \frac{\lambda_2(x)}{\sqrt{4\mu^2+\lambda_2(x)^2}} + \frac{\lambda_1(x)}{\sqrt{4\mu^2+\lambda_1(x)^2}} \right), \qquad c_3 = \frac{1}{2}\left( \frac{\lambda_2(x)}{\sqrt{4\mu^2+\lambda_2(x)^2}} - \frac{\lambda_1(x)}{\sqrt{4\mu^2+\lambda_1(x)^2}} \right). \]

(4) The Jacobian of $\Phi_4$ is characterized as below:
\[ \frac{\partial \Phi_4(\mu,x)}{\partial \mu} = \frac{\partial \varphi_4(\mu,\lambda_1(x))}{\partial \mu}\, u_x^{(1)} + \frac{\partial \varphi_4(\mu,\lambda_2(x))}{\partial \mu}\, u_x^{(2)} \]
with
\[ \frac{\partial \varphi_4(\mu,\lambda_i(x))}{\partial \mu} = \begin{cases} -\frac{1}{2} & \text{if } \lambda_i(x) > \mu, \\ -\frac{\lambda_i(x)^2}{2\mu^2} & \text{if } |\lambda_i(x)| \leq \mu, \\ -\frac{1}{2} & \text{if } \lambda_i(x) < -\mu, \end{cases} \]
and
\[ \frac{\partial \Phi_4(\mu,x)}{\partial x} = \begin{cases} e_4\, I & \text{if } x_2 = 0, \\ \begin{bmatrix} b_4 & c_4 \bar x_2^T \\ c_4 \bar x_2 & a_4 I + (b_4 - a_4)\bar x_2\bar x_2^T \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]
with
\[ b_4 = \begin{cases} 0 & \text{if } \lambda_2(x) > \mu > -\mu > \lambda_1(x), \\ 1 & \text{if } \lambda_2(x) \geq \lambda_1(x) > \mu, \\ \frac{\lambda_1(x)}{2\mu} + \frac{1}{2} & \text{if } \lambda_2(x) > \mu \geq \lambda_1(x) \geq -\mu, \\ \frac{\lambda_1(x)+\lambda_2(x)}{2\mu} & \text{if } \mu \geq \lambda_2(x) \geq \lambda_1(x) \geq -\mu, \\ \frac{\lambda_2(x)}{2\mu} - \frac{1}{2} & \text{if } \mu \geq \lambda_2(x) \geq -\mu > \lambda_1(x), \\ -1 & \text{if } \lambda_1(x) \leq \lambda_2(x) < -\mu, \end{cases} \qquad c_4 = \begin{cases} 1 & \text{if } \lambda_2(x) > \mu > -\mu > \lambda_1(x), \\ 0 & \text{if } \lambda_2(x) \geq \lambda_1(x) > \mu, \\ \frac{1}{2} - \frac{\lambda_1(x)}{2\mu} & \text{if } \lambda_2(x) > \mu \geq \lambda_1(x) \geq -\mu, \\ \frac{\lambda_2(x)-\lambda_1(x)}{2\mu} & \text{if } \mu \geq \lambda_2(x) \geq \lambda_1(x) \geq -\mu, \\ \frac{\lambda_2(x)}{2\mu} + \frac{1}{2} & \text{if } \mu \geq \lambda_2(x) \geq -\mu > \lambda_1(x), \\ 0 & \text{if } \lambda_1(x) \leq \lambda_2(x) < -\mu, \end{cases} \]
\[ e_4 = \begin{cases} 1 & \text{if } x_1 > \mu, \\ \frac{x_1}{\mu} & \text{if } |x_1| \leq \mu, \\ -1 & \text{if } x_1 < -\mu. \end{cases} \]

(5) The Jacobian of $\Phi_5$ is characterized as below:
\[ \frac{\partial \Phi_5(\mu,x)}{\partial \mu} = \frac{\partial \varphi_5(\mu,\lambda_1(x))}{\partial \mu}\, u_x^{(1)} + \frac{\partial \varphi_5(\mu,\lambda_2(x))}{\partial \mu}\, u_x^{(2)} \]
with
\[ \frac{\partial \varphi_5(\mu,\lambda_i(x))}{\partial \mu} = \begin{cases} 0 & \text{if } |\lambda_i(x)| > \mu, \\ \frac{3}{8}\left(1 - \frac{\lambda_i(x)^2}{\mu^2}\right)^2 & \text{if } |\lambda_i(x)| \leq \mu, \end{cases} \]
and
\[ \frac{\partial \Phi_5(\mu,x)}{\partial x} = \begin{cases} e_5\, I & \text{if } x_2 = 0, \\ \begin{bmatrix} b_5 & c_5 \bar x_2^T \\ c_5 \bar x_2 & a_5 I + (b_5 - a_5)\bar x_2\bar x_2^T \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]

with
\[ b_5 = \begin{cases} 0 & \text{if } \lambda_2(x) > \mu > -\mu > \lambda_1(x), \\ 1 & \text{if } \lambda_2(x) \geq \lambda_1(x) > \mu, \\ -\frac{\lambda_1(x)^3}{4\mu^3} + \frac{3\lambda_1(x)}{4\mu} + \frac{1}{2} & \text{if } \lambda_2(x) > \mu \geq \lambda_1(x) \geq -\mu, \\ -\frac{\lambda_1(x)^3+\lambda_2(x)^3}{4\mu^3} + \frac{3(\lambda_1(x)+\lambda_2(x))}{4\mu} & \text{if } \mu \geq \lambda_2(x) \geq \lambda_1(x) \geq -\mu, \\ -\frac{\lambda_2(x)^3}{4\mu^3} + \frac{3\lambda_2(x)}{4\mu} - \frac{1}{2} & \text{if } \mu \geq \lambda_2(x) \geq -\mu > \lambda_1(x), \\ -1 & \text{if } \lambda_1(x) \leq \lambda_2(x) < -\mu, \end{cases} \qquad c_5 = \begin{cases} 1 & \text{if } \lambda_2(x) > \mu > -\mu > \lambda_1(x), \\ 0 & \text{if } \lambda_2(x) \geq \lambda_1(x) > \mu, \\ \frac{1}{2} + \frac{\lambda_1(x)^3}{4\mu^3} - \frac{3\lambda_1(x)}{4\mu} & \text{if } \lambda_2(x) > \mu \geq \lambda_1(x) \geq -\mu, \\ -\frac{\lambda_2(x)^3-\lambda_1(x)^3}{4\mu^3} + \frac{3(\lambda_2(x)-\lambda_1(x))}{4\mu} & \text{if } \mu \geq \lambda_2(x) \geq \lambda_1(x) \geq -\mu, \\ -\frac{\lambda_2(x)^3}{4\mu^3} + \frac{3\lambda_2(x)}{4\mu} + \frac{1}{2} & \text{if } \mu \geq \lambda_2(x) \geq -\mu > \lambda_1(x), \\ 0 & \text{if } \lambda_1(x) \leq \lambda_2(x) < -\mu, \end{cases} \]
\[ e_5 = \begin{cases} 1 & \text{if } x_1 > \mu, \\ -\frac{x_1^3}{2\mu^3} + \frac{3x_1}{2\mu} & \text{if } |x_1| \leq \mu, \\ -1 & \text{if } x_1 < -\mu. \end{cases} \]

(6) The Jacobian of $\Phi_6$ is characterized as below:
\[ \frac{\partial \Phi_6(\mu,x)}{\partial \mu} = \sqrt{\frac{2}{\pi}}\, e^{-\frac{\lambda_1(x)^2}{2\mu^2}}\, u_x^{(1)} + \sqrt{\frac{2}{\pi}}\, e^{-\frac{\lambda_2(x)^2}{2\mu^2}}\, u_x^{(2)} \]
and
\[ \frac{\partial \Phi_6(\mu,x)}{\partial x} = \begin{cases} \operatorname{erf}\!\left(\frac{x_1}{\sqrt{2}\mu}\right) I & \text{if } x_2 = 0, \\[4pt] \begin{bmatrix} b_6 & c_6 \bar x_2^T \\ c_6 \bar x_2 & a_6 I + (b_6 - a_6)\bar x_2\bar x_2^T \end{bmatrix} & \text{if } x_2 \neq 0, \end{cases} \]
with
\[ b_6 = \frac{1}{2}\left( \operatorname{erf}\!\left(\frac{\lambda_2(x)}{\sqrt{2}\mu}\right) + \operatorname{erf}\!\left(\frac{\lambda_1(x)}{\sqrt{2}\mu}\right) \right), \qquad c_6 = \frac{1}{2}\left( \operatorname{erf}\!\left(\frac{\lambda_2(x)}{\sqrt{2}\mu}\right) - \operatorname{erf}\!\left(\frac{\lambda_1(x)}{\sqrt{2}\mu}\right) \right). \]

3 Smoothing Newton method

In this section, we study the smoothing Newton algorithm based on the smoothing functions $\Phi_i(\mu,x)$ for $i \in \{1, 2, \dots, 6\}$ to solve the SOCAVE (2), and show its convergence properties. First, we present the generic framework of the smoothing Newton algorithm.

Algorithm 3.1 (A Smoothing Newton Algorithm).

Step 0. Choose $\delta \in (0,1)$, $\sigma \in (0,1)$, $\mu_0 \in \mathbb{R}_{++}$, and $x^0 \in \mathbb{R}^n$. Set $z^0 := (\mu_0, x^0)$ and $\bar e := (1, 0, \dots, 0)^T \in \mathbb{R} \times \mathbb{R}^n$. Choose $\beta > 1$ satisfying $\left(\min\{1, \|H_i(z^0)\|\}\right)^2 \leq \beta \mu_0$. Set $k := 0$.

Step 1. If $\|H_i(z^k)\| = 0$, stop. Otherwise, set $\tau_k := \min\{1, \|H_i(z^k)\|\}$.

Step 2. Compute $\Delta z^k = (\Delta \mu_k, \Delta x^k) \in \mathbb{R} \times \mathbb{R}^n$ from
\[ H_i(z^k) + H_i'(z^k)\, \Delta z^k = \frac{1}{\beta}\, \tau_k^2\, \bar e, \tag{24} \]
where $H_i'(z^k)$ denotes the Jacobian matrix of $H_i$ at $z^k = (\mu_k, x^k)$ given by (23).

Step 3. Let $\alpha_k$ be the maximum of the values $1, \delta, \delta^2, \dots$ such that
\[ \|H_i(z^k + \alpha_k \Delta z^k)\| \leq \left[1 - \sigma\left(1 - \frac{1}{\beta}\right)\alpha_k\right] \|H_i(z^k)\|. \tag{25} \]

Step 4. Set $z^{k+1} := z^k + \alpha_k \Delta z^k$ and $k := k + 1$. Go to Step 1.

Theorem 2.1 indicates that the Newton equation (24) in Algorithm 3.1 is solvable. This paves the way to show that the line search (25) in Algorithm 3.1 is well defined, which is presented in Theorem 3.1 below. More specifically, from [17, Lemma 3.1], we know that there exists $\bar\alpha \in (0, 1]$ such that (25) holds for any $\alpha \in (0, \bar\alpha]$. This indicates that taking $\alpha_k$ as the maximum of $\{1, \delta, \delta^2, \dots\}$ satisfying (25), where $\delta \in (0,1)$, makes Step 3 well defined. Indeed, the detailed arguments are very similar to those in [16, 18, 36]; we only state the result here and omit its proof.

Theorem 3.1. Consider the SOCAVE (2) with $\sigma_{\min}(A) > \sigma_{\max}(B)$. Then, for $\Delta z \in \mathbb{R} \times \mathbb{R}^n$ given by (24), the line search (25) is well defined.
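For concreteness, here is a self-contained sketch of Algorithm 3.1 in NumPy (our own illustrative implementation, not the authors' Matlab code), instantiated with $\varphi_3$ from (17); the Newton system (24) and the line search (25) appear exactly as in the steps above:

```python
import numpy as np

phi   = lambda mu, t: np.sqrt(4 * mu**2 + t**2)          # phi_3, eq. (17)
phi_t = lambda mu, t: t / np.sqrt(4 * mu**2 + t**2)
phi_m = lambda mu, t: 4 * mu / np.sqrt(4 * mu**2 + t**2)

def spectral(x):
    nrm = np.linalg.norm(x[1:])
    w = x[1:] / nrm if nrm > 0 else np.r_[1.0, np.zeros(len(x) - 2)]
    return x[0] - nrm, x[0] + nrm, 0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w], w, nrm

def H_and_jac(mu, x, A, B, b):
    """H_i(z) of (22) and its Jacobian (23), built from Proposition 2.1."""
    n = len(x)
    l1, l2, u1, u2, w, nrm = spectral(x)
    Hval = np.r_[mu, A @ x + B @ (phi(mu, l1) * u1 + phi(mu, l2) * u2) - b]
    if nrm == 0.0:
        J = phi_t(mu, x[0]) * np.eye(n)
    else:
        a  = (phi(mu, l2) - phi(mu, l1)) / (l2 - l1)
        bb = 0.5 * (phi_t(mu, l2) + phi_t(mu, l1))
        cc = 0.5 * (phi_t(mu, l2) - phi_t(mu, l1))
        J = np.zeros((n, n))
        J[0, 0], J[0, 1:], J[1:, 0] = bb, cc * w, cc * w
        J[1:, 1:] = a * np.eye(n - 1) + (bb - a) * np.outer(w, w)
    Hp = np.zeros((n + 1, n + 1))
    Hp[0, 0] = 1.0
    Hp[1:, 0] = B @ (phi_m(mu, l1) * u1 + phi_m(mu, l2) * u2)
    Hp[1:, 1:] = A + B @ J
    return Hval, Hp

def smoothing_newton(A, B, b, x0, mu0=0.1, delta=0.5, sigma=1e-5, tol=1e-6):
    mu, x = mu0, x0.astype(float)
    Hval, Hp = H_and_jac(mu, x, A, B, b)
    beta = max(1.0, 1.01 * min(1.0, np.linalg.norm(Hval))**2 / mu0)   # Step 0
    for _ in range(100):
        nH = np.linalg.norm(Hval)
        if nH <= tol:
            break
        tau = min(1.0, nH)                                # Step 1
        rhs = -Hval; rhs[0] += tau**2 / beta              # Newton eq. (24)
        dz = np.linalg.solve(Hp, rhs)
        alpha = 1.0
        while True:                                       # line search (25)
            Hnew, Hpnew = H_and_jac(mu + alpha * dz[0], x + alpha * dz[1:], A, B, b)
            if np.linalg.norm(Hnew) <= (1 - sigma * (1 - 1 / beta) * alpha) * nH:
                break
            if alpha < 1e-12:                             # numerical safeguard
                break
            alpha *= delta
        mu, x = mu + alpha * dz[0], x + alpha * dz[1:]
        Hval, Hp = Hnew, Hpnew
    return x

# small random instance with sigma_min(A) > sigma_max(B)
rng = np.random.default_rng(0)
n = 5
B = rng.uniform(-1, 1, (n, n))
A = (np.linalg.norm(B, 2) + 1.0) * np.eye(n)
b = rng.uniform(0, 1, n)
x = smoothing_newton(A, B, b, rng.uniform(0, 1, n))
l1, l2, u1, u2, w, nrm = spectral(x)
print(np.linalg.norm(A @ x + B @ (abs(l1) * u1 + abs(l2) * u2) - b))  # small (~1e-6)
```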

Next, we discuss the convergence of Algorithm 3.1. To this end, we need the following results, whose arguments are similar to the ones in [18, Remark 2.1]. In particular, for Theorem 3.2(d), we provide a proof in light of the structure of each $\varphi_i$ so that the reader can look into the analytic difference among them.

Theorem 3.2. Consider the SOCAVE (2) with $\sigma_{\min}(A) > \sigma_{\max}(B)$. Let $H_i$ be defined as in (22). Suppose that the sequence $\{z^k\}$ is generated by Algorithm 3.1. Then, the following results hold.
(a) The sequences $\{\|H_i(z^k)\|\}$ and $\{\tau_k\}$ are monotonically non-increasing.
(b) $\beta \mu_k \geq \tau_k^2$ for all $k$.
(c) The sequence $\{\mu_k\}$ is monotonically non-increasing and $\mu_k > 0$ for all $k$.
(d) The sequence $\{z^k\}$ is bounded.

Proof. (a) From the definition of the line search (25) and $\tau_k := \min\{1, \|H_i(z^k)\|\}$, it is clear that $\{\|H_i(z^k)\|\}$ and $\{\tau_k\}$ are monotonically non-increasing.

(b) We prove this conclusion by induction. First, by Algorithm 3.1, it is clear that $\tau_0^2 \leq \beta \mu_0$ with $\tau_0$, $\beta$ and $\mu_0$ chosen in Algorithm 3.1. Secondly, we suppose that $\tau_k^2 \leq \beta \mu_k$ for some $k$. Then, for $k+1$, we have
\[ \mu_{k+1} - \frac{\tau_{k+1}^2}{\beta} = \mu_k + \alpha_k \Delta\mu_k - \frac{\tau_{k+1}^2}{\beta} = (1-\alpha_k)\mu_k + \alpha_k\frac{\tau_k^2}{\beta} - \frac{\tau_{k+1}^2}{\beta} \geq (1-\alpha_k)\frac{\tau_k^2}{\beta} + \alpha_k\frac{\tau_k^2}{\beta} - \frac{\tau_{k+1}^2}{\beta} \geq 0, \]
where the second equality holds due to the Newton equation (24), and the last inequality holds due to part (a). Hence, it follows that $\beta \mu_k \geq \tau_k^2$ for all $k$.

(c) From the iterative scheme $z^{k+1} = z^k + \alpha_k \Delta z^k$, we know $\mu_{k+1} = \mu_k + \alpha_k \Delta\mu_k$. By the Newton equation (24) and the line search (25) again, it follows that
\[ \mu_{k+1} = (1-\alpha_k)\mu_k + \alpha_k\frac{\tau_k^2}{\beta} \geq (1-\alpha_k)\frac{\tau_k^2}{\beta} + \alpha_k\frac{\tau_k^2}{\beta} > 0 \]
for all $k$. On the other hand, we have
\[ \mu_{k+1} = (1-\alpha_k)\mu_k + \alpha_k\frac{\tau_k^2}{\beta} \leq (1-\alpha_k)\mu_k + \alpha_k\mu_k = \mu_k, \]
where the inequality holds due to part (b). Hence, the sequence $\{\mu_k\}$ is monotonically non-increasing and $\mu_k > 0$ for all $k$.

(d) From part (a), we know the sequence $\{\|H_i(z^k)\|\}$ is bounded, which means there is a constant $C$ such that $\|H_i(z^k)\| \leq C$. Thus,
\[ C \geq \|H_i(z^k)\| \geq \|A x^k + B\Phi_i(\mu_k, x^k) - b\| \geq \|A x^k\| - \|B\Phi_i(\mu_k, x^k)\| - \|b\| \]
\[ \geq \sqrt{\lambda_{\min}(A^TA)}\,\|x^k\| - \sqrt{\lambda_{\max}(B^TB)}\,\|\Phi_i(\mu_k,x^k)\| - \|b\| \]
\[ = \sqrt{\lambda_{\min}(A^TA)}\,\|x^k\| - \sqrt{\lambda_{\max}(B^TB)}\,\sqrt{\tfrac{1}{2}\left[\varphi_i(\mu_k,\lambda_1(x^k))^2 + \varphi_i(\mu_k,\lambda_2(x^k))^2\right]} - \|b\|, \]
where the last equality uses $\|\varphi_i(\mu,\lambda_1(x))\, u_x^{(1)} + \varphi_i(\mu,\lambda_2(x))\, u_x^{(2)}\|^2 = \frac{1}{2}\left[\varphi_i(\mu,\lambda_1(x))^2 + \varphi_i(\mu,\lambda_2(x))^2\right]$. On the other hand, for $i = 1, 2, 3, 4, 5$, we can write
\[ \varphi_i(\mu_k,\lambda_1(x^k))^2 + \varphi_i(\mu_k,\lambda_2(x^k))^2 = \sum_{j=1}^{2} f_i(\mu_k, \lambda_j(x^k)) + \lambda_1(x^k)^2 + \lambda_2(x^k)^2, \]
where $f_i(\mu,t) := \varphi_i(\mu,t)^2 - t^2$.

(i) For $i = 1$, we have $f_1(\mu_k, \lambda_j(x^k)) = \mu_k^2\, g\!\left(\lambda_j(x^k)/\mu_k\right)$, where
\[ g(t) = 4|t|\ln\left(e^{-|t|} + 1\right) + 4\ln^2\left(e^{-|t|} + 1\right). \]
It is known that the function $g$ is bounded for all $t \in \mathbb{R}$. It follows that there exists $N_1$ such that $\sum_{j=1}^{2} f_1(\mu_k, \lambda_j(x^k)) \leq \mu_k^2 N_1$.

(ii) For $i = 2, 4, 5$, it is easy to verify that there exist $N_i$ such that $\sum_{j=1}^{2} f_i(\mu_k, \lambda_j(x^k)) \leq \mu_k^2 N_i$. For $i = 3$, we have $\sum_{j=1}^{2} f_3(\mu_k, \lambda_j(x^k)) = 8\mu_k^2 =: \mu_k^2 N_3$. Consequently,
\[ \varphi_i(\mu_k,\lambda_1(x^k))^2 + \varphi_i(\mu_k,\lambda_2(x^k))^2 \leq \mu_k^2 N_i + 2\|x^k\|^2, \]
which yields
\[ \sqrt{\tfrac{1}{2}\left[\varphi_i(\mu_k,\lambda_1(x^k))^2 + \varphi_i(\mu_k,\lambda_2(x^k))^2\right]} \leq \sqrt{\tfrac{N_i}{2}}\,\mu_k + \|x^k\|. \]
This together with $\|H_i(z^k)\| \leq C$ implies that
\[ \|x^k\| \leq \frac{C + \sqrt{\lambda_{\max}(B^TB)}\,\sqrt{\tfrac{N_i}{2}}\,\mu_k + \|b\|}{\sqrt{\lambda_{\min}(A^TA)} - \sqrt{\lambda_{\max}(B^TB)}} \]

holds for all $k$. Thus, the sequence $\{x^k\}$ is bounded for $i = 1, \dots, 5$.

(iii) For $i = 6$, we know that $|\operatorname{erf}(t)| \leq 1$ and $0 < e^{-t^2} \leq 1$. Thus, $\varphi_6(\mu,t) \leq |t| + \sqrt{2/\pi}\,\mu$, which leads to
\[ \varphi_6(\mu_k,\lambda_1(x^k))^2 + \varphi_6(\mu_k,\lambda_2(x^k))^2 \leq \left(|\lambda_1(x^k)| + \sqrt{\tfrac{2}{\pi}}\mu_k\right)^2 + \left(|\lambda_2(x^k)| + \sqrt{\tfrac{2}{\pi}}\mu_k\right)^2 \leq 2\left(\sqrt{\tfrac{2}{\pi}}\mu_k + \|x^k\|\right)^2, \]
where the last inequality is due to $|\lambda_1(x^k)| + |\lambda_2(x^k)| \leq 2\|x^k\|$. Then, it follows that
\[ \|x^k\| \leq \frac{C + \sqrt{\lambda_{\max}(B^TB)}\,\sqrt{\tfrac{2}{\pi}}\,\mu_k + \|b\|}{\sqrt{\lambda_{\min}(A^TA)} - \sqrt{\lambda_{\max}(B^TB)}} \]
holds for all $k$. Thus, the sequence $\{x^k\}$ is bounded in this case as well. From all the above, the proof is complete.

Now, we show that any sequence $\{z^k\}$ generated by Algorithm 3.1 converges to a solution of the SOCAVE (2). We demonstrate this in the next theorem under our assumptions. The proof is essentially similar to [36, Theorem 4.1] (see also [13]); hence, we omit the detailed proof and only present the convergence result.

Theorem 3.3. Consider the SOCAVE (2) with $\sigma_{\min}(A) > \sigma_{\max}(B)$. Suppose that $\{z^k\}$ is generated by Algorithm 3.1. Then, any accumulation point of $\{z^k\}$ is a solution to the SOCAVE (2).

Algorithm 3.1 possesses a local quadratic convergence rate. In fact, this can be achieved by arguments similar to those in [36] and [40, Theorem 8].

Theorem 3.4. Consider the SOCAVE (2) with $\sigma_{\min}(A) > \sigma_{\max}(B)$. Let $H_i$ be defined as in (22) and let $z^*$ be the unique solution to the SOCAVE (2). Suppose that all $V \in \partial H_i(z^*)$ are nonsingular. Then, the whole sequence $\{z^k\}$ converges to $z^*$, and $\|z^{k+1} - z^*\| = O(\|z^k - z^*\|^2)$.

4 Numerical Results

In this section, we report numerical results on five examples to evaluate the efficiency of Algorithm 3.1. First, in our experiments, we set the parameters as
\[ \mu_0 = 0.1, \quad x^0 = \mathrm{rand}(n,1), \quad \delta = 0.5, \quad \sigma = 10^{-5}, \quad \beta = \max\left(1,\ 1.01\,\tau_0^2/\mu_0\right). \]

We stop the iterations when $\|H(z^k)\| \leq 10^{-6}$ or when the number of iterations exceeds 100. All the experiments were done on a PC with an Intel(R) CPU at 2.40GHz and 4.00GB of RAM, and all the programming codes were written in Matlab and run in the Matlab environment. For each problem, we implement the smoothing Newton Algorithm 3.1 with the six smoothing functions $\varphi_1(\mu,t), \dots, \varphi_6(\mu,t)$, respectively. Each problem is randomly generated 50 times and the average results are listed in the tables, where n denotes the size of the problem, itn denotes the average number of iterations, time denotes the average CPU time in seconds, and fails means the number of failures.

Secondly, in order to compare the performance of the smoothing functions $\varphi_i(\mu,t)$, $i = 1, \dots, 6$, in the smoothing Newton Algorithm 3.1, we adopt the performance profile introduced in [7]. In other words, we regard Algorithm 3.1 corresponding to a smoothing function $\varphi_i(\mu,t)$ as a solver, and assume that there are $n_s$ solvers and $n_p$ test problems from the test set $P$, which is generated randomly. We are interested in using the iteration number and the computing time as performance measures for Algorithm 3.1 with different $\varphi_i(\mu,t)$. For each problem $p$ and solver $s$, let $f_{p,s}$ denote the iteration number (or computing time) required to solve problem $p$ by solver $s$, and employ the performance ratio
\[ r_{p,s} := \frac{f_{p,s}}{\min\{f_{p,s} : s \in S\}}, \]
where $S$ is the set of solvers. We assume that a parameter $r_M$ with $r_{p,s} \leq r_M$ for all $p, s$ is chosen, and $r_{p,s} = r_M$ if and only if solver $s$ does not solve problem $p$. In order to obtain an overall assessment for each solver, we define
\[ \rho_s(\tau) := \frac{1}{n_p}\,\mathrm{size}\{ p \in P : r_{p,s} \leq \tau \}, \]
which is called the performance profile of solver $s$. Then, $\rho_s(\tau)$ is the probability for solver $s \in S$ that the performance ratio $r_{p,s}$ is within a factor $\tau \in \mathbb{R}$ of the best possible ratio. The performance profiles of each problem are depicted in the figures below.
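The profile $\rho_s(\tau)$ is easy to compute from a matrix of measurements. Here is a small sketch (ours; NumPy with matplotlib assumed, and random placeholder data rather than the paper's results):

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(F, taus):
    """F[p, s]: iterations (or time) of solver s on problem p; use np.inf to
    mark a failure, so a failed run exceeds every finite ratio tau."""
    ratios = F / F.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(F.shape[1])])

rng = np.random.default_rng(0)
F = rng.uniform(5, 15, size=(50, 6))     # 50 problems, 6 solvers (phi_1..phi_6)
taus = np.linspace(1.0, 3.0, 200)
rho = performance_profile(F, taus)
for s in range(6):
    plt.plot(taus, rho[s], label=f"phi{s+1}")
plt.xlabel("tau"); plt.ylabel("rho_s(tau)"); plt.legend(); plt.show()
```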

Problem 4.1. Consider the SOCAVE (2) which is generated in the following way: first choose two random matrices $B, C \in \mathbb{R}^{n\times n}$ with entries uniformly distributed on $[-10, 10]$. We compute the maximal singular value $\sigma_1$ of $B$ and the minimal singular value $\sigma_2$ of $C$, and let $\sigma := \min\{1, \sigma_2/\sigma_1\}$. Next, we divide $C$ by $\sigma$ multiplied by a random number in the interval $(0, 1)$, and the resulting matrix is denoted by $A$. Accordingly, the minimal singular value of $A$ exceeds the maximal singular value of $B$, and the resulting SOCAVE (2) is uniquely solvable. We choose $b \in \mathbb{R}^n$ randomly with entries on $[0, 1]$. The initial point is chosen in the range $[0, 1]$ entry-wise. Note that a similar way to construct the problem was given in [36].

[Figure 1: Performance profile of iteration numbers for Problem 4.1, comparing $\varphi_1, \dots, \varphi_6$; the subplot zooms in on the upper-left part.]

[Figure 2: Performance profile of computing time for Problem 4.1, comparing $\varphi_1, \dots, \varphi_6$; the subplot zooms in on the lower-left part.]

Table 1 and Figures 1-2 show that the function $\varphi_1(\mu,t)$ performs worst, whereas the difference among the other functions is very slight. Figure 1 demonstrates the performance profile of iteration numbers for Problem 4.1, and Figure 2 shows the performance profile of computing time. From these figures, we can see that the performance of the function $\varphi_1(\mu,t)$ is the worst one in both measures.
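A sketch of the Problem 4.1 generator follows (our reading of the construction above; NumPy assumed, names ours):

```python
import numpy as np

def gen_problem41(n, rng):
    """Random SOCAVE instance with sigma_min(A) > sigma_max(B), cf. Problem 4.1."""
    B = rng.uniform(-10, 10, (n, n))
    C = rng.uniform(-10, 10, (n, n))
    sigma1 = np.linalg.svd(B, compute_uv=False).max()   # largest singular value of B
    sigma2 = np.linalg.svd(C, compute_uv=False).min()   # smallest singular value of C
    sigma = min(1.0, sigma2 / sigma1)
    A = C / (sigma * rng.uniform(0, 1))                 # lifts sigma_min(A) above sigma_max(B)
    b = rng.uniform(0, 1, n)
    return A, B, b

rng = np.random.default_rng(0)
A, B, b = gen_problem41(10, rng)
assert np.linalg.svd(A, compute_uv=False).min() > np.linalg.svd(B, compute_uv=False).max()
```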

[Table 1: Numerical results for Problem 4.1. For each of $\varphi_1, \dots, \varphi_6$ and each problem size n, the table reports the average iteration number itn, the average CPU time in seconds, and the number of failures fails.]

Problem 4.2. Consider the SOCAVE (2) which is generated in the following way: choose two random matrices $C, D \in \mathbb{R}^{n\times n}$ with entries uniformly distributed on $[-10, 10]$, and compute their singular value decompositions $C := U_1 S_1 V_1^T$ and $D := U_2 S_2 V_2^T$ with diagonal matrices $S_1$, $S_2$ and unitary matrices $U_1, V_1, U_2, V_2$. Then, we choose random vectors $b, c \in \mathbb{R}^n$ with entries on $[0, 10]$. Next, we take $a \in \mathbb{R}^n$ by setting $a_i = c_i + 10$ for all $i \in \{1, \dots, n\}$, so that $\min_i a_i \geq \max_i b_i$. Set $A := U_1 \mathrm{Diag}(a) V_1^T$ and $B := U_2 \mathrm{Diag}(b) V_2^T$, where $\mathrm{Diag}(a)$ denotes the diagonal matrix with $i$-th diagonal element $a_i$. The gap between the minimal singular value of $A$ and the maximal singular value of $B$ is limited and can be very small. We choose the right-hand side vector randomly in $[0, 10]$. The initial point is chosen in the range $[0, 1]$ entry-wise.

[Figure 3: Performance profile of iteration numbers for Problem 4.2.]

[Figure 4: Performance profile of computing time for Problem 4.2.]

For Problem 4.2, as depicted in Figures 3-4 and Table 2, all the smoothing functions $\varphi_i(\mu,t)$, $i = 1, \dots, 6$, perform very well, without any discrepancy.

[Table 2: Numerical results for Problem 4.2, in the same format as Table 1.]

Problem 4.3. Consider the SOCAVE (2) which is generated in the following way: choose two random matrices $A, B \in \mathbb{R}^{n\times n}$ with entries uniformly distributed on $[-10, 10]$. In order to ensure that the SOCAVE (2) is solvable, we update the matrix $A$ as follows: let $[U, S, V] = \mathrm{svd}(A)$; if $\min_i S(i,i) = 0$, we replace $A$ by $U(S + E)V$ so that all singular values become positive, and then we rescale
\[ A := \sqrt{\frac{\lambda_{\max}(B^TB) + 0.01}{\lambda_{\min}(A^TA)}}\; A, \]
so that $\sigma_{\min}(A) > \sigma_{\max}(B)$. We choose $b \in \mathbb{R}^n$ randomly with entries on $[0, 10]$. The initial point is chosen in the range $[0, 1]$ entry-wise.

[Figure 5: Performance profile of iteration numbers for Problem 4.3.]

[Figure 6: Performance profile of computing time for Problem 4.3.]

For Problem 4.3, from Table 3 and Figures 5-6, we see that $\varphi_1(\mu,t)$ is obviously inferior to the other functions. Moreover, in terms of computing time, the function $\varphi_4(\mu,t)$ is the best one, followed by $\varphi_3(\mu,t)$. In summary, the function $\varphi_1(\mu,t)$ is still the worst performer.

Problem 4.4. We consider the SOCAVE (2) which is generated in the same way as Problem 4.1, but here the SOC is given by $\mathcal{K} := \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r}$, where $n_1 = \cdots = n_r = \frac{n}{r}$.
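To make the rescaling step in Problem 4.3 concrete, here is a sketch (ours; NumPy assumed, with the singular-value repair interpreted as adding the identity, which is an assumption on our part):

```python
import numpy as np

def gen_problem43(n, rng):
    """Random SOCAVE instance rescaled so sigma_min(A) > sigma_max(B), cf. Problem 4.3."""
    A = rng.uniform(-10, 10, (n, n))
    B = rng.uniform(-10, 10, (n, n))
    U, S, Vt = np.linalg.svd(A)
    if S.min() == 0.0:                        # repair a singular A
        A = U @ np.diag(S + 1.0) @ Vt         # S + E with E = I (our interpretation)
    lam_max_B = np.linalg.eigvalsh(B.T @ B).max()
    lam_min_A = np.linalg.eigvalsh(A.T @ A).min()
    A = np.sqrt((lam_max_B + 0.01) / lam_min_A) * A  # lam_min(A^T A) = lam_max(B^T B) + 0.01
    b = rng.uniform(0, 10, n)
    return A, B, b

rng = np.random.default_rng(1)
A, B, b = gen_problem43(8, rng)
assert np.linalg.svd(A, compute_uv=False).min() > np.linalg.svd(B, compute_uv=False).max()
```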

[Table 3: Numerical results for Problem 4.3, in the same format as Table 1.]

[Figure 7: Performance profile of iteration numbers for Problem 4.4.]

[Figure 8: Performance profile of computing time for Problem 4.4.]

Figures 7-8 show the performance profiles for Problem 4.4. Indeed, the performance profiles are similar to those for Problem 4.1 (only the cone structure is different). Again, the function $\varphi_1(\mu,t)$ is still the worst performer and there is no significant difference among the other five smoothing functions.

Problem 4.5. We consider the SOCAVE (2) which is generated in the same way as Problem 4.3, but here the SOC is given by $\mathcal{K} := \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r}$, where $n_1 = \cdots = n_r = \frac{n}{r}$.

Figures 9-10 show the performance profiles for Problem 4.5. They verify the poor performance of the function $\varphi_1(\mu,t)$ one more time.

[Figure 9: Performance profile of iteration numbers for Problem 4.5.]

[Figure 10: Performance profile of computing time for Problem 4.5.]

In summary, the function $\varphi_1(\mu,t)$ is not a good choice to work with the smoothing Newton algorithm. Note that $\varphi_1(\mu,t)$ is related to the loss function widely used in engineering areas like machine learning. However, for the SOCAVE, the numerical performance of the function $\varphi_1(\mu,t)$ is always the worst one. This is a very interesting phenomenon and discovery. In other words, we may try to replace it by other smoothing functions in appropriate algorithms for real engineering problems. This will be part of our future investigations.

References

[1] C. Chen and O.L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Computational Optimization and Applications, vol. 5, pp. 97-138, 1996.

[2] L. Caccetta, B. Qu, and G.-L. Zhou, A globally and quadratically convergent method for absolute value equations, Computational Optimization and Applications, vol. 48, pp. 45-58, 2011.

[3] J.-S. Chen, X. Chen, and P. Tseng, Analysis of nonsmooth vector-valued functions associated with second-order cones, Mathematical Programming, vol. 101, pp. 95-117, 2004.

[4] J.-S. Chen and P. Tseng, An unconstrained smooth minimization reformulation of second-order cone complementarity problem, Mathematical Programming, vol. 104, pp. 293-327, 2005.

[5] J.-S. Chen, The convex and monotone functions associated with second-order cone, Optimization, vol. 55, pp. 363-385, 2006.

[6] J.-S. Chen and S.-H. Pan, A survey on SOC complementarity functions and solution methods for SOCPs and SOCCPs, Pacific Journal of Optimization, vol. 8, pp. 33-74, 2012.

[7] E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, vol. 91, pp. 201-213, 2002.

[8] J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford Mathematical Monographs, Oxford University Press, New York, 1994.

[9] M. Fukushima, Z.-Q. Luo, and P. Tseng, Smoothing functions for second-order cone complementarity problems, SIAM Journal on Optimization, vol. 12, pp. 436-460, 2002.

[10] F. Facchinei and J. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer, New York, 2003.

[11] L.-C. Kong, J. Sun, and N.-H. Xiu, A regularized smoothing Newton method for symmetric cone complementarity problems, SIAM Journal on Optimization, vol. 19, pp. 1028-1047, 2008.

[12] S.-L. Hu and Z.-H. Huang, A note on absolute value equations, Optimization Letters, vol. 4, pp. 417-424, 2010.

[13] S.-L. Hu, Z.-H. Huang, and P. Wang, A nonmonotone smoothing Newton algorithm for solving nonlinear complementarity problems, Optimization Methods and Software, vol. 24, pp. 447-460, 2009.

[14] S.-L. Hu, Z.-H. Huang, and Q. Zhang, A generalized Newton method for absolute value equations associated with second order cones, Journal of Computational and Applied Mathematics, vol. 235, pp. 1490-1501, 2011.

[15] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.

[16] Z.-H. Huang, Locating a maximally complementary solution of the monotone NCP by using non-interior-point smoothing algorithms, Mathematical Methods of Operations Research, vol. 61, pp. 41-55, 2005.

[17] Z.-H. Huang, J. Han, and Z. Chen, A predictor-corrector smoothing Newton algorithm, based on a new smoothing function, for solving the nonlinear complementarity problem with a P0 function, Journal of Optimization Theory and Applications, vol. 117, pp. 39-68, 2003.

[18] Z.-H. Huang, Y. Zhang, and W. Wu, A smoothing-type algorithm for solving system of inequalities, Journal of Computational and Applied Mathematics, vol. 220, pp. 355-363, 2008.

[19] X.-Q. Jiang, A smoothing Newton method for solving absolute value equations, Advanced Materials Research, 2013.

[20] X.-Q. Jiang and Y. Zhang, A smoothing-type algorithm for absolute value equations, Journal of Industrial and Management Optimization, vol. 9, pp. 789-798, 2013.

[21] S. Ketabchi and H. Moosaei, Minimum norm solution to the absolute value equation in the convex case, Journal of Optimization Theory and Applications, vol. 154, pp. 1080-1087, 2012.

[22] J. Kreimer and R.Y. Rubinstein, Nondifferentiable optimization via smooth approximation: General analytical approach, Annals of Operations Research, vol. 39, pp. 97-119, 1992.

[23] X.-H. Liu and W.-Z. Gu, Smoothing Newton algorithm based on a regularized one-parametric class of smoothing functions for generalized complementarity problems over symmetric cones, Journal of Industrial and Management Optimization, vol. 6, pp. 363-380, 2010.

[24] N. Lu and Z.-H. Huang, Convergence of a non-interior continuation algorithm for the monotone SCCP, Acta Mathematicae Applicatae Sinica (English Series), vol. 26, 2010.

[25] N. Lu and Y. Zhang, A smoothing-type algorithm for solving inequalities under the order induced by a symmetric cone, Journal of Inequalities and Applications, Article 2011:4, 2011.

[26] O.L. Mangasarian, Absolute value programming, Computational Optimization and Applications, vol. 36, pp. 43-53, 2007.

[27] O.L. Mangasarian, Absolute value equation solution via concave minimization, Optimization Letters, vol. 1, pp. 3-8, 2007.

[28] O.L. Mangasarian, A generalized Newton method for absolute value equations, Optimization Letters, vol. 3, pp. 101-108, 2009.

[29] O.L. Mangasarian, Knapsack feasibility as an absolute value equation solvable by successive linear programming, Optimization Letters, vol. 3, pp. 161-170, 2009.

[30] O.L. Mangasarian, Primal-dual bilinear programming solution of the absolute value equation, Optimization Letters, vol. 6, pp. 1527-1533, 2012.

[31] O.L. Mangasarian, Absolute value equation solution via dual complementarity, Optimization Letters, vol. 7, pp. 625-630, 2013.

[32] O.L. Mangasarian, Linear complementarity as absolute value equation solution, Optimization Letters, vol. 8, pp. 1529-1534, 2014.

[33] O.L. Mangasarian, Absolute value equation solution via linear programming, Journal of Optimization Theory and Applications, vol. 161, pp. 870-876, 2014.

[34] O.L. Mangasarian and R.R. Meyer, Absolute value equation, Linear Algebra and Its Applications, vol. 419, pp. 359-367, 2006.

[35] X.-H. Miao, W.-M. Hsu, and J.-S. Chen, The solvabilities of three optimization problems associated with second-order cone, submitted manuscript.

[36] X.-H. Miao, J.-T. Yang, B. Saheya, and J.-S. Chen, A smoothing Newton method for absolute value equation associated with second-order cone, Applied Numerical Mathematics, vol. 120, pp. 82-96, 2017.

[37] O.A. Prokopyev, On equivalent reformulations for absolute value equations, Computational Optimization and Applications, vol. 44, pp. 363-372, 2009.

[38] L. Qi and D. Sun, Smoothing functions and smoothing Newton method for complementarity and variational inequality problems, Journal of Optimization Theory and Applications, vol. 113, pp. 121-147, 2002.

[39] L. Qi and J. Sun, A nonsmooth version of Newton's method, Mathematical Programming, vol. 58, pp. 353-367, 1993.

[40] L. Qi, D. Sun, and G.-L. Zhou, A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequality problems, Mathematical Programming, vol. 87, pp. 1-35, 2000.

[41] J. Rohn, A theorem of the alternatives for the equation Ax + B|x| = b, Linear and Multilinear Algebra, vol. 52, pp. 421-426, 2004.

[42] J. Rohn, Solvability of systems of interval linear equations and inequalities, in: Linear Optimization Problems with Inexact Data (M. Fiedler, J. Nedoma, J. Ramik, J. Rohn, and K. Zimmermann, eds.), Springer, pp. 35-77, 2006.

[43] J. Rohn, An algorithm for solving the absolute value equation, Electronic Journal of Linear Algebra, vol. 18, pp. 589-599, 2009.

[44] B. Saheya, C.-H. Yu, and J.-S. Chen, Numerical comparisons based on four smoothing functions for absolute value equation, Journal of Applied Mathematics and Computing, vol. 56, pp. 131-149, 2018.

[45] S. Voronin, G. Ozkaya, and D. Yoshida, Convolution based smooth approximations to the absolute value function with application to non-smooth regularization, arXiv preprint, math.NA, July 2015.

[46] S. Yamanaka and M. Fukushima, A branch and bound method for the absolute value programs, Optimization, vol. 63, pp. 305-319, 2014.

[47] J.G. Zhu, H.W. Liu, and X.L. Li, A regularized smoothing-type algorithm for solving a system of inequalities with a P0-function, Journal of Computational and Applied Mathematics, vol. 233, 2010.


More information

An unconstrained smooth minimization reformulation of the second-order cone complementarity problem

An unconstrained smooth minimization reformulation of the second-order cone complementarity problem Math. Program., Ser. B 104, 93 37 005 Digital Object Identifier DOI 10.1007/s10107-005-0617-0 Jein-Shan Chen Paul Tseng An unconstrained smooth minimization reformulation of the second-order cone complementarity

More information

A Finite Newton Method for Classification Problems

A Finite Newton Method for Classification Problems A Finite Newton Method for Classification Problems O. L. Mangasarian Computer Sciences Department University of Wisconsin 1210 West Dayton Street Madison, WI 53706 olvi@cs.wisc.edu Abstract A fundamental

More information

Some characterizations for SOC-monotone and SOC-convex functions

Some characterizations for SOC-monotone and SOC-convex functions J Glob Optim 009 45:59 79 DOI 0.007/s0898-008-9373-z Some characterizations for SOC-monotone and SOC-convex functions Jein-Shan Chen Xin Chen Shaohua Pan Jiawei Zhang Received: 9 June 007 / Accepted: October

More information

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR P 0 LCPS YUN-BIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new non-interior-point

More information

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e

More information

Equivalence of Minimal l 0 and l p Norm Solutions of Linear Equalities, Inequalities and Linear Programs for Sufficiently Small p

Equivalence of Minimal l 0 and l p Norm Solutions of Linear Equalities, Inequalities and Linear Programs for Sufficiently Small p Equivalence of Minimal l 0 and l p Norm Solutions of Linear Equalities, Inequalities and Linear Programs for Sufficiently Small p G. M. FUNG glenn.fung@siemens.com R&D Clinical Systems Siemens Medical

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras

Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras Optimization Methods and Software Vol. 00, No. 00, January 2009, 1 14 Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras M. Seetharama Gowda a, J. Tao

More information

Tensor Complementarity Problem and Semi-positive Tensors

Tensor Complementarity Problem and Semi-positive Tensors DOI 10.1007/s10957-015-0800-2 Tensor Complementarity Problem and Semi-positive Tensors Yisheng Song 1 Liqun Qi 2 Received: 14 February 2015 / Accepted: 17 August 2015 Springer Science+Business Media New

More information

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

A Power Penalty Method for a Bounded Nonlinear Complementarity Problem

A Power Penalty Method for a Bounded Nonlinear Complementarity Problem A Power Penalty Method for a Bounded Nonlinear Complementarity Problem Song Wang 1 and Xiaoqi Yang 2 1 Department of Mathematics & Statistics Curtin University, GPO Box U1987, Perth WA 6845, Australia

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

Decision Science Letters

Decision Science Letters Decision Science Letters 8 (2019) *** *** Contents lists available at GrowingScience Decision Science Letters homepage: www.growingscience.com/dsl A new logarithmic penalty function approach for nonlinear

More information

Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints

Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints Noname manuscript No. (will be inserted by the editor) Using quadratic conve reformulation to tighten the conve relaation of a quadratic program with complementarity constraints Lijie Bai John E. Mitchell

More information

A projection algorithm for strictly monotone linear complementarity problems.

A projection algorithm for strictly monotone linear complementarity problems. A projection algorithm for strictly monotone linear complementarity problems. Erik Zawadzki Department of Computer Science epz@cs.cmu.edu Geoffrey J. Gordon Machine Learning Department ggordon@cs.cmu.edu

More information

GENERALIZED second-order cone complementarity

GENERALIZED second-order cone complementarity Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods Mei-Ju Luo and Yan Zhang Abstract In this paper, we reformulate the

More information

Polynomial complementarity problems

Polynomial complementarity problems Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016

More information

Lecture 7: Weak Duality

Lecture 7: Weak Duality EE 227A: Conve Optimization and Applications February 7, 2012 Lecture 7: Weak Duality Lecturer: Laurent El Ghaoui 7.1 Lagrange Dual problem 7.1.1 Primal problem In this section, we consider a possibly

More information

Convex Optimization Overview (cnt d)

Convex Optimization Overview (cnt d) Conve Optimization Overview (cnt d) Chuong B. Do November 29, 2009 During last week s section, we began our study of conve optimization, the study of mathematical optimization problems of the form, minimize

More information

The Q Method for Symmetric Cone Programmin

The Q Method for Symmetric Cone Programmin The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programmin Farid Alizadeh and Yu Xia alizadeh@rutcor.rutgers.edu, xiay@optlab.mcma Large Scale Nonlinear and Semidefinite Progra

More information

A proximal-like algorithm for a class of nonconvex programming

A proximal-like algorithm for a class of nonconvex programming Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

arxiv: v2 [math.na] 27 Dec 2016

arxiv: v2 [math.na] 27 Dec 2016 An algorithm for constructing Equiangular vectors Azim rivaz a,, Danial Sadeghi a a Department of Mathematics, Shahid Bahonar University of Kerman, Kerman 76169-14111, IRAN arxiv:1412.7552v2 [math.na]

More information

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION. September 1997 (revised May 1998 and March 1999)

A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION. September 1997 (revised May 1998 and March 1999) A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION Bintong Chen 1 Xiaojun Chen 2 Christian Kanzow 3 September 1997 revised May 1998 and March 1999 Abstract: We introduce a new NCP-function in order to reformulate

More information

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with

More information

Properties of Solution Set of Tensor Complementarity Problem

Properties of Solution Set of Tensor Complementarity Problem Properties of Solution Set of Tensor Complementarity Problem arxiv:1508.00069v3 [math.oc] 14 Jan 2017 Yisheng Song Gaohang Yu Abstract The tensor complementarity problem is a specially structured nonlinear

More information

On cardinality of Pareto spectra

On cardinality of Pareto spectra Electronic Journal of Linear Algebra Volume 22 Volume 22 (2011) Article 50 2011 On cardinality of Pareto spectra Alberto Seeger Jose Vicente-Perez Follow this and additional works at: http://repository.uwyo.edu/ela

More information

1. Sets A set is any collection of elements. Examples: - the set of even numbers between zero and the set of colors on the national flag.

1. Sets A set is any collection of elements. Examples: - the set of even numbers between zero and the set of colors on the national flag. San Francisco State University Math Review Notes Michael Bar Sets A set is any collection of elements Eamples: a A {,,4,6,8,} - the set of even numbers between zero and b B { red, white, bule} - the set

More information

Formulating an n-person noncooperative game as a tensor complementarity problem

Formulating an n-person noncooperative game as a tensor complementarity problem arxiv:60203280v [mathoc] 0 Feb 206 Formulating an n-person noncooperative game as a tensor complementarity problem Zheng-Hai Huang Liqun Qi February 0, 206 Abstract In this paper, we consider a class of

More information

1 Kernel methods & optimization

1 Kernel methods & optimization Machine Learning Class Notes 9-26-13 Prof. David Sontag 1 Kernel methods & optimization One eample of a kernel that is frequently used in practice and which allows for highly non-linear discriminant functions

More information

Properties for the Perron complement of three known subclasses of H-matrices

Properties for the Perron complement of three known subclasses of H-matrices Wang et al Journal of Inequalities and Applications 2015) 2015:9 DOI 101186/s13660-014-0531-1 R E S E A R C H Open Access Properties for the Perron complement of three known subclasses of H-matrices Leilei

More information

Sparse Approximation via Penalty Decomposition Methods

Sparse Approximation via Penalty Decomposition Methods Sparse Approximation via Penalty Decomposition Methods Zhaosong Lu Yong Zhang February 19, 2012 Abstract In this paper we consider sparse approximation problems, that is, general l 0 minimization problems

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem

Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem Advances in Operations Research Volume 01, Article ID 483479, 17 pages doi:10.1155/01/483479 Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem

More information

Step lengths in BFGS method for monotone gradients

Step lengths in BFGS method for monotone gradients Noname manuscript No. (will be inserted by the editor) Step lengths in BFGS method for monotone gradients Yunda Dong Received: date / Accepted: date Abstract In this paper, we consider how to directly

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method Advances in Operations Research Volume 01, Article ID 357954, 15 pages doi:10.1155/01/357954 Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction

More information

A full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction

A full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction Croatian Operational Research Review 77 CRORR 706), 77 90 A full-newton step feasible interior-point algorithm for P κ)-lcp based on a new search direction Behrouz Kheirfam, and Masoumeh Haghighi Department

More information

Unsupervised Classification via Convex Absolute Value Inequalities

Unsupervised Classification via Convex Absolute Value Inequalities Unsupervised Classification via Convex Absolute Value Inequalities Olvi L. Mangasarian Abstract We consider the problem of classifying completely unlabeled data by using convex inequalities that contain

More information

On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities

On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities Caihua Chen Xiaoling Fu Bingsheng He Xiaoming Yuan January 13, 2015 Abstract. Projection type methods

More information

Two unconstrained optimization approaches for the Euclidean κ-centrum location problem

Two unconstrained optimization approaches for the Euclidean κ-centrum location problem Noname manuscript No (will be inserted by the editor) Shaohua Pan and Jein-Shan Chen Two unconstrained optimization approaches for the Euclidean κ-centrum location problem Abstract Consider the single-facility

More information

The Trust Region Subproblem with Non-Intersecting Linear Constraints

The Trust Region Subproblem with Non-Intersecting Linear Constraints The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1

SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1 SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1 Masao Fukushima 2 July 17 2010; revised February 4 2011 Abstract We present an SOR-type algorithm and a

More information

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the

More information

On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators

On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators On the O1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators Bingsheng He 1 Department of Mathematics, Nanjing University,

More information

Some bounds for the spectral radius of the Hadamard product of matrices

Some bounds for the spectral radius of the Hadamard product of matrices Some bounds for the spectral radius of the Hadamard product of matrices Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam. June 1, 2004 Abstract Some bounds for the spectral radius of the Hadamard

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

A Residual Existence Theorem for Linear Equations

A Residual Existence Theorem for Linear Equations Optimization Letters manuscript No. (will be inserted by the editor) A Residual Existence Theorem for Linear Equations Jiri Rohn Received: date / Accepted: date Abstract A residual existence theorem for

More information

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

Given the vectors u, v, w and real numbers α, β, γ. Calculate vector a, which is equal to the linear combination α u + β v + γ w.

Given the vectors u, v, w and real numbers α, β, γ. Calculate vector a, which is equal to the linear combination α u + β v + γ w. Selected problems from the tetbook J. Neustupa, S. Kračmar: Sbírka příkladů z Matematiky I Problems in Mathematics I I. LINEAR ALGEBRA I.. Vectors, vector spaces Given the vectors u, v, w and real numbers

More information