A class of Smoothing Method for Linear Second-Order Cone Programming

Columbia International Publishing
Journal of Advanced Computing (2013) 1: 29-42

Research Article

A Class of Smoothing Method for Linear Second-Order Cone Programming

Zhuqing Gui*, Zhibin Zhu, and Huali Zhu

*Corresponding e-mail: guizhq_45@126.com
Department of Computational Science & Mathematics, Guilin University of Electronic Technology, Guilin 541004, P. R. China

Received 1 November 2012; Published online 15 December 2012
© The author(s) 2012. Published with open access at www.uscip.org

Abstract

Recently, there has been much interest in studying linear second-order cone programming. This paper uses the Kanzow-Kleinmichel function for the second-order cone complementarity problem, so that the Karush-Kuhn-Tucker optimality condition of the primal problem is transformed into a system of equations, to which the Newton method can be applied. The algorithm is then proved to be globally convergent and locally quadratically convergent.

Keywords: Second-order cone programming; Jordan algebra; Smoothing function; Global convergence; Local quadratic convergence

1 Introduction

Second-order cone programming (SOCP) dates back to the classic Fermat-Weber problem of the 17th century. It can be applied in a wide range of fields, including combinatorial optimization, engineering, logistics and transportation. The Steiner minimum tree and facility location problems, which are extensively used in combinatorial optimization and in logistics and transportation, respectively, are extensions of the Fermat problem. Xue and Ye (1997) showed that both of them can be transformed into SOCP problems, and that the primal-dual interior point method can be applied to solve them. Lebret (1995) and Andersen et al. (1998) showed that antenna array constraint problems can be translated into SOCP problems, which can be solved by the interior point method. Goldfarb and Iyengar (1997) introduced a nondeterministic structure of market parameters and showed that the corresponding robust portfolio optimization problem can be formulated as an SOCP problem, which reduces the difficulty of calculation. Besides these problems, SOCP can be applied in many other fields, such as image restoration, multi-class classification, robust optimization and grasping force optimization problems.

The standard form of the linear SOCP problem is given by

    (P)    min c^T x    s.t.  Ax = b,  x ∈ K,        (1.1)

where A = (A_1, A_2, ..., A_r) with A_j ∈ R^{m×n_j}, c_j ∈ R^{n_j}, j = 1, 2, ..., r, b ∈ R^m, x_j ∈ K^{n_j}, and K^{n_j} is the second-order cone

    K^{n_j} = { x = (x_0; x̄) ∈ R × R^{n_j−1} : x_0 ≥ ‖x̄‖ },

where ‖·‖ refers to the standard Euclidean norm and n_j is the dimension of K^{n_j}. For the cone K^{n_j}, let

    bd K^{n_j} = { x ∈ K^{n_j} : x_0 = ‖x̄‖, x ≠ 0 }

denote the boundary of K^{n_j}, and let

    int K^{n_j} = { x ∈ K^{n_j} : x_0 > ‖x̄‖ }

denote the interior of K^{n_j}. Set n = n_1 + n_2 + ... + n_r and K = K^{n_1} × K^{n_2} × ... × K^{n_r}.

The dual problem of the primal problem (1.1) is given by

    (D)    max b^T y    s.t.  A^T y + z = c,  z ∈ K,        (1.2)

where z_j ∈ K^{n_j}, j = 1, 2, ..., r, is the slack variable, y ∈ R^m, and z = (z_1; ...; z_r) ∈ K.

Jordan algebra is associated with LP, SDP and SOCP; Faraut and Korányi (1994) constructed the theory. It is well known that complementarity conditions play an important role in the KKT conditions, and the complementarity function is the key to solving the complementarity problem. Obviously, the definition of the complementarity function for linear or nonlinear complementarity problems (LCP/NCP) is completely different from the one for second-order cone complementarity problems (SOCCP), because the latter is a vector-valued function. By means of the Jordan-algebra technique, Fukushima et al. (2001) extended several smoothing complementarity functions to the setting of SOCCP and studied their differential and Lipschitzian properties. Motivated by their work, Liu et al. (2007) and Chi and Liu (2009) extended the Chen-Harker-Kanzow-Smale (CHKS) smoothing function and the Fischer-Burmeister (FB) function, respectively, to the setting of SOCCP.

In this paper, we first give the KKT optimality condition for problem (1.1). With the help of the Jordan algebra technique, we introduce the Kanzow-Kleinmichel function for SOCCP and show that it is continuously differentiable everywhere. The smoothing function includes a parameter τ and changes with τ: when τ = 0 it reduces to the smoothed FB function, and when τ = −2 it reduces to the CHKS function. The KKT optimality condition is then transformed into a system of equations, to which the Newton method can be applied. In order to guarantee fast convergence of the algorithm, we introduce the central path, which makes every iteration descend sufficiently along the Newton direction while the iterates stay inside the central-path neighborhood. Hence, the algorithm is globally convergent and locally quadratically convergent.
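To make the cone constraint in (1.1) concrete before we proceed, the following small Python sketch (added here for illustration; it is ours and not part of the original paper) tests the defining inequality x_0 ≥ ‖x̄‖ block by block:

    import numpy as np

    def in_second_order_cone(x, tol=1e-12):
        # x = (x0; x_bar) lies in K^n  iff  x0 >= ||x_bar||
        return x[0] >= np.linalg.norm(x[1:]) - tol

    # a product cone K = K^3 x K^2, as in (1.1): x = (x_1; x_2)
    x1 = np.array([2.0, 1.0, 1.0])    # 2 > sqrt(2): x_1 lies in int K^3
    x2 = np.array([1.0, -1.0])        # 1 = |-1|: x_2 lies on bd K^2
    print(in_second_order_cone(x1), in_second_order_cone(x2))   # True True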

2 Preliminaries

2.1 Jordan Algebra

For any given p = (p_0; p̄) ∈ R × R^{n−1} and q = (q_0; q̄) ∈ R × R^{n−1}, we define the following multiplication (the Jordan product):

    p ∘ q = ( p^T q ; p_0 q̄ + q_0 p̄ ).

The vector e = (1; 0) ∈ R × R^{n−1} is the unique identity element under this product. For any vector x ∈ R^n, the arrow matrix is defined as

    Arw(x) = [ x_0   x̄^T  ]
             [ x̄    x_0 I ].

Obviously, Arw(x) is symmetric, and it is positive definite if and only if x ∈ int K. It is easily verified that Arw(x) s = x ∘ s for all s ∈ R^n.

Next, we introduce the spectral factorization of vectors in R^n associated with K. Let x = (x_0; x̄) ∈ R × R^{n−1}; then it can be decomposed as

    x = λ_1 u_1 + λ_2 u_2,

where λ_i = x_0 + (−1)^i ‖x̄‖, i = 1, 2, are the spectral values, and

    u_i = (1/2) (1; (−1)^i x̄/‖x̄‖)  if x̄ ≠ 0;    u_i = (1/2) (1; (−1)^i v)  if x̄ = 0,        (2.1)

are the spectral vectors of x, where v is any vector in R^{n−1} with ‖v‖ = 1. When x ∈ K, the spectral values of x are nonnegative, so we can define

    x^{1/2} = √λ_1 u_1 + √λ_2 u_2,    x^2 = λ_1^2 u_1 + λ_2^2 u_2.
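These operations are straightforward to realize numerically. The following Python sketch (ours, for illustration only; all function names are our own) implements the Jordan product, the arrow matrix and the spectral factorization (2.1), and verifies the identities Arw(x)s = x ∘ s and (x^{1/2})^2 = x at a sample point:

    import numpy as np

    def jordan_product(p, q):
        # p o q = (p^T q ; p0*q_bar + q0*p_bar) for one cone block
        return np.concatenate(([p @ q], p[0] * q[1:] + q[0] * p[1:]))

    def arrow(x):
        # Arw(x) = [[x0, x_bar^T], [x_bar, x0*I]], so that Arw(x) s = x o s
        A = x[0] * np.eye(x.size)
        A[0, 1:] = x[1:]
        A[1:, 0] = x[1:]
        return A

    def spectral(x):
        # spectral values/vectors of x = lambda_1*u_1 + lambda_2*u_2, cf. (2.1)
        nrm = np.linalg.norm(x[1:])
        w = x[1:] / nrm if nrm > 0 else np.eye(x.size - 1)[0]  # any unit vector if x_bar = 0
        lam = np.array([x[0] - nrm, x[0] + nrm])
        u1 = 0.5 * np.concatenate(([1.0], -w))
        u2 = 0.5 * np.concatenate(([1.0], w))
        return lam, u1, u2

    def soc_sqrt(x):
        # x^{1/2} = sqrt(lambda_1)*u_1 + sqrt(lambda_2)*u_2 for x in K;
        # tiny negative spectral values caused by roundoff are clamped to 0
        lam, u1, u2 = spectral(x)
        lam = np.maximum(lam, 0.0)
        return np.sqrt(lam[0]) * u1 + np.sqrt(lam[1]) * u2

    x = np.array([2.0, 1.0, 0.0])
    s = soc_sqrt(x)
    print(np.allclose(jordan_product(s, s), x))              # True: (x^{1/2})^2 = x
    print(np.allclose(arrow(x) @ s, jordan_product(x, s)))   # True: Arw(x)s = x o s

The clamping of tiny negative spectral values in soc_sqrt is a numerical safeguard of ours; mathematically, x^{1/2} is only defined for x ∈ K.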

2.2 Central Path

The primal-dual problems (1.1) and (1.2) are equivalent to the following KKT optimality condition:

    Ax = b,
    A^T y + z = c,                                   (2.2)
    x ∘ z = 0,  x ∈ K,  z ∈ K.

A parameter μ > 0 is introduced to give the following perturbation of the optimality condition (2.2):

    Ax = b,
    A^T y + z = c,                                   (2.3)
    x_i ∘ z_i = μ^2 e_i,  i = 1, 2, ..., r,
    x_i ∈ int K^{n_i},  z_i ∈ int K^{n_i},

where e_i = (1; 0) ∈ R × R^{n_i−1}. The trajectory of points (x, y, z) satisfying (2.3) for μ > 0 is called the central path.

We introduce the following smoothing function:

    φ(p, q, μ) = p + q − ( p^2 + q^2 + τ (p ∘ q) + (2 − τ) μ^2 e )^{1/2},   τ ∈ [−2, 2).        (2.4)

Clearly, the function φ(p, q, μ) has several interesting properties.

PROPOSITION 2.1.
1) φ(p, q, 0) = 0 if and only if p ∈ K, q ∈ K and p ∘ q = 0.
2) Let μ > 0. Then φ(p, q, μ) = 0 if and only if p ∈ int K, q ∈ int K and p ∘ q = μ^2 e.
3) For any p, q ∈ R^n and any μ_2 > μ_1 > 0, we have

    φ(p, q, μ_1) − φ(p, q, μ_2) ∈ K   and   √(2 − τ) (μ_2 − μ_1) e − ( φ(p, q, μ_1) − φ(p, q, μ_2) ) ∈ K.        (2.5)

LEMMA 2.2. Let t = p^2 + q^2 + τ (p ∘ q) and ψ(t, μ) = ( t + (2 − τ) μ^2 e )^{1/2}, so that φ(p, q, μ) = p + q − ψ(t, μ). Then the smoothing function φ is continuously differentiable, and its gradient is

    ∇_p φ(p, q, μ) = I − Arw(ψ(t, μ))^{−1} ( Arw(p) + (τ/2) Arw(q) ),
    ∇_q φ(p, q, μ) = I − Arw(ψ(t, μ))^{−1} ( Arw(q) + (τ/2) Arw(p) ),        (2.6)
    ∇_μ φ(p, q, μ) = −(2 − τ) μ Arw(ψ(t, μ))^{−1} e.

Proof. Since the vector t can be decomposed as t = λ_1 c_1 + λ_2 c_2, where λ_1, λ_2 and c_1, c_2 are the spectral values and the associated spectral vectors of t, we have

    ψ(t, μ) = √(λ_1 + (2 − τ) μ^2) c_1 + √(λ_2 + (2 − τ) μ^2) c_2.

According to Lemma 1 (Kanzow and Kleinmichel, 1998), we get that

    ∇_μ ψ(t, μ) = (2 − τ) μ (λ_1 + (2 − τ) μ^2)^{−1/2} c_1 + (2 − τ) μ (λ_2 + (2 − τ) μ^2)^{−1/2} c_2 = (2 − τ) μ Arw(ψ(t, μ))^{−1} e.

From Lemma 5 (Alizadeh and Goldfarb, 2003), it is easy to know that

    ∇_p φ(p, q, μ) = I − Arw(ψ(t, μ))^{−1} ( Arw(p) + (τ/2) Arw(q) ),
    ∇_q φ(p, q, μ) = I − Arw(ψ(t, μ))^{−1} ( Arw(q) + (τ/2) Arw(p) ).

The conclusion holds. ∎
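Under our reconstruction of (2.4), the family can be evaluated directly with the helpers from the previous sketch (phi and tau are our names; τ = 0 and τ = −2 give the smoothed FB and CHKS cases discussed in Section 1):

    # reuses jordan_product and soc_sqrt from the previous sketch
    import numpy as np

    def phi(p, q, mu, tau):
        # phi(p,q,mu) = p + q - (p^2 + q^2 + tau*(p o q) + (2-tau)*mu^2*e)^{1/2}, eq. (2.4)
        e = np.zeros(p.size); e[0] = 1.0
        t = (jordan_product(p, p) + jordan_product(q, q)
             + tau * jordan_product(p, q) + (2.0 - tau) * mu**2 * e)
        return p + q - soc_sqrt(t)

    p = np.array([1.5, 0.3, -0.4])
    q = np.array([1.0, 0.2, 0.0])
    for tau in (0.0, -2.0):        # smoothed FB and CHKS members of the family
        print(tau, phi(p, q, 0.1, tau))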

3 Algorithm and Properties

The vectors x, z can be partitioned as x = (x_1; ...; x_r), z = (z_1; ...; z_r), where x_i, z_i ∈ R^{n_i}, i = 1, ..., r, so φ(x, z, μ) can also be partitioned as

    φ(x, z, μ) = ( φ(x_1, z_1, μ); ...; φ(x_r, z_r, μ) ).

For the mapping F_μ : R^n × R^m × R^n → R^m × R^n × R^n, we denote

    F_μ(x, y, z) = ( Ax − b ; A^T y + z − c ; φ(x, z, μ) ).        (3.1)

It is easy to know that, when μ > 0, (x, y, z) satisfies the central path condition (2.3) if and only if (x, y, z) is a solution of the nonlinear equations F_μ(x, y, z) = 0. In particular, if μ = 0, it is a solution of the optimality condition (2.2). Obviously, at μ = 0 the smoothness of the function F_μ is affected. In this paper, we view the parameter μ as a variable to overcome this difficulty.

Thus, for F : R^n × R^m × R^n × R → R^m × R^n × R^n × R, we define

    F(x, y, z, μ) = ( Ax − b ; A^T y + z − c ; φ(x, z, μ) ; e^μ − 1 ),        (3.2)

where e is the base of natural logarithms. Then the equation F(x, y, z, μ) = 0 is equivalent to the optimality condition (2.2).
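Numerically, (3.2) is just a stacked residual. A minimal sketch of ours follows (blocks is a list of index arrays describing the partition of x and z into cone blocks; the last component e^μ − 1 follows our reconstruction of the garbled source, chosen to be consistent with the entry e^μ appearing in the Jacobian (3.7) below):

    import numpy as np
    # reuses phi from the previous sketch

    def F_residual(A, b, c, x, y, z, mu, tau, blocks):
        # F(x,y,z,mu) of (3.2): primal residual, dual residual,
        # block-wise smoothing function, and the scalar e^mu - 1
        phis = np.concatenate([phi(x[J], z[J], mu, tau) for J in blocks])
        return np.concatenate([A @ x - b, A.T @ y + z - c, phis, [np.exp(mu) - 1.0]])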

Now we can define a neighborhood around the central path:

    N(β) = { (x, y, z, μ) : Ax = b, A^T y + z = c, ‖φ(x, z, μ)‖ ≤ β μ }.

We ensure that every iterate lands in this neighborhood. Now, the algorithm for the solution of the problem (3.2) can be stated as follows.

Algorithm A

Step 0 (Initialization): Given a starting point (x^0, y^0, z^0, μ_0) with Ax^0 = b, A^T y^0 + z^0 = c and μ_0 > 0. Choose δ ∈ (0, 1) and σ ∈ (0, 1), and set β ≥ ‖φ(x^0, z^0, μ_0)‖ / μ_0, so that ‖φ(x^0, z^0, μ_0)‖ ≤ β μ_0. Set k := 0.

Step 1 (Estimate step): Solve the following linear system:

    ∇F(x^k, y^k, z^k, μ_k) (Δx; Δy; Δz; Δμ) = −F(x^k, y^k, z^k, μ_k).        (3.3)

We get the solution (Δx^k; Δy^k; Δz^k; Δμ_k). If φ(x^k + Δx^k, z^k + Δz^k, 0) = 0, STOP! Otherwise, if ‖φ(x^k + Δx^k, z^k + Δz^k, δ μ_k)‖ > β δ μ_k, set h_k = 0; or else let h_k be the largest integer that satisfies the following inequalities:

    ‖φ(x^k + Δx^k, z^k + Δz^k, δ^i μ_k)‖ ≤ β δ^i μ_k,  i = 1, ..., h_k,
    ‖φ(x^k + Δx^k, z^k + Δz^k, δ^{h_k+1} μ_k)‖ > β δ^{h_k+1} μ_k.        (3.4)

Let μ̂_k = δ^{h_k} μ_k, and

    (x̂^k, ŷ^k, ẑ^k) = (x^k, y^k, z^k)                      if h_k = 0,
    (x̂^k, ŷ^k, ẑ^k) = (x^k + Δx^k, y^k + Δy^k, z^k + Δz^k)  if h_k > 0.

Step 2 (Correction step): Let (Δx̂^k; Δŷ^k; Δẑ^k; Δμ̂_k) be a solution of the following linear system:

    ∇F(x̂^k, ŷ^k, ẑ^k, μ̂_k) (Δx; Δy; Δz; Δμ) = −F(x̂^k, ŷ^k, ẑ^k, μ̂_k) + ( 0; 0; 0; (1 − σ) μ̂_k e^{μ̂_k} ).        (3.5)

Step 3 (Line search): Set α_k = max { δ^i : i = 0, 1, 2, ... } such that

    ‖φ(x̂^k + α_k Δx̂^k, ẑ^k + α_k Δẑ^k, μ̂_k + α_k Δμ̂_k)‖ ≤ (1 − σ α_k) β μ̂_k.        (3.6)

Step 4 (Update): Let

    (x^{k+1}, y^{k+1}, z^{k+1}) = (x̂^k + α_k Δx̂^k, ŷ^k + α_k Δŷ^k, ẑ^k + α_k Δẑ^k),   μ_{k+1} = (1 − σ α_k) μ̂_k.

Set k := k + 1 and go back to Step 1.
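A hedged Python skeleton of Algorithm A follows. It is our sketch of the reconstructed algorithm, not the authors' code: the parameter defaults, loop guards and all names are ours; newton_matrix assembles the Jacobian (3.7) and is sketched after Theorem 3.1 below.

    import numpy as np
    # reuses phi and F_residual from the previous sketches

    def phi_norm(x, z, mu, tau, blocks):
        return np.linalg.norm(np.concatenate([phi(x[J], z[J], mu, tau) for J in blocks]))

    def algorithm_A(A, b, c, x, y, z, mu, tau, blocks,
                    delta=0.5, sigma=0.2, tol=1e-10, max_iter=100):
        m, n = A.shape
        beta = phi_norm(x, z, mu, tau, blocks) / mu + 1.0   # Step 0: ||phi|| <= beta*mu
        for _ in range(max_iter):
            # Step 1 (estimate step): solve (3.3)
            J = newton_matrix(A, x, z, mu, tau, blocks)
            d = np.linalg.solve(J, -F_residual(A, b, c, x, y, z, mu, tau, blocks))
            dx, dy, dz = d[:n], d[n:n+m], d[n+m:n+m+n]
            if phi_norm(x + dx, z + dz, 0.0, tau, blocks) <= tol:
                return x + dx, y + dy, z + dz               # KKT point reached
            # largest h keeping the trial point in N(beta), cf. (3.4)
            h = 0
            while (h < 60 and
                   phi_norm(x + dx, z + dz, delta**(h+1) * mu, tau, blocks)
                   <= beta * delta**(h+1) * mu):
                h += 1
            mu_hat = delta**h * mu
            if h > 0:
                x, y, z = x + dx, y + dy, z + dz
            # Step 2 (correction step): solve (3.5)
            J = newton_matrix(A, x, z, mu_hat, tau, blocks)
            rhs = -F_residual(A, b, c, x, y, z, mu_hat, tau, blocks)
            rhs[-1] += (1.0 - sigma) * mu_hat * np.exp(mu_hat)
            d = np.linalg.solve(J, rhs)
            dx, dy, dz, dmu = d[:n], d[n:n+m], d[n+m:n+m+n], d[-1]
            # Step 3 (line search): largest alpha = delta^i satisfying (3.6)
            alpha = 1.0
            while (alpha > 1e-16 and
                   phi_norm(x + alpha*dx, z + alpha*dz, mu_hat + alpha*dmu,
                            tau, blocks) > (1.0 - sigma*alpha) * beta * mu_hat):
                alpha *= delta
            # Step 4 (update)
            x, y, z = x + alpha*dx, y + alpha*dy, z + alpha*dz
            mu = (1.0 - sigma*alpha) * mu_hat
        return x, y, z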

H 3.1. The matrix A has full row rank.

The next theorem is given to ensure that the linear systems (3.3) and (3.5) are consistent.

THEOREM 3.1. The smoothing function F is continuously differentiable at every (x, y, z, μ) ∈ R^n × R^m × R^n × R with μ > 0. If condition H 3.1 holds, then the Jacobian of F given by (3.2) is invertible at all such points.

Proof. The Jacobian of F is given by

    ∇F(x, y, z, μ) = [ A     0     0     0    ]
                     [ 0     A^T   I     0    ]        (3.7)
                     [ H(x)  0     G(x)  φ'_μ ]
                     [ 0     0     0     e^μ  ],

where

    H(x) = ∇_x φ(x, z, μ) = I − Arw(w)^{−1} ( Arw(x) + (τ/2) Arw(z) ),
    G(x) = ∇_z φ(x, z, μ) = I − Arw(w)^{−1} ( Arw(z) + (τ/2) Arw(x) ),
    w = ( x^2 + z^2 + τ (x ∘ z) + (2 − τ) μ^2 e )^{1/2},   τ ∈ [−2, 2),
    φ'_μ = ∇_μ φ(x, z, μ) = −(2 − τ) μ Arw(w)^{−1} e.
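Before continuing with the invertibility argument, we note that the block matrix (3.7) is easy to assemble numerically. The sketch below is ours (the τ/2 weights follow our reconstruction of Lemma 2.2) and completes the skeleton given after Algorithm A:

    import numpy as np
    # reuses jordan_product, arrow and soc_sqrt from the Jordan-algebra sketch

    def grad_phi_block(x, z, mu, tau):
        # H, G and phi'_mu of one cone block, following Lemma 2.2 / (3.7)
        e = np.zeros(x.size); e[0] = 1.0
        t = (jordan_product(x, x) + jordan_product(z, z)
             + tau * jordan_product(x, z) + (2.0 - tau) * mu**2 * e)
        W_inv = np.linalg.inv(arrow(soc_sqrt(t)))      # Arw(w)^{-1}
        H = np.eye(x.size) - W_inv @ (arrow(x) + 0.5 * tau * arrow(z))
        G = np.eye(x.size) - W_inv @ (arrow(z) + 0.5 * tau * arrow(x))
        dmu = -(2.0 - tau) * mu * (W_inv @ e)
        return H, G, dmu

    def newton_matrix(A, x, z, mu, tau, blocks):
        # nabla F(x,y,z,mu) of (3.7), unknowns ordered (x, y, z, mu)
        m, n = A.shape
        N = n + m + n + 1
        M = np.zeros((N, N))
        M[:m, :n] = A                                  # row block 1: A dx
        M[m:m+n, n:n+m] = A.T                          # row block 2: A^T dy + dz
        M[m:m+n, n+m:n+m+n] = np.eye(n)
        for Jb in blocks:                              # row block 3: H dx + G dz + phi'_mu dmu
            H, G, dmu = grad_phi_block(x[Jb], z[Jb], mu, tau)
            rows = m + n + Jb
            M[np.ix_(rows, Jb)] = H
            M[np.ix_(rows, n + m + Jb)] = G
            M[rows, -1] = dmu
        M[-1, -1] = np.exp(mu)                         # row block 4: e^mu dmu
        return M

For example, blocks = [np.arange(3), np.arange(3, 5)] describes the product cone K = K^3 × K^2.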

The matrix ∇F(x, y, z, μ) is invertible if and only if the linear system ∇F(x, y, z, μ)(Δx; Δy; Δz; Δμ) = 0, or, equivalently,

    A Δx = 0,        (3.8)
    A^T Δy + Δz = 0,        (3.9)
    H(x) Δx + G(x) Δz + φ'_μ Δμ = 0,        (3.10)
    e^μ Δμ = 0,        (3.11)

has only the zero solution. As e^μ ≠ 0, we obtain Δμ = 0 from (3.11). So (3.10) is equivalent to

    H(x) Δx + G(x) Δz = 0,        (3.12)

namely,

    [ I − Arw(w)^{−1}( Arw(x) + (τ/2) Arw(z) ) ] Δx + [ I − Arw(w)^{−1}( Arw(z) + (τ/2) Arw(x) ) ] Δz = 0.

Multiplying the last equation on the left by Arw(w), we get

    [ Arw(w) − ( Arw(x) + (τ/2) Arw(z) ) ] Δx + [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ] Δz = 0.

Since Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ≻ 0, we multiply this equation on the left by Δx^T [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ]^{−1} and gain that

    Δx^T [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ]^{−1} [ Arw(w) − ( Arw(x) + (τ/2) Arw(z) ) ] Δx + Δx^T Δz = 0.

By (3.8) and (3.9), we receive that Δx^T Δz = −Δx^T A^T Δy = −(A Δx)^T Δy = 0, so the above equation is equivalent to

    Δx^T [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ]^{−1} [ Arw(w) − ( Arw(x) + (τ/2) Arw(z) ) ] Δx = 0.

Setting g = [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ]^{−1} Δx, we have

    g^T [ Arw(w) − ( Arw(x) + (τ/2) Arw(z) ) ] [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ] g = 0.

Due to the fact that [ Arw(w) − ( Arw(x) + (τ/2) Arw(z) ) ] [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ] ≻ 0, we obtain g = 0. Then

    Δx = [ Arw(w) − ( Arw(z) + (τ/2) Arw(x) ) ] g = 0.

From (3.12), we obtain Δz = 0. Since the matrix A has full row rank, (3.9) yields Δy = 0. Given all that, the equation ∇F(x, y, z, μ)(Δx; Δy; Δz; Δμ) = 0 has only the zero solution, so ∇F(x, y, z, μ) is invertible. This completes the proof. ∎

THEOREM 3.2. Suppose that H 3.1 holds. Then Algorithm A is well-defined, and all iterates fall in the neighborhood N(β).

Proof. Firstly, we consider Step 1. Since φ(x, z, μ) is continuous and ‖φ(x^k + Δx^k, z^k + Δz^k, 0)‖ > 0 (otherwise the algorithm stops), while β δ^i μ_k → 0 as i → ∞, the index h_k in (3.4) is finite and the inner loop terminates after a finite number of trials. Therefore, Step 1 is well-defined, and (x̂^k, ŷ^k, ẑ^k, μ̂_k) satisfies the neighborhood condition, i.e. (x̂^k, ŷ^k, ẑ^k, μ̂_k) ∈ N(β).

We suppose that the calculation of the step length α_k in Step 3 cannot be terminated within a finite number of trials. Then for any t ∈ N, we have

    ‖φ(x̂^k + δ^t Δx̂^k, ẑ^k + δ^t Δẑ^k, μ̂_k + δ^t Δμ̂_k)‖ > (1 − σ δ^t) β μ̂_k ≥ (1 − σ δ^t) ‖φ(x̂^k, ẑ^k, μ̂_k)‖,

namely,

    ‖φ(x̂^k + δ^t Δx̂^k, ẑ^k + δ^t Δẑ^k, μ̂_k + δ^t Δμ̂_k)‖ − ‖φ(x̂^k, ẑ^k, μ̂_k)‖ > −σ δ^t ‖φ(x̂^k, ẑ^k, μ̂_k)‖.

Dividing by δ^t and letting t → ∞, and noting that the φ-rows of (3.5) give ∇φ(x̂^k, ẑ^k, μ̂_k)(Δx̂^k; Δẑ^k; Δμ̂_k) = −φ(x̂^k, ẑ^k, μ̂_k), we have

    −‖φ(x̂^k, ẑ^k, μ̂_k)‖ ≥ −σ ‖φ(x̂^k, ẑ^k, μ̂_k)‖.

Since σ ∈ (0, 1), this shows that ‖φ(x̂^k, ẑ^k, μ̂_k)‖ = 0. Hence, we obtain

    lim_{t→∞} ‖φ(x̂^k + δ^t Δx̂^k, ẑ^k + δ^t Δẑ^k, μ̂_k + δ^t Δμ̂_k)‖ = 0,    lim_{t→∞} (1 − σ δ^t) β μ̂_k = β μ̂_k > 0.

Obviously, this contradicts the first inequality above, so Step 3 is well-defined. From the update rule, we then have (x̂^k, ŷ^k, ẑ^k, μ̂_k) ∈ N(β) and (x^{k+1}, y^{k+1}, z^{k+1}, μ_{k+1}) ∈ N(β). ∎

4 The Convergence of the Algorithm

In this section, we show the global and locally quadratic convergence of Algorithm A.

THEOREM 4.1. If (x*, y*, z*) is an accumulation point of the sequence (x^k, y^k, z^k) generated by Algorithm A, then lim_{k→∞} μ_k = 0.

Proof. By the construction of Algorithm A, the sequence {μ_k} is monotonically decreasing and bounded below; therefore μ_k converges to some nonnegative constant μ*. We prove μ* = 0 by contradiction. Assume that μ* > 0. By the iterative rule of Algorithm A, there is an index k_0 such that for all k ≥ k_0 we have

    (x̂^k, ŷ^k, ẑ^k) = (x^k, y^k, z^k),   μ̂_k = μ_k,   h_k = 0.        (4.1)

Then, considering the update rule in Step 4, we have

    μ_k = μ_{k_0} ∏_{i=k_0}^{k−1} (1 − σ α_i),   k > k_0.

Since μ* > 0, one has lim_{k→∞} α_k = 0. By the line search criterion (3.6), the trial step length δ^{−1} α_k does not satisfy (3.6) for all sufficiently large k. Consequently, we obtain

    ‖φ(x^k + δ^{−1} α_k Δx̂^k, z^k + δ^{−1} α_k Δẑ^k, μ_k + δ^{−1} α_k Δμ̂_k)‖ > (1 − σ δ^{−1} α_k) β μ_k.        (4.2)

Assume that (x*, y*, z*) is an accumulation point of (x^k, y^k, z^k), and take a subsequence with (x^k, y^k, z^k) → (x*, y*, z*). Since μ* > 0, we have (x^k, y^k, z^k, μ_k) → (x*, y*, z*, μ*) along this subsequence, and the corresponding search directions converge to the solution (Δx*; Δy*; Δz*; Δμ*) of the linear system

    ∇F(x*, y*, z*, μ*) (Δx; Δy; Δz; Δμ) = −F(x*, y*, z*, μ*) + ( 0; 0; 0; (1 − σ) μ* e^{μ*} ).        (4.3)

In addition, combining (4.1) and (4.2), for any sufficiently large k in the subsequence we have

    ‖φ(x^k + δ^{−1} α_k Δx̂^k, z^k + δ^{−1} α_k Δẑ^k, μ_k + δ^{−1} α_k Δμ̂_k)‖ > (1 − σ δ^{−1} α_k) β μ_k ≥ (1 − σ δ^{−1} α_k) ‖φ(x^k, z^k, μ_k)‖.

By the above expression, we have

    [ ‖φ(x^k + δ^{−1} α_k Δx̂^k, z^k + δ^{−1} α_k Δẑ^k, μ_k + δ^{−1} α_k Δμ̂_k)‖ − ‖φ(x^k, z^k, μ_k)‖ ] / (δ^{−1} α_k) > −σ ‖φ(x^k, z^k, μ_k)‖.

Setting k → ∞ and arguing as in the proof of Theorem 3.2, it holds that

    −‖φ(x*, z*, μ*)‖ ≥ −σ ‖φ(x*, z*, μ*)‖.

Since σ ∈ (0, 1), it follows that ‖φ(x*, z*, μ*)‖ = 0. According to (4.3), the line search criterion (3.6) is then satisfied near the limit with a step length bounded away from zero, which contradicts α_k → 0; hence μ* = 0. The proof is finished. ∎

THEOREM 4.2. Each accumulation point of the sequence (x^k, y^k, z^k) generated by Algorithm A is a solution of the optimality condition (2.2).

Proof. Assume (x*, y*, z*) is an accumulation point of (x^k, y^k, z^k), and let (x^k, y^k, z^k) → (x*, y*, z*) for k ∈ L, where L is an infinite index set. It follows from Theorem 4.1 that lim_{k→∞} μ_k = 0. Since (x^k, y^k, z^k, μ_k) ∈ N(β), we have

    ‖φ(x*, z*, 0)‖ = lim_{k∈L} ‖φ(x^k, z^k, μ_k)‖ ≤ lim_{k∈L} β μ_k = 0.

From Proposition 2.1, we know that (x*, y*, z*) is a solution of the optimality condition (2.2). Therefore, the result holds. ∎

In what follows, we discuss the locally quadratic convergence of Algorithm A. It needs the following conditions.

H 4.1. The solution of the optimality condition (2.2) satisfies primal and dual nondegeneracy and strict complementarity.

H 4.2. The sequence (x^k, y^k, z^k) generated by Algorithm A has one or more accumulation points, and moreover

    ‖(Δx^k; Δy^k; Δz^k; Δμ_k)‖ = O(μ_k)

holds, where (Δx^k; Δy^k; Δz^k; Δμ_k) is given by the linear system (3.3).

PROPOSITION 4.3. Suppose H 4.1 holds; then H 4.2 holds.

Proof. If H 4.1 holds, ∇F is nonsingular at the accumulation point; furthermore, the sequence [∇F(x^k, y^k, z^k, μ_k)]^{−1} is bounded. From (3.3), we obtain

    (Δx^k; Δy^k; Δz^k; Δμ_k) = −[∇F(x^k, y^k, z^k, μ_k)]^{−1} F(x^k, y^k, z^k, μ_k).

Apparently, it is sufficient to prove that ‖F(x^k, y^k, z^k, μ_k)‖ = O(μ_k). In fact, since (x^k, y^k, z^k, μ_k) ∈ N(β), we have

    ‖F(x^k, y^k, z^k, μ_k)‖ ≤ ‖φ(x^k, z^k, μ_k)‖ + (e^{μ_k} − 1) ≤ β μ_k + μ_k + o(μ_k) = O(μ_k).

Hence ‖(Δx^k; Δy^k; Δz^k; Δμ_k)‖ = O(μ_k) holds. The claim holds. ∎

THEOREM 4.4. Suppose that H 4.2 holds and β > √(2 − τ). Let ∇φ be Lipschitz continuous in a neighborhood of (x*, z*, 0). Then, for all sufficiently large k, we have

    μ_{k+1} = O(μ_k^2).

Proof. We first prove that ‖φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k)‖ = O(μ_k^2). The last row of (3.3) gives e^{μ_k} Δμ_k = −(e^{μ_k} − 1), so that

    μ_k + Δμ_k = μ_k − (1 − e^{−μ_k}) = O(μ_k^2).

Since ∇φ is locally Lipschitz continuous and the φ-rows of (3.3) give ∇φ(x^k, z^k, μ_k)(Δx^k; Δz^k; Δμ_k) = −φ(x^k, z^k, μ_k), for any sufficiently large k we have

    ‖φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k)‖
      = ‖φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k) − φ(x^k, z^k, μ_k) − ∇φ(x^k, z^k, μ_k)(Δx^k; Δz^k; Δμ_k)‖
      ≤ O(‖(Δx^k; Δz^k; Δμ_k)‖^2) = O(μ_k^2).

From (3.4) we know, for all sufficiently large k, that

    β δ^{h_k+1} μ_k < ‖φ(x^k + Δx^k, z^k + Δz^k, δ^{h_k+1} μ_k)‖
      ≤ ‖φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k)‖ + ‖φ(x^k + Δx^k, z^k + Δz^k, δ^{h_k+1} μ_k) − φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k)‖
      ≤ ‖φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k)‖ + √(2 − τ) ( δ^{h_k+1} μ_k + (μ_k + Δμ_k) ),

where the last step uses Proposition 2.1 (3). Therefore, we have

    (β − √(2 − τ)) δ^{h_k+1} μ_k ≤ ‖φ(x^k + Δx^k, z^k + Δz^k, μ_k + Δμ_k)‖ + √(2 − τ) (μ_k + Δμ_k),

and hence μ̂_k = δ^{h_k} μ_k = O(μ_k^2). Using Algorithm A and the definition of μ_{k+1} and μ̂_k, we obtain

    μ_{k+1} = (1 − σ α_k) μ̂_k ≤ μ̂_k = O(μ_k^2).

The proof is finished. ∎

Acknowledgements

This work was supported in part by the NNSF of China (No. 116111), the Guangxi Fund for Distinguished Young Scholars (1GXNSFFA63), and the Innovative Project of Guangxi Graduate Education (11159571M6).

References

Alizadeh, F., Goldfarb, D., 2003. Second-order cone programming. Mathematical Programming, Ser. B 95, 3-51. http://dx.doi.org/10.1007/s10107-002-0339-5

Andersen, K. D., Christiansen, E., Overton, M. L., 1998. Limit analysis by minimizing a sum of norms. SIAM Journal on Scientific Computing 19(3), 1046-1062. http://dx.doi.org/10.1137/S1064827594275303

Chen, X. D., Sun, D., Sun, J., 2003. Complementarity functions and numerical experiments on some smoothing Newton methods for second-order-cone complementarity problems. Computational Optimization and Applications 25(1-3), 39-56. http://dx.doi.org/10.1023/A:1022996819381

Chi, X. N., Liu, S. Y., 2009. A predictor-corrector smoothing method for second-order cone programming. Journal of Systems Science and Mathematical Sciences 29(4), 547-554.

Faraut, J., Korányi, A., 1994. Analysis on Symmetric Cones. Oxford Mathematical Monographs, Oxford University Press, New York.

Fukushima, M., Luo, Z. Q., Tseng, P., 2001. Smoothing functions for second-order-cone complementarity problems. SIAM Journal on Optimization 12(2), 436-460. http://dx.doi.org/10.1137/S1052623400380365

Kanzow, C., Kleinmichel, H., 1998. A new class of semismooth Newton-type methods for nonlinear complementarity problems. Computational Optimization and Applications 11, 227-251.

Lebret, H., 1995. Antenna pattern synthesis through convex optimization. Proceedings of the SPIE, Advanced Signal Processing Algorithms, 189.

Lebret, H., Boyd, S., 1997. Antenna array pattern synthesis via convex optimization. IEEE Transactions on Signal Processing 45(3), 526-532.

Liu, Y. J., Zhang, L. W., Wang, Y. H., 2007. Convergence properties of a smoothing method for linear second-order cone programming. Advances in Mathematics 36(4), 491-502.

Sun, D., Sun, J., 2005. Strong semismoothness of the Fischer-Burmeister SDC and SOC complementarity functions. Mathematical Programming 103(3), 575-581. http://dx.doi.org/10.1007/s10107-005-0577-4

Xue, G., Ye, Y. Y., 1997. An efficient algorithm for minimizing a sum of Euclidean norms with applications. SIAM Journal on Optimization 7(4), 1017-1036.

Yuan, Y. X., Sun, W. Y., 1997. Optimization Theory and Methods. Science Press, Beijing.