ON THE GENERALIZED FISCHER-BURMEISTER MERIT FUNCTION FOR THE SECOND-ORDER CONE COMPLEMENTARITY PROBLEM


MATHEMATICS OF COMPUTATION, Volume 83, Number 87, May 04, Pages 43–7. Article electronically published on July 5, 03. © American Mathematical Society.

SHAOHUA PAN, SANGHO KUM, YONGDO LIM, AND JEIN-SHAN CHEN

Received by the editor August 5, 00, and in revised form April 8, 0 and August 0. Mathematics Subject Classification: Primary 90C33, 90C5. Key words and phrases: second-order cones, complementarity problem, generalized FB merit function. The first author's work was supported by the National Young Natural Science Foundation and the Fundamental Research Funds for the Central Universities (SCUT). The second author's work was supported by the Basic Science Research Program through an NRF grant. The third author's work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST). The fourth author (corresponding author) is a member of the Mathematics Division, National Center for Theoretical Sciences, Taipei Office; his work was supported by the National Science Council of Taiwan.

Abstract. It has been an open question whether the family of merit functions ψ_p (p > 1), the generalized Fischer-Burmeister (FB) merit function associated to the second-order cone, is smooth or not. In this paper we answer it partly, showing that ψ_p is smooth for p ∈ (1, 4), and we provide a condition for its coerciveness. Numerical results are reported to illustrate the influence of p on the performance of the merit function method based on ψ_p.

1. Introduction

Given two continuously differentiable mappings F, G: R^n → R^n, we consider the second-order cone complementarity problem (SOCCP): seek a ζ ∈ R^n such that

(1)  F(ζ) ∈ K,  G(ζ) ∈ K,  ⟨F(ζ), G(ζ)⟩ = 0,

where ⟨·,·⟩ denotes the Euclidean inner product that induces the norm ‖·‖, and K is the Cartesian product of a group of second-order cones (SOCs). In other words,

(2)  K = K^{n_1} × K^{n_2} × ⋯ × K^{n_m},

where n_1, ..., n_m ≥ 1, n_1 + ⋯ + n_m = n, and K^{n_i} is the SOC in R^{n_i} defined by

K^{n_i} := { (x_{i1}, x_{i2}) ∈ R × R^{n_i−1} : x_{i1} ≥ ‖x_{i2}‖ }.

As an extension of the nonlinear complementarity problem (NCP) over the nonnegative orthant cone R^n_+ (see [3]), the SOCCP has important applications in engineering problems and robust Nash equilibria [9].
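To fix ideas, here is a small Python/NumPy sketch (ours, not part of the paper) that checks membership in K^n and evaluates a naive residual of problem (1) for a single cone; the helper names `in_soc` and `soccp_residual` are illustrative only, and the residual is merely a feasibility measure, not the merit function studied below.

```python
import numpy as np

def in_soc(x, tol=1e-10):
    """Membership test for K^n = {(x1, x2) : x1 >= ||x2||}."""
    x = np.asarray(x, dtype=float)
    return x[0] >= np.linalg.norm(x[1:]) - tol

def soccp_residual(F, G, zeta):
    """Naive residual of the SOCCP (1) with K = K^n (single cone):
    it vanishes exactly when F(zeta), G(zeta) lie in K^n and <F(zeta), G(zeta)> = 0."""
    f, g = np.asarray(F(zeta), float), np.asarray(G(zeta), float)
    cone_viol = (max(0.0, np.linalg.norm(f[1:]) - f[0])
                 + max(0.0, np.linalg.norm(g[1:]) - g[0]))
    return cone_viol + abs(np.dot(f, g))
```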

In particular, the SOCCP also arises from a suitable reformulation of the Karush-Kuhn-Tucker (KKT) optimality conditions of the nonlinear second-order cone program (SOCP):

(3)  minimize f(x)  subject to  Ax = b,  x ∈ K,

where f: R^n → R is a twice continuously differentiable function, A is an m × n real matrix with full row rank, and b ∈ R^m. It is well known that the SOCP has very wide applications in engineering design, control, management science, and so on; see [5] and the references therein.

In the past several years, various methods have been proposed for SOCPs and SOCCPs. They include the interior-point methods [ ], the smoothing Newton methods [5, 8], the semismooth Newton methods [34], and the merit function methods [4, ]. The merit function method seeks a smooth (continuously differentiable) function ψ: R^n × R^n → R_+ satisfying

(4)  ψ(x, y) = 0  ⟺  x ∈ K, y ∈ K, ⟨x, y⟩ = 0,

so that the SOCCP can be reformulated as the unconstrained minimization problem

(5)  min_{ζ ∈ R^n} Ψ(ζ) := ψ(F(ζ), G(ζ)),

in the sense that ζ is a solution to (1) if and only if it solves (5) with zero optimal value. We call such a ψ a merit function associated with K. Note that smooth merit functions also play a key role in the globalization of semismooth and smoothing Newton methods.

This paper is concerned with the generalized Fischer-Burmeister (FB) merit function

(6)  ψ_p(x, y) := (1/2) ‖φ_p(x, y)‖²,

where p is a fixed real number from (1, +∞) and φ_p: R^n × R^n → R^n is defined by

(7)  φ_p(x, y) := (|x|^p + |y|^p)^{1/p} − (x + y),

with |x|^p being the vector-valued function (or Löwner function) associated with |t|^p (t ∈ R); see Section 2 for the definition. Clearly, when p = 2, ψ_p reduces to the FB merit function ψ_FB(x, y) := (1/2) ‖φ_FB(x, y)‖², where φ_FB: R^n × R^n → R^n is the FB SOC complementarity function defined by

φ_FB(x, y) := (x² + y²)^{1/2} − (x + y),

with x² = x ∘ x being the Jordan product of x with itself and x^{1/2}, for x ∈ K, being the unique vector such that x^{1/2} ∘ x^{1/2} = x. The function ψ_FB is shown to be a smooth merit function with globally Lipschitz continuous derivative [0, ]. Such a desirable property has also been proved for the FB matrix-valued merit function [30, 3].

In this paper we study the favorable properties of ψ_p. The motivations for studying this family of merit functions are as follows. In the setting of NCPs, ψ_p is shown to share all the favorable properties of the FB merit function (see [9, 8]), and the performance profile in [5] indicates that the semismooth Newton method based on φ_p with a smaller p performs better than with a larger p. Thus it is natural to ask whether ψ_p has the desirable properties of the FB merit function in the setting of SOCCPs, and what performance the merit function method and the Newton-type methods based on φ_p display with respect to p. This work is the first step in resolving these questions.
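As a concrete illustration of (6)-(7), here is a minimal Python sketch (again ours, anticipating the spectral factorization recalled in Section 2) that evaluates φ_p and ψ_p for a single cone K^n; the function names are illustrative.

```python
import numpy as np

def _spectral(x):
    """Spectral values and vectors of x = (x1, x2) w.r.t. K^n (n >= 2); cf. (8)-(9)."""
    x = np.asarray(x, dtype=float)
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    if nrm > 0:
        xbar = x2 / nrm
    else:                                   # x2 = 0: any unit vector may be used
        xbar = np.zeros_like(x2)
        xbar[0] = 1.0
    lam = np.array([x1 - nrm, x1 + nrm])
    u = (0.5 * np.concatenate(([1.0], -xbar)),
         0.5 * np.concatenate(([1.0], xbar)))
    return lam, u

def abs_power(x, p):
    """Loewner function |x|^p = |lam1|^p u1 + |lam2|^p u2 associated with |t|^p."""
    lam, (u1, u2) = _spectral(x)
    return abs(lam[0]) ** p * u1 + abs(lam[1]) ** p * u2

def pth_root(w, p):
    """Loewner function w^(1/p) for w in K^n (both spectral values nonnegative)."""
    lam, (u1, u2) = _spectral(w)
    lam = np.maximum(lam, 0.0)              # guard against rounding noise
    return lam[0] ** (1.0 / p) * u1 + lam[1] ** (1.0 / p) * u2

def phi_p(x, y, p=3.0):
    """Generalized FB SOC complementarity function (7)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return pth_root(abs_power(x, p) + abs_power(y, p), p) - (x + y)

def psi_p(x, y, p=3.0):
    """Generalized FB merit function (6): psi_p = 0.5 * ||phi_p||^2."""
    r = phi_p(x, y, p)
    return 0.5 * float(np.dot(r, r))
```

For instance, for the complementary pair x = (1, 1), y = (1, −1) in R^2 one obtains psi_p(x, y) = 0 for every p > 1, in line with Lemma 2.5 below.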

Although there are some papers [4, 6, ] studying the smoothness of merit functions for SOCCPs, the analysis techniques therein are not applicable to the general function ψ_p. We hope that the analysis technique of this paper will be helpful in handling general Löwner operators. The main contribution of this paper is to show that ψ_p with p ∈ (1, 4) is a smooth merit function associated with K, and to establish the coerciveness of Ψ_p(ζ) := ψ_p(F(ζ), G(ζ)) under the uniform Jordan P-property and the linear growth of F.

Throughout this paper we focus on the case K = K^n; all the analysis can be carried over to the general case where K is the Cartesian product of K^{n_i}. To this end, for any given x ∈ R^n with n > 1, we write x = (x_1, x_2), where x_1 is the first component of x and x_2 is the column vector consisting of the remaining components of x; we let x̄_2 = x_2/‖x_2‖ whenever x_2 ≠ 0, and otherwise let x̄_2 be an arbitrary vector in R^{n−1} with ‖x̄_2‖ = 1. We denote by int K^n, bd K^n and bd⁺ K^n the interior, the boundary, and the boundary excluding the origin, respectively, of K^n. For any x, y ∈ R^n, x ⪰_{K^n} y means x − y ∈ K^n, and x ≻_{K^n} y means x − y ∈ int K^n. For a real symmetric matrix A, we write A ⪰ 0 (respectively, A ≻ 0) to mean that A is positive semidefinite (respectively, positive definite). For a differentiable mapping F: R^n → R^m, ∇F(x) denotes the transposed Jacobian of F at x. For nonnegative α and β, α = O(β) means α ≤ Cβ for some C > 0 independent of α and β. The notation I always represents an identity matrix of appropriate dimension.

2. Preliminaries

The Jordan product of any two vectors x = (x_1, x_2) and y = (y_1, y_2) associated with K^n (see [4]) is defined as

x ∘ y := (⟨x, y⟩, y_1 x_2 + x_1 y_2).

The Jordan product, unlike scalar or matrix multiplication, is not associative, which is a main source of complication in the analysis of the SOCCP. The identity element under this product is e = (1, 0, ..., 0)^T ∈ R^n. For any given x ∈ R^n, define L_x: R^n → R^n by

L_x y := [ x_1  x_2^T ;  x_2  x_1 I ] y = x ∘ y  for all y ∈ R^n.

Recall from [4] that each x ∈ R^n admits a spectral factorization associated with K^n:

(8)  x = λ_1(x) u_x^{(1)} + λ_2(x) u_x^{(2)},

where λ_i(x) and u_x^{(i)}, i = 1, 2, are the spectral values of x and the corresponding spectral vectors, respectively, defined by

(9)  λ_i(x) := x_1 + (−1)^i ‖x_2‖  and  u_x^{(i)} := (1/2) (1, (−1)^i x̄_2).

The factorization is unique when x_2 ≠ 0. The following lemma states the relation between the spectral factorization of x and the eigenvalue decomposition of L_x.

Lemma 2.1 ([4, 5]). For any given x ∈ R^n, let λ_1(x), λ_2(x) be the spectral values of x and let u_x^{(1)}, u_x^{(2)} be the corresponding spectral vectors. Then

L_x = U_x diag(λ_1(x), x_1, ..., x_1, λ_2(x)) U_x^T,

with U_x = [√2 u_x^{(1)}  u^{(3)}  ⋯  u^{(n)}  √2 u_x^{(2)}] ∈ R^{n×n} an orthogonal matrix, where u^{(i)} = (0, u^i) for i = 3, ..., n, with u^3, ..., u^n being any unit vectors that span the linear subspace orthogonal to x_2.
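A quick numerical check of the factorization (8)-(9) and of Lemma 2.1 (a sketch reusing the hypothetical `_spectral` helper from the snippet in Section 1; `arrow` builds the matrix representation of L_x):

```python
import numpy as np

def arrow(x):
    """Arrow matrix L_x, i.e. L_x @ y = x o y (the Jordan product with x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    L = np.zeros((n, n))
    L[0, :] = x
    L[:, 0] = x
    L[1:, 1:] += x[0] * np.eye(n - 1)
    return L

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
lam, (u1, u2) = _spectral(x)                          # helper from the earlier sketch
assert np.allclose(lam[0] * u1 + lam[1] * u2, x)      # spectral factorization (8)
# Lemma 2.1: the eigenvalues of L_x are lam1(x), lam2(x) and x1 (multiplicity n - 2)
eigs = np.sort(np.linalg.eigvalsh(arrow(x)))
ref = np.sort(np.concatenate((lam, np.full(x.size - 2, x[0]))))
assert np.allclose(eigs, ref)
```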

By Lemma 2.1, clearly L_x ≻ 0 if and only if x ≻_{K^n} 0, L_x ⪰ 0 iff x ⪰_{K^n} 0, and L_x is invertible iff x_1 ≠ 0 and det(x) := x_1² − ‖x_2‖² ≠ 0. Also, if L_x is invertible, then

(10)  L_x^{−1} = (1/det(x)) [ x_1  −x_2^T ;  −x_2  (det(x)/x_1) I + (1/x_1) x_2 x_2^T ].

Given a scalar function g: R → R, define a vector function g^soc: R^n → R^n by

(11)  g^soc(x) := g(λ_1(x)) u_x^{(1)} + g(λ_2(x)) u_x^{(2)}.

If g is defined on a subset of R, then g^soc is defined on the corresponding subset of R^n. The definition of g^soc is unambiguous whether x_2 ≠ 0 or x_2 = 0. In this paper we often use the vector-valued functions associated with |t|^p (t ∈ R) and t^{1/p} (t ≥ 0), respectively written as

|x|^p := |λ_1(x)|^p u_x^{(1)} + |λ_2(x)|^p u_x^{(2)}  for x ∈ R^n,
x^{1/p} := λ_1(x)^{1/p} u_x^{(1)} + λ_2(x)^{1/p} u_x^{(2)}  for x ∈ K^n.

These two functions show that φ_p in (7) is well defined for any x, y ∈ R^n. We next present four lemmas that will often be used in the subsequent analysis.

Lemma 2.2 ([3, 4]). For any given 0 ≤ ρ ≤ 1, ξ^ρ ⪰_{K^n} η^ρ whenever ξ ⪰_{K^n} η ⪰_{K^n} 0.

Lemma 2.3. For any nonnegative real numbers a and b, the following results hold:
(a) (a + b)^ρ ≥ a^ρ + b^ρ if ρ > 1, and equality holds iff ab = 0;
(b) (a + b)^ρ ≤ a^ρ + b^ρ if 0 < ρ < 1, and equality holds iff ab = 0.

Proof. Without loss of generality, we assume that a ≤ b and b > 0. Consider the function h(t) = (t + 1)^ρ − (t^ρ + 1) for t ≥ 0. It is easy to verify that h is increasing on [0, +∞) when ρ > 1. Hence h(a/b) ≥ h(0) = 0, i.e., (a + b)^ρ ≥ a^ρ + b^ρ. Also, h(a/b) = h(0) if and only if a/b = 0; that is, (a + b)^ρ = a^ρ + b^ρ if and only if ab = 0. This proves part (a). Note that h is decreasing on [0, +∞) when 0 < ρ < 1, and a similar argument leads to part (b).

Lemma 2.4. For any ξ, η ∈ K^n, if ξ + η ∈ bd K^n, then one of the following cases must hold: (i) ξ = 0, η ∈ bd K^n; (ii) ξ ∈ bd K^n, η = 0; (iii) ξ = γη for some γ > 0 with η ∈ bd⁺ K^n.

Proof. From ξ, η ∈ K^n and ξ + η ∈ bd K^n, we immediately obtain that

ξ_1 + η_1 ≥ ‖ξ_2‖ + ‖η_2‖ ≥ ‖ξ_2 + η_2‖ = ξ_1 + η_1.

This shows that ξ_2 = 0, or η_2 = 0, or ξ_2 = γη_2 ≠ 0 for some γ > 0. Substituting ξ_2 = 0, or η_2 = 0, or ξ_2 = γη_2 into ξ_1 + η_1 = ‖ξ_2‖ + ‖η_2‖ yields the result.

To close this section, we show that φ_p in (7) is an SOC complementarity function, so that its squared norm ψ_p is a merit function associated with K^n.

Lemma 2.5. Let φ_p be defined by (7). Then for any x, y ∈ R^n, it holds that

φ_p(x, y) = 0  ⟺  x ∈ K^n, y ∈ K^n, ⟨x, y⟩ = 0.

5 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 47 Proof.. From 6 Proposition 6] there exists a Jordan frame { u ( u (} such that x = λ u ( + λ u ( and y = μ u ( + μ u ( with λ i μ i 0fori =. Then (x + y p = (λ + μ p u ( +(λ + μ p u ( x p + y p = (λ p + μp u( +(λ p + μp u(. Since 0 = x y = λ μ + λ μ implies λ μ = λ μ = 0 from the last two eualities and Lemma.3(a we obtain (x + y p = x p + y p andthenφ p (x y =0.. Since φ p (x y =0wehavex = x p + y p y K n y y K n where the ineuality is due to Lemma.. Similarly we have y = p x p + y p x K n x x K n.nowfromφ p (x y =0wehave(x + y p = x p + y p andthen (λ (x + y p +(λ (x + y p =(λ (x p +(λ (x p +(λ (y p +(λ (y p. Noting that h(t =(t 0 + t p +(t 0 t p for a fixed t 0 0isincreasingon0t 0 ] we also have λ (x + y] p +λ (x + y] p (x + y x + y p +(x + y + x y p ( =(λ (x+λ (y p +(λ (x+λ (y p (λ (x p +(λ (y p +(λ (x p +(λ (y p where the last ineuality is due to Lemma.3(a and x y K n. The last two euations imply that all the ineualities on the right-hand side of ( become eualities. Therefore (3 x + y = x y λ (xλ (y =0 λ (xλ (y =0. Assume that x 0andy 0. Sincex y K n from the eualities in (3 we get x = x y = y andx = γy for some γ <0 which implies x y =0. When x =0ory = 0 using the continuity of the inner product yields x y =0. 3. Differentiability of ψ p Unless otherwise stated in the rest of this paper we assume that p> with =( p andg soc is the vector-valued function associated with t p (t R i.e. g soc (x = x p. For any x y R n we define (4 w = w(x y := x p + y p and z = z(x y := p x p + y p. By definitions of x p and y p clearly w := w (x y = λ (x p + λ (x p + λ (y p + λ (y p w := w (x y = λ (x p λ (x p x + λ (y p λ (y p (5 y where x = x x if x 0 and otherwise x is an arbitrary vector in R n with x =andy has a similar definition. Noting that z(x y = p w(x y we have p λ (w+ p λ (w z = z (x y = p λ (w p λ (w (6 z = z (x y = w

6 48 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN where w = w w if w 0 and otherwise w is an arbitrary vector in R n with w =. To study the differentiability of ψ p we need the following two crucial lemmas. The first one gives the properties of the points (x y satisfying w(x y bdk n and the second one provides a sufficient characterization for the continuously differentiable points of z(x y. Lemma 3.. For any (x y with w(x y bdk n we have the following eualities: w (x y = w (x y = p ( x p + y p (7 x = x y = y x y = x T y x y = y x. If in addition w (x y 0 the following eualities hold with w (x y = w (xy w (xy : (8 x T w (x y =x x w (x y =x y T w (x y =y y w (x y =y. Proof. Fix any (x y with w(x y bdk n.since x p y p K n applying Lemma.4 with ξ = x p and η = y p wehave x p bdk n and y p bdk n. This means that λ (x p λ (x p =0and λ (y p λ (y p =0. Sox = x and y = y. Substituting this into w (x y we readily obtain w (x y = p ( x p + y p. To prove other eualities in (7 and (8 we first consider the case where x + x =0andy y =0withx 0andy 0. Under this case w = λ (x p + λ (y p = λ (x p x x λ (y p y y = w which implies that x T y = x y = x y. Together with x = x and y = y wehavethatx y = y x. From the definition of w it follows that x T w = λ (x p x + λ (y p x y y =p ( x p + y p x = w x x w = λ (x p x x x + λ (y p y x y =p ( x p + y p x = w x. Similarly we also have y T w = w y and y T w = w y. The above arguments show that euations (7 and (8 hold under the case where x = x y = y. Using the same arguments we can prove that (7 and (8 hold under any one of the following cases: x = x y = y ;orx = x y = y ;or x = x y = y. Lemma 3.. z(x y is continuously differentiable at (x y with w(x y intk n and x z(x y = g soc (x g soc (z and y z(x y = g soc (y g soc (z where g soc (z =(p w I if w =0 and otherwise g soc (z = p + λ (w λ (w w λ (w w λ (w w T λ (w wt λ (w p(i w w T a(z + w w T λ (w + w w T λ (w Proof. Since t p (t R and p t (t >0 are continuously differentiable by 5 Proposition 5.] or 7 Proposition 5] the functions g soc (x and x are continuously differentiable in R n and intk n respectively. This implies the first part of this.

7 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 49 lemma. A simple calculation gives the expression of z(x y. By the formula in 5 Proposition 5.] g soc p sign(x x p I ] if x =0; (x = b(x c(xx T (9 c(xx a(xi +(b(x a(xx x T if x 0 where x = x x a(x = λ (x p λ (x p λ (x λ (x b(x = p sign(λ (x λ (x p +sign(λ (x λ (x p ] (0 c(x = p sign(λ (x λ (x p sign(λ (x λ (x p ]. We next derive the formula of g soc (z.whenw =0wehaveλ (w =λ (w = w > 0 which by (6 implies z = p w and z = 0. From formula (9 it then follows that g soc (z =p z p I = p w I. Conseuently g soc (z = p w I. When w 0since p λ (w > p λ (w we have z 0andz = z z = w by (6. Using the expression of g soc (z it is easy to verify that b(z +c(z and b(z c(z are the eigenvalues of g soc (z with ( w and( w being the corresponding eigenvectors and a(z is the eigenvalue of multiplicity n with corresponding eigenvectors of the form (0 v i where v...v n are any unit vectors in R n that span the subspace orthogonal to w. Hence g soc (z =Udiag (b(z c(za(z...a(zb(z+c(z U T where U =u v v n u ] R n n is an orthogonal matrix with ( ( ( 0 u = u w = v w i = for i =...n. vi By this we know that g soc (z has the expression given as in the lemma. Now we are in a position to prove the following main result of this section. Proposition 3.. The function ψ p for p ( 4 is differentiable everywhere. Also for any given x y R n ifw(x y =0then x ψ p (x y = y ψ p (x y =0; if w(x y intk n then x ψ p (x y = ( g soc (x g soc (z I φ p (x y ( y ψ p (x y = ( g soc (y g soc (z I φ p (x y; and if w(x y bd + K n then ( sign(x x p x ψ p (x y = x p + y φ p (x y p ( ( sign(y y p y ψ p (x y = x p + y φ p (x y. p Proof. Fix any (x y R n R n.ifw(x y intk n the result is implied by Lemma 3. since φ p (x y =z(x y (x + y. In fact in this case ψ p is continuously differentiable at (x y. Hence it suffices to consider the cases w(x y =0and w(x y bd + K n. In the following arguments x and y are arbitrary vectors in

8 50 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN R n andμ (x y μ (x y are the spectral values of w(x y with ξ ( ξ ( R n being the corresponding spectral vectors. Case. w(x y = 0. Note that (x y =(0 0 in this case. Hence we only need to prove for any x y R n (3 ψ p (x y ψ p (0 0 = z(x y (x + y = O( (x y which shows that ψ p is differentiable at (0 0 with x ψ p (0 0 = y ψ p (0 0 = 0. Indeed z(x y (x + y = p μ (x y ξ ( + p μ (x y ξ ( (x + y (4 p μ (x y + x + y. From the definition of w (x y andw (x y it is easy to obtain that μ (x y =w (x y +w (x y λ (x p + λ (x p + λ (y p + λ (y p. Using the nondecreasing property of p t and Lemma.3(b it then follows that μ (x y ( λ (x p + λ (x p + λ (y p + λ (y p /p p λ (x + λ (x + λ (y + λ (y ( x + y. This together with (4 implies that euation (3 holds. Case. w(x y bd + K n.noww (x y = w (x y 0andoneofx and y is nonzero by (8. We proceed with the arguments in three steps as shown below. Step. We prove that w (x y and w (x y are p times differentiable at (x y =(x y where p denotes the maximum integer not greater than p. Since one of x and y is nonzero we prove this result by considering three possible cases: (i x 0y 0; (ii x =0y 0; and (iii x 0y = 0. For case (i since x x y y λ (x λ (x λ (y and λ (y are infinite times differentiable at (x y and t p is p times continuously differentiable in R it follows that w (x y and w (x y are p times differentiable at (x y. Now assume that case (ii is satisfied. From the arguments in case (i we know that λ (y p + λ (y p and λ (y p λ (y p y y are p times differentiable at (x y. In addition since λ i (x p p x p for i = and x = 0 in this case we have that λ (x p + λ (x p and ( λ (x p λ (x p x are p times differentiable at x with the first p order derivatives being zero. Thus w (x y andw (x y are p times differentiable at (x y. By the symmetry of x y in w(x y and the arguments in case (ii the result also holds for case (iii. Step. We show that ψ p is differentiable at (x y. By the definition of ψ p wehave ψ p (x y = x + y + z(x y z(x y x + y. Since x + y is differentiable it suffices to argue that the last two terms on the right-hand side are differentiable at (x y. By formulas (8-(9 it is not hard to

9 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 5 calculate that (5 (6 z(x y =(μ (x y p +(μ (x y p z(x y x + y = p ( μ (x y x + y + (w (x y T (x + y w (x y + p ( μ (x y x + y (w (x y T (x + y w (x y. Since w (x y 0μ (x y =λ (w > 0 and w (x y andw (x y are differentiable at (x y by Step we have that (μ (x y p and the first term on the right-hand side of (6 is differentiable at (x y. Thus it suffices to prove that (μ (x y p and the last term on the right-hand side of (6 are differentiable at (x y. We first argue that (μ (x y p is differentiable at (x y. Since w (x y 0 and w (x y andw (x y are p times differentiable at (x y bystep the function μ (x y is p times differentiable at (x y. When p< by the mean-value theorem and μ (x y = λ (w = 0 it follows that μ (x y = O( x x + y y for any (x y sufficiently close to (x y and therefore (μ (x y p = O( x x + y y p ]. This shows that (μ (x y p is differentiable at (x y with zero derivative. When p μ (x y is infinite times differentiable at (x y and its first derivative euals zero by the result in the Appendix. From the second-order Taylor expansion of μ (x y at(x y it follows that (μ (x y p = O( x x + y y 4 p ]. This implies that (μ (x y p is differentiable at (x y with zero gradient when p<4. Thus we prove that (μ (x y p is differentiable at (x y with zero gradient when p ( 4. We next consider the last term on the right-hand side of (6. Observe that x + y (w (x y T (x + y w (x y is differentiable at (x y and its function value at (x y euals zero by (8. Hence this term is O( x x + y y which along with μ (x y = O( x x + y y means that the last term of (6 is O(( x x + y y + p =o( x x + y y. This shows that the last term of (6 is differentiable at (x y with zero derivative. Step 3. We derive the formula of x ψ p (x y. From Step we see that ψ p (x y euals the difference between the gradient of (μ (x y p + x + y and that of the first term on the right-hand side of (6 evaluated at (x y. By the Appendix the gradients of (μ (x y /p and (μ (x y /p with respect to x evaluated at (x y =(x y are ( x (μ (x y /p (x y =(xy =(λ (w p p sign(x x p (7 w ( x (μ (x y /p (x y =(xy =(λ (w p p sign(x x p (8. w

10 5 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN By the product and uotient rules for differentiation the gradient of x + y + (w (x y T (x +y w (x y with respect to x evaluated at (x y =(x y works out to be ( x + y w w w T ( (x + y = w w ( + w x w (x y (x y =(xy where the euality uses (8. Along with (7 the gradient of the first term on the right-hand side of (6 with respect to x evaluated at (x y =(x y is (9 (λ (w p (x + y p sign(x x p ( w +(λ (w p (. w In addition the gradient of x + y with respect to x evaluated at (x y = (x y is (x + y. Together with euations (8-(9 we obtain that x ψ p (x y =(x + y+(λ (w p p sign(x x p ( w (λ (w p (x + y p sign(x x p ( w (λ (w p (. w Since λ (w = 0 from (6 it follows that φ p (x y =z(x y (x + y = ( (λ (w p (x + y. w Combining the last two euations and using x w = x and y w = y weget x ψ p (x y =(λ (w p p sign(x x p (φ p (x y+(x + y (λ (w p p sign(x x p (x + y φ p (x y sign(x x p ] = ( x p + y p φ / p (x y where the last euality is from λ (w =w = p ( x p + y p. This proves the first euality in (. By the symmetry of x and y in ψ p the second euality in ( also holds. From the arguments in Step of Case we see that ψ p will be differentiable for p 4 if the first p -order derivatives of μ (x y =w (x y w (x y evaluated at (x y =(x y are eual to zero. We are not clear whether this holds or not. 4. Smoothness of ψ p In the last section ψ p for p ( 4 is proved to be differentiable everywhere. A natural uestion is whether ψ p is continuous or not. In this section we will provide an affirmative answer to it. For this purpose we first establish three technical lemmas which hold for all p>. Lemma 4.. There exists a constant c such that for all (x y with w(x y intk n (30 L x p L F z c p and L y p L F z c p where c is independent of x and y and F means the Frobenius norm.

11 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 53 Proof. Due to the symmetry of x and y in z(x y it suffices to prove the first ineuality. To this end we first prove that for any (x y with w(x y intk n (3 0 λ ( L x p L z p where for a matrix A R n n λ(a R n denotes the vector of eigenvalues of A and means a vector with all components being. Indeed since z K n 0 and x p K n 0 we have L z 0andL x p 0. Applying 0 Theorem 7.6.3] with A = L z and B = L p x p yields that λ(l z L p x p 0 and then λ(l x p L z 0. In addition since z p p K n x p from Lemma. it follows that (z p p p K n ( x p p p i.e. z p K n x p. Then L z p L x p 0. Applying the result of Exercise 7 in 0 p. 468] with A = L z p and B = L x p wehave that λ ( L ( z L p x p. Conseuently λ L x p L z. Together with p λ(l x p L z 0 we prove that (3 holds. p Next we prove that there exists a constant c > 0 such that for all (x y satisfying w(x y intk n L x p L z p F c where c is independent of x and y. Suppose on the contrary that such c does not exist. Then there exists a seuence {(x k y k } R n R n with w(x k y k intk n such that L xk p L (z k p F is unbounded. We assume (taking a subseuence if necessary that lim k L x k p L (z k p F =+. For each k leta k = L xk p may assume that and Bk = L (z k p. Subseuencing if necessary we lim k A k A k F = A and lim k B k B k F = B. In the following arguments for any A B R n n with all eigenvalues in R we let λ (A andλ (A be the vectors obtained by rearranging the coordinates of λ(a in the decreasing and increasing orders respectively. That is if λ (A = (λ (A...λ n(a then λ (A λ n(a. Similarly if λ (A =(λ (A... λ n(a then λ (A λ n(a. We write λ(a λ(b if l j= λ j (A l j= λ j (B for any l n and n j= λ j (A = n j= λ j (B. Since Ak 0and B k 0foreachk applying the result of 3 Problem III.6.4] gives that λ ( A k A k F ( ( B λ k A k B k B k λ F A k F B k F λ ( A k A k F ( B λ k B k F where denotes the componentwise product. Since lim k A k F B k F =+ taking the limit k + and using (3 and the continuity of λ( we get (3 λ (A λ (B 0 λ (A λ (B. Since A 0andB 0 each component of λ (A andλ (B is nonnegative and the first relation of (3 then implies λ (A λ (B =0. Note that for each

12 54 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN k all eigenvalues of A k and B k respectively are given as follows: λ (x k p λ (x k p + λ (x k p ]... λ (x k p + λ (x k p ] λ (x k p } {{ } n λ (z k ] p λ (z k ] p +λ (z k ]... p λ (z k ] p +λ (z k ] }{{ p λ (z } ] p n which by the positive homogeneousness of eigenvalue function means that λ (A λ (A = = λ n (A λ n(a 0 0 λ (B λ (B = = λ n (B λ n(b. Then from λ (A λ (B = 0 we deduce that λ (B =0andλ (A > 0. (If not we will have λ (A =0 which implies λ(a =0andthenA =0follows by the positive semidefiniteness of A. This contradicts the fact that A F =. Similarly we can deduce that λ n(b > 0andλ n(a = 0. Also either of λ (A and λ (B is zero. Without loss of generality we assume that λ (A =0. Thus the above arguments show that λ (A >λ (A =0= =0=λ n(a λ n(b λ n (B = = λ (B λ (B =0andλ n(b > 0. However from the second relation of (3 and the last two euations we have that n 0= λ j (A λ j (B =λ (A λ n(b +(n λ (A λ (B +λ n(a λ (B j= = λ (A λ n(b > 0 which is clearly impossible. Thus the constant c satisfies the reuirement. Lemma 4. generalizes the result of Lemma 4] for p = where it is achieved by direct computation but we here adopt a different proof techniue. Combining Lemma 4. with the expressions of L x p L z and L p y p L z wehavethe p following result. Lemma 4.. For any x y with w(x y intk n let x = x p and ỹ = y p. Then x +( i x T w λi (w = O( x +( i x w λi (w = O( (33 ỹ +( i ỹ T w λi (w = O( ỹ +( i ỹ w λi (w = O( for i = wherew = w (xy w (xy ando( denotes a uniformly bounded term. Proof. Fix any (x y satisfying w intk n.wewriteλ = λ (w andλ = λ (w for simplicity. From (5 we have λ w λ (x p + λ (x p. Note that ( λ (x p + λ (x p p ( p max( λ (x λ (x λ (x p + λ (x = x p p + p+

13 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 55 for p> and for <p λ (x p + λ (x p ( λ (x + λ (x p = p p x. Therefore (34 { λ p+ p x if p>; p p x if p ( ]. Since x = ( λ (x p + λ (x p and x = ( λ (x p λ (x p we have (35 x = ( λ (x p x ( λ (x p + λ (x p x p + λ (x p x p. Together with (34 we obtain the first two relations in (33 for i =. Notice that ( z p = λ + λ λ λ w = w. By formula (0 and x = x p we calculate that L x p L z euals p ( x + x T w + x x T w x λ w T λ x w T λ λ + xt λ + λ + ( x + x w + x x w x λ w T λ x w T λ + x I λ λ + + λ λ λ + λ λ ( λ + λ xt w w T λ + λ. λ λ ( λ + x λ w w T Substituting the first two relations in (33 for i = into the last euation and noting that x w T λ x w T λ x T λ + λ and x λ + λ are all uniformly bounded by euations (34-(35 we obtain that L x p L z = O( + x T x w O( x w T λ + λ λ ( λ + λ x T λ w w T p O( + x x w λ O( x w T λ + T w = O( + x x O( λ O( + x x w O( λ λ ( λ + λ λ x w w T x w T ( λ + λ ( x x T w λ ( λ + λ w T λ x w T ( λ + λ + λ ( x x w ( λ + λ λ w T This by Lemma 4. implies that the first two relations in (33 hold for i =. By the symmetry of x and y in w(x y the last two relations in (33 also hold. Remark 4.. The first relation of (33 for i = is euivalent to saying that. (36 λ (x p ( x T w + λ (x p ( + x T w = O( λ (w

14 56 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN whereas the second relation for i = is euivalent to saying that ( λ (x p λ (x p x ( λ (x p + λ (x p w λ (w = λ (x p ( x T w + λ (x p ( + x T w (37 ( = O(. λ (w Euations (36 and (37 play an important role in the proof of the following lemma. Lemma 4.3. There exists a positive constant c (independent of x and y such that for all (x y with w(x y intk n g soc (x g soc (z F c and g soc (y g soc (z F c. Proof. By the symmetry of x and y in g soc (z it suffices to prove the first ineuality. Fix any (x y with w = w(x y intk n. Suppose w 0andx 0. By the expressions of g soc (x and g soc (z given by Lemma 3. it is not hard to calculate that p g soc (x g soc (z a (x z a = T (x z b (x z A (x z where ( a (x z = b(x+c(xx T ( w + b(x c(xx T w λ (w λ (w ( ( b(x+c(xx T a (x z = w w b(x c(xx T w w + pc(x ( x x T w w λ (w λ (w a(z b (x z = c(xx + a(xw +(b(x a(xx T ] w x λ (w + c(xx a(xw (b(x a(xx T ] w x λ (w A (x z = c(xx w T λ (w + a(xw w T +(b(x a(xxt w x w T ] c(xx w T a(xw w T (b(x a(xx T w x w T ] λ (w + p a(x(i w w T +(b(x a(x ( x x T x T w x w T ]. a(z From the definitions of a(x b(x andc(x in (0 it follows that max( b(x c(x p ( λ (x p + λ (x p (38 a(x = p t λ (x+( t λ (x p p max( λ (x p λ (x p for some t (0 where the euality is using the mean-value theorem. Therefore (39 a(x p x p p b(x p x and c(x p x p. Noting that 0 λ (w/λ (w < and p λ (w/λ (w λ (w/λ (w we have (40 a(z = λ (w λ (w p λ (w p λ (w = λ (w λ (w/λ (w p λ (w/λ (w λ (w. ]

15 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 57 By (39 (40 and (34 we simplify a (x za (x zb (x z and A (x z as ( a (x z =O( + b(x c(xx T w λ (w ( a (x z =O( b(x c(xx T (4 w w λ (w ( b (x z=o(+ c(x b(xx T w x +a(x(x T w x w ] λ (w ( A (x z=o( c(x b(xx T w x w T +a(x(x T w x w T w w T ]. λ (w By the definitions of b(x andc(x it is easy to verify that b(x c(xx T w p λ (x p ( x T w + λ (x p ( + x T w ] c(x b(xx T w p λ (x p ( x T w + λ (x p ( + x T w ] which together with (36 implies that (4 b(x c(xx T w λ (w = O( c(x b(xx T w λ (w = O(. In addition it is easy to compute that a(x(x T w x w = a (x( x T w ( + x T w a(x(x T w x w T w w T F a (x( x T w ( + x T w. By euation (38 we have a (x p max( λ (x p λ (x p. Using (37 and noting that 0 x T w and0 +x T w we may obtain that a(x(x T w x w a(x(x T w x w T w w T F (43 = O( = O(. λ (w λ (w By (4-(43 a (x za (x zb (x z and A (x z are all uniformly bounded and hence there exists a constant C > 0 such that g soc (x g soc (z] F C. Suppose that x =0orw = 0. Then there exists a seuence {(x k y k } R n R n with x k 0w (x k y k 0andw(x k y k intk n for all k such that x k x and w k w as k. From the above result g soc (x k g soc (z k F c for all k. Noting that g soc (x is continuous since t p is continuously differentiable and g soc (z is continuous at any z(x y intk n weget g soc (x g soc (z F c. The proof is complete. Now we are in a position to prove the continuity of the gradient function ψ p. Proposition 4.. The function ψ p with p ( 4 is smooth everywhere on R n R n. Proof. By Proposition 3. and the symmetry of x and y in ψ p it suffices to prove that x ψ p is continuous at every (x y R n R n.chooseapoint(x y R n R n arbitrarily. When w(x y intk n the conclusion was shown in Proposition 3.. We next consider the other two cases where w(x y =0andw(x y bd + K n. Case. w(x y =0. Nowwehave(x y =(0 0 and x ψ p (0 0 = 0 by Proposition 3. it suffices to show that x ψ p (x y 0as(x y (0 0. If w(x y intk n then x ψ p (x y is given by (; and if w(x y bd + K n then

16 58 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN x ψ p (x y isgivenby(. Since g soc (x g soc (z I and sign(x x p x p + y p are uniformly bounded where the uniform boundedness of the former is due to Lemma 4.3 using the continuity of φ p and noting that φ p (0 0 = 0 immediately yields that x ψ p (x y 0as(x y (0 0. Case. w(x y bd + K n. For any (x y sufficiently close to (x y in order to prove that x ψ p (x y x ψ p (x y we only need to consider the cases where w(x y intk n and w(x y bd + K n.whenw(x y bd + K n x ψ p (x y has an expression of ( which is continuous at (x y since x p + y p > 0by Lemma 3. and then x ψ p (x y x ψ p (x y. We next concentrate on the case w(x y intk n for which case x ψ p (x y = g soc (x g soc (z z (44 g soc (x g soc (z (x + y φ p (x y. We next proceed with the arguments for the following two subcases: x 0and x =0. Subcase (.. x 0. By the expression of x ψ p (x y in(wehave x ψ p (x y = sign(x x p x p + y φ p(x y φ p (x y p = sign(x x p p ( x p + y p x p + y p w sign(x x p x p + y (x+ y φ p(x y p ( where the second euality is by z(x y = p x p + y p. Comparing it with w (44 we see that to prove x ψ p (x y x ψ p (x y as(x y (x y it suffices to argue that when (x y (x y (45 g soc (x g soc (z z sign(x x p p ( x p + y p x p + y p w and (46 g soc (x g soc (z x sign(x x p x p + y x p g soc (x g soc (z y sign(x x p x p + y y. p First of all we prove (45. Since w(x y bd + K n implies x p bdk n wehave (47 a(x = p sign(x x p b(x = p p sign(x x p c(x = p p x p. Since w (x y 0 (if not w(x y = 0 we have w = w (x y 0. By the expressions of g soc (x and g soc (z it is not hard to calculate that g soc (x g soc (z z euals λ (w p p g soc (x ( w + p λ (w ( g soc (x g soc (z w

17 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 59 where z = z(x y w = w(x y and w = w w. Note that λ (w 0and w w with w = w w as (x y (x y. By Lemma 4.3 the last term on the right-hand side tends to 0 whereas ( by the continuity of g soc the first term approaches p λ (w p g soc (x. Thus together with λ w (w =w = p ( x p + y p it holds that as (x y (x y ( (48 g soc (x g soc (z z p p ( x p + y p p g soc (x. w In addition using euations (47 and (9 we readily obtain that g soc (x = p p sign(x x p x T x x x p ( I + p x x T x. This along with (48 means that as (x y (x y g soc (x g soc (z z approaches sign(x x p ( x p + y p p =sign(x x p ( x p + y p p x T /x x x ( w p I + ( p x x T x ] ( w where the euality is using Lemma 3.. This shows that euation (45 holds. Next we prove the second relation of (46 and an analogous argument can be used to prove the first relation. Let (ζ ζ := g soc (x g soc (z y. We only need to establish when (x y (x y (49 ζ sign(x x p x p + y y p and ζ sign(x x p x p + y y. p Note that x 0for(x y sufficiently close to (x y. By euation (9 and the expression of g soc (z in Lemma 3. a direct calculation yields that (50 pζ = λ (w + λ (w b(x +c(x (x T w ] y +(w T y ] b(x c(x (x T w ] y (w T y ] + p a(z c(x (x T y (x T w (w T y ]

18 60 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN (5 pζ = λ (w + λ (w + p a(z + λ (w λ (w ] c(x x + a(x w +(b(x a(x (x T w x y c(x x (w T + a(x w (w T +(b(x a(x (x T w x (w T ] y ( ] a(x (I w (w T +(b(x a(x x (x T (x T w x (w T y ] c(x x a(x w (b(x a(x (x T w x y c(x x (w T a(x w (w T (b(x a(x (x T w x (w T ] y where a(x b(x andc(x are defined as in (0 with x replaced by x. λ (w b(x c(x x and w are continuous at (x y it follows that Since λ (w w = p ( x p + y p b(x +c(x (x T w ] y +(w T y ] ( b(x+c(xx T w (y + y T w as (x y (x y. This along with Lemma 3. and euation (47 implies that the first term on the right-hand side of (50 tends to p sign(x x p y. Since x p + y p y (w T y y y T w =0 (x T y (x T w (w T y x T y x T w w T y =0 whereas (b(x c(x (x T w / λ (w andpc(x /a(z are uniformly bounded by the proof of Lemma 4. the last two terms of (50 tend to 0 as (x y (x y and we prove the first relation in (49. We next prove the second relation of (49. From the above discussions the first three terms on the right-hand side of (5 respectively tend to c(xx + a(xw +(b(x a(xx T ] w x y w c(xx + a(xw +(b(x a(xx T ] w x w T y w p a(x(i w w T +(b(x a(x ( x x T x T w x w T ] y a(z as (x y (x y whose sum by Lemma 3. and formula (47 can be simplified as sign(x c(x+b(x] y = p sign(x x p y. w x p + y p Observe that the sum of the last two terms on the right-hand side of (5 can be rewritten as c(x x λ (w a(x w (b(x a(x (x T w x ] (y (w T y which clearly tends to zero as (x y (x y since the first term is uniformly bounded by the proof of Lemma 4. whereas the term y (w T y y w T y = 0. Thus we complete the proof of the second relation in (49. Conseuently the second relation in (45 follows. This shows that x ψ p (x y x ψ p (x y as (x y (x y.

19 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 6 Subcase (.. x =0. Nowwehavex =0from x p bdk n and g soc (x =0. Hence x ψ p (x y = sign(x x p (5 x p + y φ p(x y φ p (x y = φ p (0y. p On the other hand since g soc (x = 0 it follows from (48 that g soc (x g soc (z z 0 as (x y (x y; while using Lemma 4.3 and x =0wehave g soc (x g soc (z x 0 as (x y (x y; and from the continuity of φ p and x = 0 it follows that φ p (x y φ p (0y as (x y (x y. Using the last three euations and comparing (44 with (5 we see in order to prove that x ψ p (x y x ψ p (x y as(x y (x y it suffices to show that (53 g soc (x g soc (z y 0 as (x y (x y. Next for any (x y sufficiently close to (x y we write (ζ ζ := g soc (x g soc (z y. If x 0thenpζ and pζ are given by (50 and (5 respectively. Using the same arguments as Subcase. we have that the second term of (50 and the sum of the last two terms of (5 tend to 0 as (x y (x y. Since λ (w a(z b(x c(x and w are continuous at (x y x =andb(x = c(x = 0 from (47 and x = 0 the first term and the third term of (50 also tend to 0. This proves that pζ 0 as (x y (x y. We next prove that the first three terms of (5 also tend to 0. From the mean-value theorem a(x = p t λ (x +( t λ (x p for some t (0. Note that the function t p (p > is continuous on R whereas λ (x 0 and λ (x 0 as (x y (x y. So a(x 0 when (x y (x y. In addition as (x y (x y b(x 0 c(x 0 λ (w w > 0 and a(z a(z > 0. This implies that the first three terms of (5 also tend to 0. Conseuently pζ 0 as (x y (x y. Thus (53 holds for this case. If x = 0 then using (9 and the expression of g soc (z in Lemma 3. we have ζ = psign(x x p y λ (w +(w T y psign(x ] + x p y λ (w (w T y ] ζ = psign(x x p y λ (w +(w T y ] w psign(x x p λ (w + p sign(x x p a(z y w (w T y ]. y (w T y ] w Since sign(x x p is continuous and λ (w > 0 we have as (x y (x y psign(x x p y λ (w +(w T y ] psign(x x p (y + w T y =0. λ (w

20 6 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN In addition x p / λ (w is bounded with the bound independent of x and y because λ (w =w w x p by (5 when x = 0. Besides y (w T y y w T y =0as(x y (x y where the euality is due to Lemma 3.. Hence we have psign(x x p (y λ (w (w T y 0 as (x y (x y. Thus we prove that ζ 0 and the first two terms of ζ tend to 0 as (x y (x y. Since a(z = λ (w λ (w p λ (w p and λ (w (y w (w T y are continuous we have a(z λ (w λ (w λ (w p λ (w = λ (w p and y w (w T y ] y w w T y =0 as (x y (x y where the last euality is due to Lemma 3.. This together with sign(x x p sign(x x p = 0 means that the last term of ζ also tends to 0. Thus we show that ζ tends to 0 as (x y (x y. Conseuently (53 holds in this case. Remark 4.. Note that the proof of Proposition 4. is also suitable for p 4. Hence if ψ p with p 4 is differentiable then it must be continuously differentiable. To close this section we present a property of the partial gradients x ψ p and y ψ p. Since the proof is easy by a direct calculation and Lemma 3. we omit it. Lemma 4.4. For any x y R n it always holds that x x ψ p (x y + y y ψ p (x y = φ p (x y. 5. Coerciveness of the function Ψ p In this section we study under what conditions the merit function Ψ p is coercive i.e. lim sup x Ψ p (x = which plays a crucial role in analyzing the global convergence of merit function methods and euation-based methods based on φ p. The following two technical lemmas are needed. Lemma 5.. Let φ p and ψ p be given by (7 and (6 respectively. Then 4ψ p (x y φ p (x y] + max ( ( x + ( y + x y R n where ( + means the minimum Euclidean distance projection onto K n. Proof. The first ineuality is direct and the second one is due to Lemma 7(c] and the fact that p x p + y p x K n and p x p + y p y K n. Lemma 5.. Assume that {(x k y k } R n R n satisfies either of the conditions (a: λ (x k or λ (y k ; (b: {λ (x k } and {λ (y k } are bounded below λ (x k λ (y k + and xk x k y k y k 0. Then when p is a rational number it holds that lim sup k ψ p (x k y k =+.

21 ON THE GENERALIZED FB MERIT FUNCTION FOR SOCCP 63 Proof. If {(x k y k } satisfies (a the result follows from Lemma 5. and the fact ( x k + =min ( 0λ (x k +min ( 0λ (x k. It remains to consider the case where {(x k y k } satisfies (b. Now from the given assumptions we have (taking a subseuence if necessary x k + and y k +. Without loss of generality we assume (subseuencing if necessary that (54 lim k xk / x k = x and lim k yk / y k = y. Since {λ (x k } and {λ (y k } are bounded below there exists a fixed element d R n such that x k d intk n and y k d intk n for each k. (Indeed letting γ be the lower bound of {λ (x k } and {λ (y k } we have x k (γ e intk n and y k (γ e intk n since λ (z k (γ e λ (z k +λ (( γe γ + γ = for z k = x k or y k. So xk d x k intkn and yk d y k intkn for each k. This implies x that k x k Kn and yk y k Kn and conseuently x k K n and y k K n for all sufficiently large k. We will continue the arguments using three cases as shown below where all k are assumed to be sufficiently large. Case. The seuence { x k / y k } is unbounded. Since p is a rational number we may write p = n/m with n m being natural numbers and n>m. Suppose that the conclusion does not hold i.e. {φ p (x k y k } is bounded. From the definition of φ p and x k y k K n wehave(x k n m +(y k n m = x k + y k + φ p (x k y k ] n m which is euivalent to (55 (x k n m +(y k ] n m m = x k + y k + φ p (x k y k ] n. Since { x k / y k } is unbounded x k + y k + and n > m by expanding ] (x k n m +(y k n m m = (x k n m +(y k ] n m (x k n m +(y k ] n m we obtain that the left-hand side of (55 is (x k n +(y k n + o( x k n y k whereas }{{} m by expanding x k + y k + φ p (x k y k ] n and noting that {φp (x k y k } is bounded and { x k / y k } is unbounded the right-hand side of (55 is (x k +y k n +o( x k n y k which can be further written as (x k n +(y k n +(x k n y k +(x k ((x k n y k + + x k (x k ( ((x k y k + x k (x k (x k ( (x k y k }{{}}{{} n 3 n + o( x k n y k. Here o( x k n y k e denotes the term e k satisfying lim k k x k n y k = 0. Therefore (x k n y k +(x k ((x k n y k + + x k (x k ( ((x k y k }{{} n 3 +x k (x k (x k ( (x k y k = o( x k n y k. }{{} n Making the inner product with the unit element e for the both sides then gives n (x k n y k = o( x k n y k.

22 64 SHAOHUA PAN SANGHO KUM YONGDO LIM AND JEIN-SHAN CHEN Dividing the both sides by x k n y k and taking the limit k we get (x n y = 0. Noting that x y K n and x = y = from (x n y = 0 we deduce that y = y and (x n = α(y y for some α>0. From this it is easy to get x y = 0 which by (54 contradicts the given condition that xk x k y k y k 0. Thus we prove that the conclusion lim sup k ψ p (x k y k =+ holds. Case. The seuence { y k / x k } is unbounded. By the symmetry of x and y in φ p (x y using the same arguments as in Case leads to the desired result. Case 3. The seuences { y k / x k } and { x k / y k } are bounded. In this case taking subseuences of {x k } and {y k } if necessary we may assume that y lim k k x k = c for some 0 <c<+. By the definition of φ p and x k y k K n we have (x k p +(y k p = x k + y k + φ p (x k y k ] p. Suppose that the conclusion does not hold. Then dividing the both sides of the last euality by x k and taking the limit k itisnothardtoobtain (x p +(cy p =(x + cy p which is euivalent to saying that φ p (x cy =0sincex y K n. Therefore x cy =0. This clearly contradicts the given condition xk x k y k 0 and the y k result follows. Remark 5.. So far we cannot prove the result of Lemma 5. for an irrational number p by using the dense of rational numbers in R although numerical test showsthatitiscorrect. In the following assume that K has the Cartesian structure in (. Recall that F : R n R n is said to have the uniform Jordan P -property if there exists a constant ϱ>0 such that for any ζξ R n there is a ν {...m} such that λ (ζ ν ξ ν (F ν (ζ F ν (ξ] ϱ ζ ξ where λ (ζ ν ξ ν (F ν (ζ F ν (ξ] means the second spectral value of (ζ ν ξ ν (F ν (ζ F ν (ξ; and F is said to have the linear growth if there is a constant c>0 such that F (ζ F (0 + c ζ. Proposition 5.. Suppose that G is an identity mapping and F has the uniform Jordan P -property and satisfies the linear growth. Then Ψ p (ζ is coercive for a rational p. Proof. Suppose on the contrary that there is a constant γ > 0 and a seuence {ζ k } R n with ζ k such that Ψ p (ζ k γ for all k. Let ζ k =(ζ k...ζm k with ζi k R n i for i =...m. Let I := { i {...m} {ζi k} is unbounded}. Clearly I. Define { ξi k 0 if i I = ζi k i =...m. otherwise Then the seuence {ξ k } is bounded. Since F has the uniform Jordan P -property λ (ζ k ξ k (F (ζ k F (ξ k ] ϱ ζ k ξ k

for some ϱ > 0. Let z^k := (ζ^k − ξ^k) ∘ (F(ζ^k) − F(ξ^k)) for each k, and suppose that each z^k has the spectral decomposition z^k = λ_1(z^k) u_1^k + λ_2(z^k) u_2^k. Then we obtain that

(56)  ϱ‖ζ^k − ξ^k‖² ≤ λ_2(z^k) = 2⟨z^k, u_2^k⟩ = 2⟨(ζ^k − ξ^k) ∘ (F(ζ^k) − F(ξ^k)), u_2^k⟩ ≤ 2‖ζ^k − ξ^k‖ ‖F(ζ^k) − F(ξ^k)‖.

This implies that ‖F(ζ^k)‖ → ∞. Since Ψ_p(ζ^k) ≤ γ for each k, from Lemma 5.2(a) it follows that {λ_1(ζ^k)} and {λ_1(F(ζ^k))} are bounded below. Together with ‖ζ^k‖ → ∞ and ‖F(ζ^k)‖ → ∞, we obtain λ_2(ζ^k) λ_2(F(ζ^k)) → +∞. In addition, from (56) and the linear growth of F, we necessarily have that (ζ^k/‖ζ^k‖) ∘ (F(ζ^k)/‖F(ζ^k)‖) does not tend to 0. If it did, then on one hand, from the boundedness of {ξ^k}, we would have lim_k (ζ^k − ξ^k) ∘ (F(ζ^k) − F(ξ^k)) / (‖ζ^k‖ ‖F(ζ^k)‖) = 0, while on the other hand

lim inf_k  ϱ‖ζ^k − ξ^k‖² / (‖ζ^k‖ ‖F(ζ^k)‖)  ≥  lim_k  ϱ‖ζ^k − ξ^k‖² / (‖ζ^k‖ (‖F(0)‖ + c‖ζ^k‖))  =  ϱ/c > 0,

which is impossible by (56). By Lemma 5.2(b), lim sup_k ψ_p(ζ^k, F(ζ^k)) = ∞. This contradicts Ψ_p(ζ^k) ≤ γ for all k. Hence Ψ_p is coercive.

6. Numerical results

In this section we check the numerical performance of the merit function Ψ_p corresponding to different p ∈ (1, 4] by solving the unconstrained minimization reformulation min_{ζ ∈ R^n} Ψ_p(ζ) for convex SOCPs of the form (3), whose KKT conditions can be rewritten as (1) with

(57)  F(ζ) := x̂ + (I − A^T(AA^T)^{−1}A) ζ  and  G(ζ) := ∇f(F(ζ)) − A^T(AA^T)^{−1}A ζ,

where x̂ ∈ R^n satisfies A x̂ = b. All tests were done on a PC with a Pentium 4 2.8GHz CPU and 5MB memory, and the codes were all written in Matlab 6.5. During the testing we used the Cholesky factorization of AA^T to evaluate F and G in (57), which was computed via the Matlab routine chol. The vector x̂ satisfying Ax̂ = b was computed with Matlab's solver "\", i.e., x̂ = A\b. We employed the L-BFGS method, a limited-memory quasi-Newton method with five limited-memory vector-updates, to solve the minimization problem min_{ζ ∈ R^n} Ψ_p(ζ). For the scaling matrix H_0 = γI in the L-BFGS we adopted γ = Δζ^T Δη / Δη^T Δη, as recommended by [7, p. 6], where Δζ := ζ − ζ_old and Δη := ∇Ψ_p(ζ) − ∇Ψ_p(ζ_old). To ensure convergence, we reverted to the steepest descent direction −∇Ψ_p(ζ) whenever Δζ^T Δη ≤ 10^{−5} ‖Δζ‖ ‖Δη‖. We adopted the nonmonotone line search in [7] to seek a suitable step-length, i.e., we computed the smallest nonnegative integer l such that

Ψ_p(ζ^k + ρ^l d^k) ≤ W_k + σ ρ^l ∇Ψ_p(ζ^k)^T d^k,

where d^k is the direction generated by L-BFGS at the kth iteration, ρ, σ ∈ (0, 1) are given parameters, and W_k = max_{j = k−m_k, ..., k} Ψ_p(ζ^j), where, for given nonnegative integers m and s, m_k = 0 if k ≤ s and m_k = min{m_{k−1} + 1, m} otherwise. For p = 4 we use the gradient formula in Proposition 3.1, though its differentiability is not established.
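For illustration only, the following Python sketch minimizes Ψ_p(ζ) = ψ_p(F(ζ), G(ζ)) with SciPy's L-BFGS-B in place of the authors' Matlab L-BFGS code; it uses finite-difference gradients and SciPy's built-in (monotone) line search rather than the analytic gradient of Proposition 3.1 and the nonmonotone search described above, so it mirrors the setup only loosely. The names `make_Psi` and `solve_soccp` are ours, and `psi_p` refers to the sketch given after (6)-(7).

```python
import numpy as np
from scipy.optimize import minimize

def make_Psi(F, G, p=3.0):
    """Psi_p(zeta) = psi_p(F(zeta), G(zeta)) for a single cone K^n."""
    return lambda zeta: psi_p(F(zeta), G(zeta), p)

def solve_soccp(F, G, n, p=3.0, tol=1e-6):
    """Minimize Psi_p with a limited-memory BFGS method (five correction pairs)."""
    res = minimize(make_Psi(F, G, p), np.zeros(n), method="L-BFGS-B",
                   options={"maxcor": 5, "gtol": tol, "maxiter": 5000})
    return res.x, res.fun
```

In a faithful implementation, the analytic gradients ∇_x ψ_p and ∇_y ψ_p from Propositions 3.1 and 4.1, combined with ∇F and ∇G through the chain rule, should be supplied to the solver instead of relying on finite differences.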

Throughout the experiments we chose the following parameters for the algorithm: ρ = 0.5, σ = 10^{−4}, m = 5 and s = 5. The initial point was set to ζ^0 = 0, and the algorithm was terminated whenever max{Ψ_p(ζ^k), |⟨F(ζ^k), G(ζ^k)⟩|} ≤ 10^{−6} or the step-length fell below a prescribed tolerance.

The first group of test instances consists of the linear SOCPs with sparse A from [9]. Numerical results are reported in Table 1, where the first row gives the name of each problem and the dimension (m, n) of A; Ψ_p(ζ) denotes the value of Ψ_p at the final iterate, Gap denotes the value of ⟨G(ζ), F(ζ)⟩ at the final iterate, NF and Cpu record the number of function evaluations and the CPU time in seconds for solving each test problem, and ** means that the method fails due to too small a step-length. Table 1 shows that the merit function method based on Ψ_p can solve nb-L2 and nb-L2-bessel successfully over the whole range of p tested, but for problem nb it fails due to too small a step-length when p > 3.5 or p is close to 1, and it cannot yield a desirable decrease of Ψ_p even for p = 2. The convergence process plotted in Figure 1 for nb-L2-bessel shows that the method with a smaller p reduces Ψ_p faster at the beginning of the iterations, whereas once the value of Ψ_p(ζ) is below 10^{−5} the method with a larger p reduces Ψ_p faster. This suggests that the method with a larger p has a better convergence rate, which coincides with the numerical performance of the generalized FB NCP functions; see [8].

Table 1. Numerical results with different p for the three DIMACS problems nb (123, 2383), nb-L2 (123, 4195) and nb-L2-bessel (123, 2641); for each problem and each tested value of p the columns report Ψ_p(ζ), NF, Gap and Cpu.

Figure 1. Numerical results with different p for the problem nb-L2-bessel (merit function values versus iterations).

The second group of test problems consists of convex SOCPs with dense A. To generate such test problems randomly, we considered the problem of minimizing a sum of the k largest Euclidean norms plus a convex regularization term:

min_{u ≥ 0}  Σ_{i=1}^{k} s_{[i]} + h(u),

where s_{[1]}, ..., s_{[r]} are the norms s_1, ..., s_r sorted in nonincreasing order, with r ≥ k and s_i = ‖b_i − A_i u‖ for i = 1, ..., r, where A_i ∈ R^{m_i × l} and b_i ∈ R^{m_i}, and h: R^l → R is a twice continuously differentiable convex function. The problem can be converted into

min  (1 − k/r) Σ_{i=1}^{r} v_i + (k/r) Σ_{i=1}^{r} w_i + h(u)
s.t.  A_i u + s_i = b_i,  i = 1, ..., r,
      (w_1 − v_1) − (w_2 − v_2) = 0,
      ...
      (w_1 − v_1) − (w_r − v_r) = 0,
      u ≥ 0,  v_i ≥ 0,  (w_i, s_i) ∈ K^{m_i + 1},  i = 1, ..., r.

In our tests we set h(u) := (1/3)‖u‖_3^3, with ‖·‖_3 being the 3-norm, generated each m_i randomly from {2, 3, ..., 10}, and generated each entry of A_i and b_i by a uniform distribution on the intervals [−1, 1] and [−5, 5], respectively. Thus, with m = m_1 + ⋯ + m_r, the constraint matrix is dense. The problem parameters and the numerical results are reported in Table 2, in which the first row lists several groups of different (l, r, k) and the second row gives the corresponding dimension (m, n) of A. From Table 2 we see that, for the test problems with dense A, Ψ_p displays a performance similar to that for the problems with sparse A.
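For reference, here is a short Python sketch of the random data generation just described; the sampling ranges {2, ..., 10}, [−1, 1] and [−5, 5] reflect our reading of the partly garbled text and should be treated as assumptions, and `generate_knorm_data` is an illustrative name.

```python
import numpy as np

def generate_knorm_data(l, r, seed=0):
    """Random (A_i, b_i), i = 1..r, for the sum-of-k-largest-norms test problems."""
    rng = np.random.default_rng(seed)
    A, b = [], []
    for _ in range(r):
        m_i = int(rng.integers(2, 11))                    # m_i assumed in {2,...,10}
        A.append(rng.uniform(-1.0, 1.0, size=(m_i, l)))   # entries of A_i in [-1, 1]
        b.append(rng.uniform(-5.0, 5.0, size=m_i))        # entries of b_i in [-5, 5]
    return A, b
```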


More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

arxiv: v2 [math.oc] 31 Jul 2017

arxiv: v2 [math.oc] 31 Jul 2017 An unconstrained framework for eigenvalue problems Yunho Kim arxiv:6.09707v [math.oc] 3 Jul 07 May 0, 08 Abstract In this paper, we propose an unconstrained framework for eigenvalue problems in both discrete

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

FIXED POINT ITERATIONS

FIXED POINT ITERATIONS FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE BABHRU JOSHI AND M. SEETHARAMA GOWDA Abstract. We consider the semidefinite cone K n consisting of all n n real symmetric positive semidefinite matrices.

More information

Second Order Elliptic PDE

Second Order Elliptic PDE Second Order Elliptic PDE T. Muthukumar tmk@iitk.ac.in December 16, 2014 Contents 1 A Quick Introduction to PDE 1 2 Classification of Second Order PDE 3 3 Linear Second Order Elliptic Operators 4 4 Periodic

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

Uniqueness of the Solutions of Some Completion Problems

Uniqueness of the Solutions of Some Completion Problems Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive

More information

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf. Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Manual of ReSNA. matlab software for mixed nonlinear second-order cone complementarity problems based on Regularized Smoothing Newton Algorithm

Manual of ReSNA. matlab software for mixed nonlinear second-order cone complementarity problems based on Regularized Smoothing Newton Algorithm Manual of ReSNA matlab software for mixed nonlinear second-order cone complementarity problems based on Regularized Smoothing Newton Algorithm Shunsuke Hayashi September 4, 2013 1 Introduction ReSNA (Regularized

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1.

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1. Chapter 1 Metric spaces 1.1 Metric and convergence We will begin with some basic concepts. Definition 1.1. (Metric space) Metric space is a set X, with a metric satisfying: 1. d(x, y) 0, d(x, y) = 0 x

More information

Department of Social Systems and Management. Discussion Paper Series

Department of Social Systems and Management. Discussion Paper Series Department of Social Systems and Management Discussion Paper Series No. 1262 Complementarity Problems over Symmetric Cones: A Survey of Recent Developments in Several Aspects by Akiko YOSHISE July 2010

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

CHAPTER 11. A Revision. 1. The Computers and Numbers therein

CHAPTER 11. A Revision. 1. The Computers and Numbers therein CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Multivariable Calculus

Multivariable Calculus 2 Multivariable Calculus 2.1 Limits and Continuity Problem 2.1.1 (Fa94) Let the function f : R n R n satisfy the following two conditions: (i) f (K ) is compact whenever K is a compact subset of R n. (ii)

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

First-order optimality conditions for mathematical programs with second-order cone complementarity constraints

First-order optimality conditions for mathematical programs with second-order cone complementarity constraints First-order optimality conditions for mathematical programs with second-order cone complementarity constraints Jane J. Ye Jinchuan Zhou Abstract In this paper we consider a mathematical program with second-order

More information

Second Order Optimization Algorithms I

Second Order Optimization Algorithms I Second Order Optimization Algorithms I Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 7, 8, 9 and 10 1 The

More information

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method Advances in Operations Research Volume 01, Article ID 357954, 15 pages doi:10.1155/01/357954 Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Vector Spaces, Orthogonality, and Linear Least Squares

Vector Spaces, Orthogonality, and Linear Least Squares Week Vector Spaces, Orthogonality, and Linear Least Squares. Opening Remarks.. Visualizing Planes, Lines, and Solutions Consider the following system of linear equations from the opener for Week 9: χ χ

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

Permutation invariant proper polyhedral cones and their Lyapunov rank

Permutation invariant proper polyhedral cones and their Lyapunov rank Permutation invariant proper polyhedral cones and their Lyapunov rank Juyoung Jeong Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA juyoung1@umbc.edu

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Journal of Mathematical Analysis and Applications. A one-parametric class of merit functions for the symmetric cone complementarity problem

Journal of Mathematical Analysis and Applications. A one-parametric class of merit functions for the symmetric cone complementarity problem J. Math. Anal. Appl. 355 009 195 15 Contents lists available at ScienceDirect Journal of Mathematical Analysis and Applications www.elsevier.com/locate/jmaa A one-parametric class of merit functions for

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

Sparse Approximation via Penalty Decomposition Methods

Sparse Approximation via Penalty Decomposition Methods Sparse Approximation via Penalty Decomposition Methods Zhaosong Lu Yong Zhang February 19, 2012 Abstract In this paper we consider sparse approximation problems, that is, general l 0 minimization problems

More information

Further Mathematical Methods (Linear Algebra)

Further Mathematical Methods (Linear Algebra) Further Mathematical Methods (Linear Algebra) Solutions For The Examination Question (a) To be an inner product on the real vector space V a function x y which maps vectors x y V to R must be such that:

More information

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based

More information

Xin-He Miao 2 Department of Mathematics Tianjin University, China Tianjin , China

Xin-He Miao 2 Department of Mathematics Tianjin University, China Tianjin , China Pacific Journal of Optimization, vol. 14, no. 3, pp. 399-419, 018 From symmetric cone optimization to nonsymmetric cone optimization: Spectral decomposition, nonsmooth analysis, and projections onto nonsymmetric

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS

POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS Adv. Oper. Theory 3 (2018), no. 1, 53 60 http://doi.org/10.22034/aot.1702-1129 ISSN: 2538-225X (electronic) http://aot-math.org POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Lecture 5, Continuous Optimisation Oxford University Computing Laboratory, HT 2006 Notes by Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The notion of complexity (per iteration)

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Lecture 7 Monotonicity. September 21, 2008

Lecture 7 Monotonicity. September 21, 2008 Lecture 7 Monotonicity September 21, 2008 Outline Introduce several monotonicity properties of vector functions Are satisfied immediately by gradient maps of convex functions In a sense, role of monotonicity

More information

Projection methods to solve SDP

Projection methods to solve SDP Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

On the Moreau-Yosida regularization of the vector k-norm related functions

On the Moreau-Yosida regularization of the vector k-norm related functions On the Moreau-Yosida regularization of the vector k-norm related functions Bin Wu, Chao Ding, Defeng Sun and Kim-Chuan Toh This version: March 08, 2011 Abstract In this paper, we conduct a thorough study

More information

Implicit Functions, Curves and Surfaces

Implicit Functions, Curves and Surfaces Chapter 11 Implicit Functions, Curves and Surfaces 11.1 Implicit Function Theorem Motivation. In many problems, objects or quantities of interest can only be described indirectly or implicitly. It is then

More information

A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function

A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function Volume 30, N. 3, pp. 569 588, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister

More information

The Second-Order Cone Quadratic Eigenvalue Complementarity Problem. Alfredo N. Iusem, Joaquim J. Júdice, Valentina Sessa and Hanif D.

The Second-Order Cone Quadratic Eigenvalue Complementarity Problem. Alfredo N. Iusem, Joaquim J. Júdice, Valentina Sessa and Hanif D. The Second-Order Cone Quadratic Eigenvalue Complementarity Problem Alfredo N. Iusem, Joaquim J. Júdice, Valentina Sessa and Hanif D. Sherali Abstract: We investigate the solution of the Second-Order Cone

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

March 8, 2010 MATH 408 FINAL EXAM SAMPLE

March 8, 2010 MATH 408 FINAL EXAM SAMPLE March 8, 200 MATH 408 FINAL EXAM SAMPLE EXAM OUTLINE The final exam for this course takes place in the regular course classroom (MEB 238) on Monday, March 2, 8:30-0:20 am. You may bring two-sided 8 page

More information

6 Optimization. The interior of a set S R n is the set. int X = {x 2 S : 9 an open box B such that x 2 B S}

6 Optimization. The interior of a set S R n is the set. int X = {x 2 S : 9 an open box B such that x 2 B S} 6 Optimization The interior of a set S R n is the set int X = {x 2 S : 9 an open box B such that x 2 B S} Similarly, the boundary of S, denoted@s, istheset @S := {x 2 R n :everyopenboxb s.t. x 2 B contains

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

Proximal-Like Algorithm Using the Quasi D-Function for Convex Second-Order Cone Programming

Proximal-Like Algorithm Using the Quasi D-Function for Convex Second-Order Cone Programming J Optim Theory Appl (2008) 138: 95 113 DOI 10.1007/s10957-008-9380-8 Proximal-Lie Algorithm Using the Quasi D-Function for Convex Second-Order Cone Programming S.H. Pan J.S. Chen Published online: 12 April

More information

The speed of Shor s R-algorithm

The speed of Shor s R-algorithm IMA Journal of Numerical Analysis 2008) 28, 711 720 doi:10.1093/imanum/drn008 Advance Access publication on September 12, 2008 The speed of Shor s R-algorithm J. V. BURKE Department of Mathematics, University

More information

Notes on Householder QR Factorization

Notes on Householder QR Factorization Notes on Householder QR Factorization Robert A van de Geijn Department of Computer Science he University of exas at Austin Austin, X 7872 rvdg@csutexasedu September 2, 24 Motivation A fundamental problem

More information

Algorithms for Nonsmooth Optimization

Algorithms for Nonsmooth Optimization Algorithms for Nonsmooth Optimization Frank E. Curtis, Lehigh University presented at Center for Optimization and Statistical Learning, Northwestern University 2 March 2018 Algorithms for Nonsmooth Optimization

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information