A proximal point algorithm for DC functions on Hadamard manifolds


Noname manuscript No. (will be inserted by the editor)

A proximal point algorithm for DC functions on Hadamard manifolds

J.C.O. Souza · P.R. Oliveira

Received: date / Accepted: date

Abstract An extension of a proximal point algorithm for the difference of two convex functions is presented in the context of Riemannian manifolds of nonpositive sectional curvature. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the function (not necessarily convex) under consideration, even if the minimizations are performed inexactly at each iteration. An application to maximization problems with constraints, within the framework of Hadamard manifolds, is presented.

Keywords Nonconvex optimization · proximal point algorithm · DC functions · Hadamard manifolds

Mathematics Subject Classification (2000) 49M30 · 90C26 · 90C48

1 Introduction

It is well known that the class of Proximal Point Algorithms (PPA) is one of the most studied methods for finding zeros of maximal monotone operators and, in particular, is used to solve convex optimization problems. The classical PPA was introduced into the optimization literature by Martinet [1]. It is based on the notion of the proximal mapping J_λ^f,

J_λ^f(x) = argmin_{z ∈ R^n} { f(z) + (1/(2λ)) ‖x − z‖² },   (1)

This research was partially supported by CNPq, Brazil.

J.C.O. Souza
COPPE-Sistemas, Universidade Federal do Rio de Janeiro, Caixa Postal 68511, CEP , Rio de Janeiro, RJ, Brazil and CEAD, Universidade Federal do Piauí, Teresina, PI, Brazil
joaocos.mat@ufpi.edu.br

P.R. Oliveira
COPPE-Sistemas, Universidade Federal do Rio de Janeiro, Caixa Postal 68511, CEP , Rio de Janeiro, RJ, Brazil
poliveir@cos.ufrj.br

introduced earlier by Moreau [2]. The PPA was popularized by Rockafellar [3], who showed that the algorithm converges even if the auxiliary minimizations in (1) are performed inexactly, which is an important consideration in practice. The algorithm is useful, however, only for convex problems, because the idea underlying the results is based on the monotonicity of the subdifferential operators of convex functions. Therefore, PPAs for nonconvex functions have been investigated by many authors (cf. [4], [5] and references therein). In Rockafellar [3], the algorithm, starting with any x^0 ∈ R^n, iteratively updates x^{k+1} according to the recursion

0 ∈ T(x^{k+1}) + (1/c_k)(x^{k+1} − x^k),   (2)

where {c_k} is a sequence of positive scalars and T is a multivalued maximal monotone operator from R^n to itself. On the other hand, the extension to Riemannian manifolds of concepts and techniques that fit in Euclidean spaces is natural and nontrivial. Actually, in recent years, some algorithms for solving minimization problems have been extended from the Hilbert space framework to the more general setting of Riemannian manifolds (see, for example, [6]-[20]). The main advantages of these extensions are that nonconvex problems in the classical sense may become convex, and constrained optimization problems may be seen as unconstrained ones, through the introduction of an appropriate Riemannian metric (see [6]-[10]). Numerical solution of optimization problems defined on Riemannian manifolds arises in a variety of applications, e.g., in computer vision, signal processing, motion and structure estimation, or numerical linear algebra (see for instance [21]-[24]). These extensions also give rise to interesting theoretical questions. Extending (1) and (2) to the context of Riemannian manifolds was the subject of [6] and [11], respectively.
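As a concrete illustration of the proximal mapping (1), not taken from the paper, consider the convex function f(z) = |z| on the real line: its proximal mapping has the closed-form soft-thresholding solution, which a brute-force grid search confirms.

```python
def prox_abs(x, lam):
    """Proximal mapping of f(z) = |z|: argmin_z |z| + (x - z)**2 / (2*lam).
    The minimizer is the soft-thresholding operator."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def prox_numeric(x, lam, lo=-10.0, hi=10.0, n=200001):
    """Brute-force check of the same minimization on a fine grid."""
    best_z = lo
    best_val = abs(lo) + (x - lo) ** 2 / (2.0 * lam)
    for i in range(1, n):
        z = lo + (hi - lo) * i / (n - 1)
        val = abs(z) + (x - z) ** 2 / (2.0 * lam)
        if val < best_val:
            best_z, best_val = z, val
    return best_z

print(prox_abs(3.0, 1.0))      # 2.0
print(prox_numeric(3.0, 1.0))  # close to 2.0
```

The grid search is only a sanity check; in practice one uses the closed form whenever it is available.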
We consider a special class of nonconvex optimization problems of the form

min_{x ∈ M} f(x) = g(x) − h(x),   (3)

where g, h : M → R ∪ {+∞} are proper, convex and lower semicontinuous (lsc) functions and M is a complete Riemannian manifold. The function f is called a DC function (i.e., a difference of two convex functions). Interest in the theory of DC functions has increased considerably in recent years (see for instance [25]-[29] and references therein), but only a few works have proposed specific algorithms or numerical experiments (for example [30]-[31]). Some mathematical reasons for the interest in DC functions can be found in [26]; for instance, the class of DC functions defined on a compact convex set X ⊂ R^n is dense in the set of continuous functions on X, endowed with the topology of uniform convergence over X. Sun et al. [30] proposed a proximal point algorithm for the minimization of DC functions which uses the convex properties of the two convex functions separately. The purpose of this paper is to extend the PPA presented in [30] to the Riemannian manifold framework. Two different inexact versions of our algorithm are also considered. Moreover, an application to constrained optimization problems on Hadamard manifolds is given. To the best of our knowledge, a proximal point algorithm to solve DC optimization problems in the context of Riemannian manifolds has not been established yet. The paper is organized as follows. In Sect. 2, some fundamental definitions, properties and notations of Riemannian manifolds are presented. In Sect. 3, some definitions, notations and properties of convex analysis on Riemannian manifolds

are presented. Convergence analysis of the exact and inexact versions of the algorithm is provided in Sects. 4 and 5, respectively. In Sect. 6, an application to constrained optimization problems on Hadamard manifolds is given.

2 Basic Concepts

In this section, we introduce some fundamental properties and notations of Riemannian manifolds. These basic facts can be found in any introductory book on Riemannian geometry, for example [32], [33]. Let M be a connected m-dimensional C^∞ manifold and let TM = {(x, v) : x ∈ M, v ∈ T_x M} be its tangent bundle, where T_x M is the tangent space of M at x. T_x M is a linear space of the same dimension as M; moreover, because we restrict ourselves to real manifolds, it is isomorphic to R^m. If M is endowed with a Riemannian metric g, then M is a Riemannian manifold, and we denote it by (M, g). The inner product of two vectors u and v in T_x M is written ⟨u, v⟩ := g_x(u, v), where g_x is the metric at the point x. The norm of a vector u ∈ T_x M is defined by ‖u‖ := ⟨u, u⟩^{1/2}. Recall that the metric can be used to define the length of a piecewise smooth curve c : [a, b] → M joining x to y, i.e., such that c(a) = x and c(b) = y, by L(c) = ∫_a^b ‖c′(t)‖ dt. Minimizing this length functional over the set of all such curves, we obtain a Riemannian distance d(x, y), which induces the original topology on M. Let ∇ be the Levi-Civita connection associated with (M, g). A vector field V along c is said to be parallel if ∇_{c′} V = 0. If c′ itself is parallel, we say that c is a geodesic. The geodesic equation ∇_{γ′} γ′ = 0 is a second-order nonlinear ordinary differential equation, so the geodesic γ = γ_v(·, x) is determined by its position x and velocity v at x. It is easy to check that ‖γ′‖ is constant. We say that γ is normalized if ‖γ′‖ = 1. The restriction of a geodesic to a closed bounded interval is called a geodesic segment.
A geodesic segment joining x to y in M is said to be minimal if its length equals d(x, y), and such a geodesic is called a minimizing geodesic. A Riemannian manifold is complete if its geodesics are defined for all values of t. The Hopf-Rinow theorem asserts that if this is the case, then any pair of points x and y in M can be joined by a (not necessarily unique) minimal geodesic segment. Moreover, (M, d) is then a complete metric space, and bounded closed subsets are compact. In this paper, all manifolds are assumed to be complete. Given x ∈ M, the exponential map exp_x : T_x M → M is defined by exp_x(v) = γ_v(1, x). We denote by R the curvature tensor defined by R(X, Y)Z = ∇_X ∇_Y Z − ∇_Y ∇_X Z − ∇_{[Y,X]} Z, where X, Y and Z are vector fields on M and [X, Y] = YX − XY. The sectional curvature with respect to X and Y is then given by K(X, Y) = ⟨R(X, Y)Y, X⟩ / (‖X‖² ‖Y‖² − ⟨X, Y⟩²), where ‖X‖² = ⟨X, X⟩. If K(X, Y) ≤ 0 for all X and Y, then M is called a Riemannian manifold of nonpositive curvature, and we use the short notation K ≤ 0. A complete simply connected Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold. The following result is well known (see, for example, [33], Theorem 4.1, p. 221).

Theorem 1 Let M be a Hadamard manifold and let p ∈ M. Then exp_p : T_p M → M is a diffeomorphism, and for any two points p, q ∈ M there exists a unique normalized geodesic joining p to q, which is, in fact, a minimal geodesic.

This theorem shows that M is diffeomorphic to the Euclidean space R^m. Thus M has the same topology and differential structure as R^m. Moreover, Hadamard manifolds and Euclidean spaces share some geometrical properties. One of the most important of these is described in the following theorem, which is taken from ([33], Proposition 4.5, p. 223) and will be useful in our study. Recall that a geodesic triangle Δ(p_1, p_2, p_3) of a Riemannian manifold is the set consisting of three distinct points p_1, p_2 and p_3, called the vertices, and three minimizing geodesic segments γ_{i+1} joining p_{i+1} to p_{i+2}, called the sides, where i = 1, 2, 3 (mod 3).

Theorem 2 (Comparison theorem for triangles) Let M be a Hadamard manifold and Δ(x_1, x_2, x_3) a geodesic triangle. Denote by γ_{i+1} : [0, l_{i+1}] → M the geodesic segment joining x_{i+1} to x_{i+2}, and set l_{i+1} := L(γ_{i+1}) and θ_{i+1} := ∠(γ′_{i+1}(0), −γ′_i(l_i)), where i = 1, 2, 3 (mod 3). Then

θ_1 + θ_2 + θ_3 ≤ π,   (4)

l²_{i+1} + l²_{i+2} − 2 l_{i+1} l_{i+2} cos θ_{i+2} ≤ l²_i.   (5)

Let γ : [a, b] → M be a normalized geodesic segment. A differentiable variation of γ is, by definition, a differentiable mapping α : [a, b] × (−ε, ε) → M satisfying α(t, 0) = γ(t). The vector field along γ defined by V(t) = (∂α/∂s)(t, 0) is called the variational vector field of α. The first variation formula of arc length for α is given as follows:

L′(γ) := (d/ds) L(c_s)|_{s=0} = ⟨V, γ′⟩|_a^b,   (6)

where c_s(t) = α(t, s) with s ∈ (−ε, ε). The Riemannian distance plays a fundamental role in the next sections, and we now state a result which we will use. Let M be a Hadamard manifold. For any x ∈ M we can define the inverse exponential map exp_x^{-1} : M → T_x M, which is C^∞. Since d(x, y) = ‖exp_x^{-1} y‖, the map ρ_y : M → R defined by ρ_y(x) = (1/2) d²(x, y) is C^∞ and its gradient at x is grad ρ_y(x) = −exp_x^{-1} y (see [33]). Using the properties of parallel transport and the exponential map, we obtain the following proposition, which will be used in the next sections.
Proposition 1 Let M be a Hadamard manifold. Let x^0 ∈ M and {x^k} ⊂ M be such that x^k → x^0. Then the following assertions hold.
1. For any y ∈ M, we have exp_{x^k}^{-1} y → exp_{x^0}^{-1} y and exp_y^{-1} x^k → exp_y^{-1} x^0.
2. If v^k ∈ T_{x^k} M and v^k → v^0, then v^0 ∈ T_{x^0} M.
3. Given u^k, v^k ∈ T_{x^k} M and u^0, v^0 ∈ T_{x^0} M, if u^k → u^0 and v^k → v^0, then ⟨u^k, v^k⟩ → ⟨u^0, v^0⟩.
4. For any u ∈ T_{x^0} M, the function F : M → TM defined by F(x) = P_{x, x^0} u for each x ∈ M is continuous on M.

Proof See [11], Lemma 2.4, p. 666.

3 Convexity on Riemannian Manifolds

In this section, we introduce some definitions and notation of convexity on Riemannian manifolds. We also present some properties of the subdifferential of a convex function; see [34] for more details. A subset C ⊂ M is said to be convex if, for any points p and q in C, the geodesic joining p to q is contained in C; that is, if γ : [a, b] → M is a geodesic such that γ(a) = p and γ(b) = q, then γ((1 − t)a + tb) ∈ C for all t ∈ [0, 1]. Let f : M → R ∪ {+∞} be a proper extended real-valued function. The domain of the function f is denoted by dom(f) and defined by dom(f) = {x ∈ M : f(x) < +∞}. The function f is said to be convex (respectively, strictly convex) if, for any geodesic segment γ : [a, b] → M, the composition f ∘ γ : [a, b] → R is convex (respectively, strictly convex); that is,

(f ∘ γ)(ta + (1 − t)b) ≤ t (f ∘ γ)(a) + (1 − t)(f ∘ γ)(b),

for any a, b ∈ R and 0 ≤ t ≤ 1. The subdifferential of f at x is defined by

∂f(x) = {u ∈ T_x M : ⟨u, exp_x^{-1} y⟩ ≤ f(y) − f(x), ∀ y ∈ M}.   (7)

Then ∂f(x) is a closed convex (possibly empty) set. The proofs of the above assertions and of the following propositions can be found in [6] and [34].

Proposition 2 Let {x^k} ⊂ M be a bounded sequence. If the sequence {v^k} is such that v^k ∈ ∂f(x^k) for each k ∈ N, then {v^k} is also bounded.

Proposition 3 Let M be a Hadamard manifold and let f : M → R be a convex function. Then, for any x ∈ M, there is s ∈ T_x M such that f(y) ≥ f(x) + ⟨s, exp_x^{-1} y⟩ for all y ∈ M. In other words, the subdifferential ∂f(x) of f at x ∈ M is nonempty.

Proposition 4 If a function f : M → R is convex, then for any x ∈ M and λ > 0 there exists a unique point, denoted by p_λ(x), such that

f(p_λ(x)) + (λ/2) d²(p_λ(x), x) = f_λ(x),

characterized by λ exp_{p_λ(x)}^{-1} x ∈ ∂f(p_λ(x)), where f_λ(x) = inf_{y ∈ M} { f(y) + (λ/2) d²(x, y) }.

4 Proximal Point Algorithm

Let M be a Hadamard manifold and let f : M → R be a DC function, i.e., f(x) = g(x) − h(x), where g, h : M → R are proper, convex and lsc functions satisfying dom(g) ⊂ dom(h).
A necessary condition for x* to be a local minimum of f is that 0 ∈ ∂f(x*) ⊂ ∂g(x*) − ∂h(x*). In other words, the subdifferentials ∂g(x*) and ∂h(x*) must overlap: ∂g(x*) ∩ ∂h(x*) ≠ ∅. A similar condition holds true when x* is a local maximum of f. So, we will focus our attention on finding critical points of f. The set of critical points of f is defined by

S = {x ∈ M : ∂h(x) ∩ ∂g(x) ≠ ∅}.

Observe that a necessary and sufficient condition for a point x to be a critical point of a DC function f is that (1/c) exp_x^{-1} y ∈ ∂g(x), where y = exp_x(c w), for some w ∈ ∂h(x) and some real number c > 0. Throughout the remainder of this paper, we always assume that M is a Hadamard manifold, that f : M → R is a DC function bounded from below, with f(x) = g(x) − h(x), and that S ≠ ∅. For finding critical points of a DC function on Hadamard manifolds, i.e., points satisfying the necessary optimality condition, we consider the following algorithm:

Algorithm (DCPPA)
Step 1: Take an initial point x^0 ∈ M and a bounded sequence of positive numbers {c_k} ⊂ [b, c].
Step 2: Compute w^k ∈ ∂h(x^k) and set

y^k := exp_{x^k}(c_k w^k).   (8)

Step 3: Compute

x^{k+1} := argmin_{x ∈ M} { g(x) + (1/(2c_k)) d²(x, y^k) }.   (9)

If x^{k+1} = x^k, stop. Otherwise, set k := k + 1 and return to Step 2.

The well-definedness of the sequences {x^k} and {y^k} follows immediately from Propositions 3 and 4. Note that when h(x) ≡ 0, algorithm DCPPA becomes exactly the algorithm proposed in [6]. If M = R^n, algorithm DCPPA reduces to the algorithm proposed in [30]. Therefore, algorithm DCPPA on Hadamard manifolds is a natural generalization of the proximal point algorithm for DC functions on R^n defined by Sun et al. [30], and it is more general than the proximal point algorithm proposed by Ferreira and Oliveira [6]. We now establish the convergence of the algorithm. We begin by showing that algorithm DCPPA is a descent algorithm.

Theorem 3 The sequence {x^k} generated by algorithm DCPPA satisfies:
1. either the algorithm stops at a critical point;
2. or f decreases strictly, i.e., f(x^{k+1}) < f(x^k), k ≥ 0.

Proof It follows from (8) and (9) that

w^k = (1/c_k) exp_{x^k}^{-1} y^k ∈ ∂h(x^k)   (10)

and

(1/c_k) exp_{x^{k+1}}^{-1} y^k ∈ ∂g(x^{k+1}).   (11)

If x^{k+1} = x^k, the algorithm stops, and this clearly implies that (1/c_k) exp_{x^k}^{-1} y^k ∈ ∂h(x^k) ∩ ∂g(x^k), which means x^k ∈ S. Now, suppose x^{k+1} ≠ x^k. Using (10) and (11) in (7), we obtain that

h(x) ≥ h(x^k) + (1/c_k) ⟨exp_{x^k}^{-1} y^k, exp_{x^k}^{-1} x⟩, ∀ x ∈ M,

and

g(x) ≥ g(x^{k+1}) + (1/c_k) ⟨exp_{x^{k+1}}^{-1} y^k, exp_{x^{k+1}}^{-1} x⟩, ∀ x ∈ M.

Adding the last inequalities, with x = x^{k+1} in the first one and x = x^k in the second one, we have

f(x^k) ≥ f(x^{k+1}) + (1/c_k) [ ⟨exp_{x^k}^{-1} y^k, exp_{x^k}^{-1} x^{k+1}⟩ + ⟨exp_{x^{k+1}}^{-1} y^k, exp_{x^{k+1}}^{-1} x^k⟩ ].   (12)

Now, consider the geodesic triangle Δ(y^k, x^k, x^{k+1}) and set θ = ∠(exp_{x^k}^{-1} y^k, exp_{x^k}^{-1} x^{k+1}). By Theorem 2, we have

d²(y^k, x^k) + d²(x^k, x^{k+1}) − 2 d(y^k, x^k) d(x^k, x^{k+1}) cos θ ≤ d²(y^k, x^{k+1}).

Since ⟨exp_{x^k}^{-1} y^k, exp_{x^k}^{-1} x^{k+1}⟩ = d(y^k, x^k) d(x^k, x^{k+1}) cos θ, it follows that

⟨exp_{x^k}^{-1} y^k, exp_{x^k}^{-1} x^{k+1}⟩ ≥ (1/2) d²(y^k, x^k) + (1/2) d²(x^k, x^{k+1}) − (1/2) d²(y^k, x^{k+1}).

Similarly, considering the geodesic triangle Δ(y^k, x^{k+1}, x^k) and setting θ = ∠(exp_{x^{k+1}}^{-1} y^k, exp_{x^{k+1}}^{-1} x^k), we have

⟨exp_{x^{k+1}}^{-1} y^k, exp_{x^{k+1}}^{-1} x^k⟩ ≥ (1/2) d²(y^k, x^{k+1}) + (1/2) d²(x^k, x^{k+1}) − (1/2) d²(y^k, x^k).

Adding the last two inequalities, we obtain that

⟨exp_{x^k}^{-1} y^k, exp_{x^k}^{-1} x^{k+1}⟩ + ⟨exp_{x^{k+1}}^{-1} y^k, exp_{x^{k+1}}^{-1} x^k⟩ ≥ d²(x^k, x^{k+1}).

Combining the above inequality with (12), we have that

f(x^k) ≥ f(x^{k+1}) + (1/c_k) d²(x^k, x^{k+1}),   (13)

which means that f(x^{k+1}) < f(x^k).

Corollary 1 Let {x^k} be generated by algorithm DCPPA. Then the sequence {f(x^k)} is convergent.

Proof Since f is bounded from below, it follows from the last theorem that the sequence {f(x^k)} is bounded and thus has at least one cluster point. Indeed, suppose {f(x^k)} admits two different cluster points f_1 < f_2, and let {f(x^{k_j})} and {f(x^{k_l})} be two subsequences converging to f_1 and f_2, respectively. Set ε = (f_2 − f_1)/2; then there exist k_{j_0}, k_{l_0} ∈ N such that f(x^{k_j}) < f_1 + ε = f_2 − ε < f(x^{k_l}), for all k_j, k_l ≥ k_0 = max{k_{j_0}, k_{l_0}}. By virtue of item 2 of the last theorem, for any k_l ≥ k_j ≥ k_0 we have f(x^{k_l}) ≤ f(x^{k_j}) < f_1 + ε = f_2 − ε. This is a contradiction, and hence {f(x^k)} has at most one cluster point.
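To make the descent behavior concrete, here is a minimal Euclidean sketch (M = R, not from the paper), where steps (8)-(9) read y^k = x^k + c_k w^k and x^{k+1} = argmin { g(x) + ‖x − y^k‖²/(2c_k) }. The test instance f(x) = x² − 2|x|, with g(x) = x² and h(x) = 2|x|, is an illustrative assumption; its critical points are x = 0 and x = ±1.

```python
def dcppa_1d(x, c=0.5, iters=50):
    """DC proximal point iteration for f(x) = g(x) - h(x) on the real line,
    with g(x) = x**2 and h(x) = 2*abs(x)."""
    for _ in range(iters):
        # Step 2: subgradient of h(x) = 2|x| and the shifted point y^k.
        w = 2.0 if x > 0 else (-2.0 if x < 0 else 0.0)
        y = x + c * w
        # Step 3: prox of g at y; argmin x^2 + (x - y)^2/(2c) = y/(1 + 2c).
        x = y / (1.0 + 2.0 * c)
    return x

print(dcppa_1d(0.5))  # approaches the critical point x = 1
```

Starting from x = 0.5 the error to the critical point x = 1 halves at each step, matching the strict decrease of f guaranteed by Theorem 3.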

Corollary 2 If f is a continuous function and {x^k} is bounded, then lim_{k→∞} f(x^k) = f(x̄) for some cluster point x̄ of {x^k}.

Proof Let {x^{k_j}} be any convergent subsequence, with limit x̄ ∈ M. Since f is continuous, f(x^{k_j}) → f(x̄). Thus, for a given ε > 0, there exists j_0 ∈ N such that for all j ≥ j_0 we have |f(x^{k_j}) − f(x̄)| < ε. Since the algorithm is a descent method, we obtain

f(x^k) − f(x̄) = f(x^k) − f(x^{k_{j_0}}) + f(x^{k_{j_0}}) − f(x̄) ≤ f(x^{k_{j_0}}) − f(x̄) < ε, ∀ k ≥ k_{j_0},

for an arbitrary ε > 0, and the proof is concluded.

Proposition 5 Let {x^k} be generated by algorithm DCPPA. Then Σ_{k=0}^{∞} d²(x^k, x^{k+1}) < ∞. In particular, lim_{k→∞} d(x^k, x^{k+1}) = 0.

Proof From (13), we have that

(1/c_k) d²(x^k, x^{k+1}) ≤ f(x^k) − f(x^{k+1}),

and, therefore,

Σ_{k=0}^{n−1} (1/c_k) d²(x^k, x^{k+1}) ≤ f(x^0) − f(x^n).

Since f is bounded from below and {c_k} is bounded, we obtain Σ_{k=0}^{∞} d²(x^k, x^{k+1}) < ∞, and it follows that lim_{k→∞} d(x^k, x^{k+1}) = 0.

According to Proposition 2, in algorithm DCPPA, if {x^k} is bounded, then {w^k} is also bounded. Since the exponential mapping is a diffeomorphism, we also have that {y^k} is bounded.

Theorem 4 Suppose that {x^k} is bounded. Then every cluster point of {x^k} is a critical point of the function f.

Proof Let x̄ and ȳ be cluster points of {x^k} and {y^k}, respectively, and consider two subsequences {x^{k_j}} and {y^{k_l}} converging respectively to x̄ and ȳ. From the definition of the sequences {x^k}, {y^k} and {c_k} ⊂ [b, c], we have

h(z) ≥ h(x^{k_j}) + (1/c_{k_j}) ⟨exp_{x^{k_j}}^{-1} y^{k_l}, exp_{x^{k_j}}^{-1} z⟩, ∀ z ∈ M,

and

g(z) ≥ g(x^{k_j+1}) + (1/c_{k_j}) ⟨exp_{x^{k_j+1}}^{-1} y^{k_l}, exp_{x^{k_j+1}}^{-1} z⟩, ∀ z ∈ M.

Taking k_j, k_l → ∞ along a further subsequence so that c_{k_j} → c̃ ∈ [b, c], and making use of Proposition 5, we have that (1/c̃) exp_{x̄}^{-1} ȳ ∈ ∂h(x̄) and (1/c̃) exp_{x̄}^{-1} ȳ ∈ ∂g(x̄). In other words, x̄ is a critical point of f.

Corollary 3 Suppose that {x^k} is bounded and S is a singleton, i.e., S = {x*}. Then the entire sequence {x^k} converges to x*.

Proof Suppose, by contradiction, that there exists ε > 0 such that

d(x^k, x*) ≥ ε,   (14)

for all k ≥ k_0. Note that there exists a subsequence {x^{k_j}} ⊂ {x^k} such that x^{k_j} → x̄. By the last theorem, x̄ ∈ S. But S = {x*}, and thus x̄ = x*. Therefore, for all ε > 0 there exists k_0 ∈ N such that d(x^{k_j}, x*) = d(x^{k_j}, x̄) < ε for all k_j ≥ k_0, violating (14). This completes the proof.

Remark 1 If the level sets of f are compact and the subdifferential of h is bounded, then the sequences {x^k} and {y^k} are bounded. If f is strictly convex and coercive, or strongly convex, then S is a singleton.

Remark 2 It is worthwhile to point out that, under the assumptions of Corollary 2, if f satisfies the sharp minima condition (see Polyak [35]), then the whole sequence {x^k} converges to some point x* ∈ S. Weak sharp minima, introduced by Ferris [36], were recently considered in the context of Riemannian manifolds by Li et al. [16], and finite termination of the proximal point algorithm on Hadamard manifolds was studied by Bento and Cruz Neto [37]. We hope that this paper may stimulate further research involving algorithm DCPPA and these concepts.

5 Inexact Versions

Here we consider the approximate version obtained by replacing the exact subdifferential by an approximate one; recall that the functions g and h are assumed to be convex, proper and lower semicontinuous. We define ∂_0 h(x) = ∂h(x) and ∂_0 g(x) = ∂g(x), for any x ∈ M. Furthermore, directly from the definition it follows that, if 0 ≤ ε_1 ≤ ε_2, then ∂_{ε_1} h(x) ⊂ ∂_{ε_2} h(x) and ∂_{ε_1} g(x) ⊂ ∂_{ε_2} g(x). We recall that a vector w ∈ T_x M is called an ε-subgradient (with ε ≥ 0) of f at x ∈ dom(f), denoted by w ∈ ∂_ε f(x), if

f(y) ≥ f(x) + ⟨w, exp_x^{-1} y⟩ − ε, ∀ y ∈ M.

Thus ∂_ε h(x) and ∂_ε g(x) are enlargements of ∂h(x) and ∂g(x), respectively.
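The ε-subgradient inequality can be checked numerically in the Euclidean case. For the illustrative function f(x) = x² (an assumption, not from the paper), the ε-subdifferential at x is the interval [2x − 2√ε, 2x + 2√ε], since f(y) − f(x) − (2x ± 2√ε)(y − x) + ε = (y − x ∓ √ε)² ≥ 0.

```python
import math

def is_eps_subgradient(w, x, eps, f, ys):
    """Check f(y) >= f(x) + w*(y - x) - eps on a sample of points y
    (a small numerical tolerance absorbs rounding)."""
    return all(f(y) >= f(x) + w * (y - x) - eps - 1e-12 for y in ys)

f = lambda t: t * t
x, eps = 1.0, 0.25
ys = [x + 0.01 * i for i in range(-1000, 1001)]

# Boundary element of the eps-subdifferential: satisfies the inequality.
print(is_eps_subgradient(2 * x + 2 * math.sqrt(eps), x, eps, f, ys))    # True
# Slightly outside the interval: the inequality fails somewhere.
print(is_eps_subgradient(2 * x + 2.1 * math.sqrt(eps), x, eps, f, ys))  # False
```

Setting eps = 0 recovers the ordinary subdifferential {2x}, matching the remark that ∂_0 f(x) = ∂f(x).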
The use of elements of ∂_ε h(x) and ∂_ε g(x) instead of ∂h(x) and ∂g(x) allows an extra degree of freedom which is very useful in various applications. Setting ε = 0, one retrieves the exact subdifferential. For this reason we consider the following inexact version of algorithm DCPPA:

Algorithm (IDCPPA-1)
Step 1: Take an initial point x^0 ∈ M, a bounded sequence of positive numbers

10 10 J.C.O. Souza, P.R. Oliveira { } [b, c] and ɛ k 0. Step 2: Compute Step 3: Compute w k ɛk h( k ) and set y k := ep k( w k ). (15) k+1 : arg min M {g() d 2 (, y k )} 1 ep 1 k+1 y k ɛk g( k+1 ) (16) If k+1 = k, stop. Otherwise, k := k + 1 and return to Step 2. Theorem 5 Let { k } be a sequence generated by Algorithm IDCPPA-1. Suppose that { k } is bounded and + ɛ k <. Then the sequence {f( k )} is convergent and every cluster-point of { k } is critical point of the function f. Proof Similar to Theorem 3, we have that Then, f( k ) f( k+1 ) + 1 d 2 ( k, k+1 ) 2ɛ k. n 1 1 n 1 d 2 ( k, k+1 ) f( 0 ) f( n ) + 2 ɛ k. c Since f is bounded from below, the inequality above clearly implies that d 2 ( k, k+1 ) <, thanks to the summable assumption of {ɛ k }. Thus, lim k + d(k, k+1 ) = 0. Now, let and y be cluster points of { k } and {y k }, respectively. So, consider two subsequences k j and y k l converging respectively to and y, i.e., k j and y k l y. From definition of Algorithm IDCPPA-1, we have and h(z) h( k j ) + 1 c ep 1 k j yk l, ep 1 k j z ɛ k j, z M Since, that 1 c ep 1 g(z) g( k j+1 ) + 1 c ep 1 k j +1 y k l, ep 1 k j +1 z ɛ kl, z M. lim ɛ k = 0 and k lim k + d(k, k+1 ) = 0, taking k j, k l, we have y h() and 1 c ep 1 y g(). The proof is complete. As remarked in Rockafellar [3], for a proimal point method to be practical, it is also important that it should work with approimate solutions of the subproblems. To the best of our knowledge, approimate solutions of proimal points has not been found to be eplored in DC functions setting. For this reason, we provide an ineact proimal point algorithm for DC functions with a relative error tolerance. Algorithm (IDCPPA-2) Step 1: Given an initial point 0 M and a bounded sequence of positive numbers

{c_k} ⊂ [b, c].
Step 2: Compute w^k ∈ ∂h(x^k) and set

y^k := exp_{x^k}(c_k w^k).   (17)

Step 3: Compute x^{k+1} such that

e^{k+1} ∈ ∂g(x^{k+1}) − (1/c_k) exp_{x^{k+1}}^{-1} y^k,   (18)

where

‖e^{k+1}‖ ≤ η d(x^{k+1}, x^k),  η ∈ [0, 1).   (19)

If x^{k+1} = x^k, stop. Otherwise, set k := k + 1 and return to Step 2.

When x^{k+1} = x^k or η = 0, (19) obviously implies that e^{k+1} = 0, and algorithm IDCPPA-2 reduces to algorithm DCPPA.

Theorem 6 Let {x^k} be a sequence generated by algorithm IDCPPA-2. Suppose that {x^k} is bounded. Then the sequence {f(x^k)} is convergent and every cluster point of {x^k} is a critical point of the function f.

Proof Similarly to Theorem 3, we have

f(x^k) ≥ f(x^{k+1}) + (1/c_k) d²(x^k, x^{k+1}) + ⟨e^{k+1}, exp_{x^{k+1}}^{-1} x^k⟩,

which by (19) implies that

f(x^k) ≥ f(x^{k+1}) + ((1 − η c_k)/c_k) d²(x^k, x^{k+1}) > f(x^{k+1}),

if x^{k+1} ≠ x^k; otherwise, the algorithm stops. Since f is bounded from below, we have that the sequence {f(x^k)} is convergent. Furthermore,

((1 − η c)/c) Σ_{k=0}^{n−1} d²(x^k, x^{k+1}) ≤ f(x^0) − f(x^n).

The inequality above obviously implies that Σ_{k=0}^{∞} d²(x^k, x^{k+1}) < ∞. Thus lim_{k→∞} d(x^k, x^{k+1}) = 0. Now, let x̄ and ȳ be cluster points of {x^k} and {y^k}, respectively, and consider two subsequences {x^{k_j}} and {y^{k_j}} converging respectively to x̄ and ȳ (here we use the same notation for the index even if further subsequences need to be extracted). From the definition of algorithm IDCPPA-2, we have

h(z) ≥ h(x^{k_j}) + (1/c_{k_j}) ⟨exp_{x^{k_j}}^{-1} y^{k_j}, exp_{x^{k_j}}^{-1} z⟩, ∀ z ∈ M,

and

g(z) ≥ g(x^{k_j+1}) + (1/c_{k_j}) ⟨exp_{x^{k_j+1}}^{-1} y^{k_j}, exp_{x^{k_j+1}}^{-1} z⟩ + ⟨e^{k_j+1}, exp_{x^{k_j+1}}^{-1} z⟩, ∀ z ∈ M.

By passing to the limit in the above relations, since lim_{k→∞} e^k = 0 and lim_{k→∞} d(x^k, x^{k+1}) = 0, and taking into account the fact that the functions g and h are lsc and {c_k} is bounded, we have that ∂h(x̄) ∩ ∂g(x̄) ≠ ∅; in other words, x̄ is a critical point of f.

6 Example and application

In this section we present an example of a nonconvex minimization problem where the objective function is defined on the Poincaré half-plane (a Hadamard manifold with curvature identically equal to −1). In this example, the proximal point algorithm proposed by Ferreira and Oliveira [6] does not apply; however, the method proposed in this article does. Also, an application to constrained maximization problems on Hadamard manifolds is given.

6.1 Example

Consider the Poincaré upper half-plane H = {(u, v) ∈ R² : v > 0} endowed with the Riemannian metric defined for every (u, v) ∈ H by g_{ij}(u, v) = (1/v²) δ_{ij}, for i, j = 1, 2. The pair (H, g) is a Hadamard manifold with constant sectional curvature −1, and the geodesics in H are the vertical semi-lines and the semicircles orthogonal to the line v = 0 (see [34], page 20), with the following natural parameterizations:

γ_a : u = a, v = e^s, s ∈ (−∞, +∞);
γ_{b,r} : u = b − r tanh s, v = r/cosh s, s ∈ (−∞, +∞).

The geodesic passing at moment s = s_0 through the point p = (x, y) tangent to the vector w = (u, v) ∈ T_p H is

γ(s) = (x, y e^{s − s_0}), for u = 0, v = y;
γ(s) = ( x + yv/u − (y‖w‖/u) tanh s, (y‖w‖/u)(1/cosh s) ), for u ≠ 0,

where s ∈ [s_0, +∞) and ‖w‖ = √(u² + v²). Consider the geodesic passing at moment t = 0 through the point p = (x, y) tangent to the vector w = (u, v) ∈ T_p H. Hence, the exponential map is given by

exp_p w = γ(1) = (x, y e), for u = 0, v = y;
exp_p w = γ(1) = ( x + yv/u − (y‖w‖/u) tanh(1 + s_0), (y‖w‖/u)(1/cosh(1 + s_0)) ), for u ≠ 0.

The Riemannian distance between two points (u_1, v_1), (u_2, v_2) ∈ H is given by

d((u_1, v_1), (u_2, v_2)) = arccosh( 1 + ((u_2 − u_1)² + (v_2 − v_1)²) / (2 v_1 v_2) ).

Let f : H → R be the function given by f(x, y) = x⁴ + y⁴ − y. Note that f is bounded from below and f is not a convex function on H, while g(x, y) = x⁴ + y⁴ and h(x, y) = y are convex functions on H. Clearly, the set of critical points of f is nonempty. Therefore, f satisfies the assumptions made above, and algorithm DCPPA can be applied.
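The distance formula above is easy to verify numerically. For instance, the points (0, 1) and (0, e) lie on the vertical geodesic u = 0, v = e^s, so their distance should be exactly 1:

```python
import math

def poincare_dist(p, q):
    """Riemannian distance on the upper half-plane H = {(u, v) : v > 0}."""
    (u1, v1), (u2, v2) = p, q
    return math.acosh(1.0 + ((u2 - u1) ** 2 + (v2 - v1) ** 2) / (2.0 * v1 * v2))

print(poincare_dist((0.0, 1.0), (0.0, math.e)))  # approximately 1.0
```

Indeed, 1 + (e − 1)²/(2e) = (e² + 1)/(2e) = cosh(1), so arccosh returns 1.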

6.2 Application to constrained maximization problems

We consider the problem of maximizing a convex lower semicontinuous function h on a closed convex set C ⊂ M, namely

max_{x ∈ C} h(x).   (20)

This problem can be rewritten as a DC problem, and (20) is equivalent to the following problem:

min_{x ∈ M} { δ_C(x) − h(x) },   (21)

where δ_C is the indicator function defined by δ_C(x) = 0 if x ∈ C and δ_C(x) = +∞ otherwise. Let N_C(x) denote the normal cone of the set C at a point x ∈ C:

N_C(x) := {u ∈ T_x M : ⟨u, exp_x^{-1} y⟩ ≤ 0, ∀ y ∈ C}.

Then ∂δ_C(x) = N_C(x), x ∈ C. In this context, algorithm DCPPA takes the following form: compute w^k ∈ ∂h(x^k) and set y^k = exp_{x^k}(c_k w^k); define x^{k+1} ∈ M as the solution of the following variational inequality problem:

⟨exp_{x^{k+1}}^{-1} y^k, exp_{x^{k+1}}^{-1} y⟩ ≤ 0, ∀ y ∈ C.

Existence and uniqueness theorems for variational inequalities on Hadamard manifolds can be found, for instance, in [11], [13].

Acknowledgements The authors wish to express their gratitude to the anonymous referee for his helpful comments.

References

1. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française d'Inform. Recherche Opér. 4 (1970)
2. Moreau, J.J.: Proximité et dualité dans un espace Hilbertien. Bull. Soc. Math. France 93 (1965)
3. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14 (1976)
4. Bento, G.C., Ferreira, O.P., Oliveira, P.R.: Local convergence of the proximal point method for a special class of nonconvex functions on Hadamard manifolds. Nonlinear Anal. 73 (2010)
5. Papa Quiroz, E.A., Oliveira, P.R.: Proximal point method for minimizing quasiconvex locally Lipschitz functions on Hadamard manifolds. Nonlinear Anal. 75 (2012)
6. Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization (2002)
7.
da Cruz Neto, J.X., Ferreira, O.P., Lucambio Pérez, L.R., Németh, S.Z.: Convex- and monotone-transformable mathematical programming problems and a proximal-like point algorithm. J. Glob. Optim. 35 (2006)
8. Ferreira, O.P., Oliveira, P.R.: Subgradient algorithm on Riemannian manifolds. J. Optim. Theory Appl. 97 (1998)
9. da Cruz Neto, J.X., de Lima, L.L., Oliveira, P.R.: Geodesic algorithms in Riemannian geometry. Balk. J. Geom. Appl. 3 (1998)

10. Kristály, A.: Nash-type equilibria on Riemannian manifolds: a variational approach. J. Math. Pures Appl. 101 (2014)
11. Li, C., López, G., Martín-Márquez, V.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79 (2009)
12. Li, S.L., Li, C., Liou, Y.C., Yao, J.C.: Existence of solutions for variational inequalities on Riemannian manifolds. Nonlinear Anal. 71 (2009)
13. Németh, S.Z.: Variational inequalities on Hadamard manifolds. Nonlinear Anal. 52 (2003)
14. Wang, J.H., López, G., Martín-Márquez, V., Li, C.: Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 146 (2010)
15. Li, C., Wang, J.H.: Newton's method for sections on Riemannian manifolds: generalized covariant α-theory. J. Complex. 24 (2008)
16. Li, C., Mordukhovich, B.S., Wang, J.H., Yao, J.C.: Weak sharp minima on Riemannian manifolds. SIAM J. Optim. 21(4) (2011)
17. Bento, G.C., Ferreira, O.P., Oliveira, P.R.: Unconstrained steepest descent method for multicriteria optimization on Riemannian manifolds. J. Optim. Theory Appl. 154 (2012)
18. Li, C., Yao, J.C.: Variational inequalities for set-valued vector fields on Riemannian manifolds: convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 50(4) (2012)
19. Li, C., López, G., Wang, J.H., Yao, J.C.: Convergence analysis of inexact proximal point algorithms on Hadamard manifolds. J. Glob. Optim. (2014)
20. Huang, N., Tang, G.: An inexact proximal point algorithm for maximal monotone vector fields on Hadamard manifolds. Operations Research Letters 41 (2013)
21. Absil, P.A., Baker, C.G.: Trust-region methods on Riemannian manifolds. Found. Comput. Math. 7 (2007)
22. Adler, R.L., Dedieu, J.P., Margulies, J.Y., Martens, M., Shub, M.: Newton's method on Riemannian manifolds and a geometric model for the human spine. IMA J. Numer. Anal. 22 (2002)
23.
Lee, P.Y.: Geometric Optimization for Computer Vision. PhD thesis, Australian National University (2005)
24. Riddell, R.C.: Minimax problems on Grassmann manifolds. Sums of eigenvalues. Advances in Mathematics 54 (1984)
25. Hiriart-Urruty, J.B.: From convex optimization to nonconvex optimization: necessary and sufficient conditions for global optimization. In: Nonsmooth Optimization and Related Topics. Springer, US (1989)
26. Hiriart-Urruty, J.B.: Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In: Convexity and Duality in Optimization. Springer, Berlin Heidelberg (1985)
27. Hiriart-Urruty, J.B., Tuy, H.: Essays on nonconvex optimization. Mathematical Programming 41. North-Holland (1988)
28. Elhilali Alaoui, A.: Caractérisation des fonctions D.C. (Characterization of D.C. functions). Ann. Sci. Math. Qué. 20(1), 1-13 (1996)
29. Toland, J.F.: Duality in nonconvex optimization. Journal of Mathematical Analysis and Applications 66 (1978)
30. Sun, W., Sampaio, R.J.B., Candido, M.A.B.: Proximal point algorithm for minimization of DC functions. Journal of Computational Mathematics 21 (2003)
31. Moudafi, A., Maingé, P.-E.: On the convergence of an approximate proximal method for d.c. functions. Journal of Computational Mathematics 24 (2006)
32. do Carmo, M.P.: Riemannian Geometry. Birkhäuser, Boston (1992)
33. Sakai, T.: Riemannian Geometry. Translations of Mathematical Monographs 149, American Mathematical Society, Providence (1996)
34. Udriste, C.: Convex Functions and Optimization Algorithms on Riemannian Manifolds. Mathematics and Its Applications 297, Kluwer Academic, Dordrecht (1994)
35. Polyak, B.T.: Sharp Minima. Institute of Control Sciences Lecture Notes, Moscow, USSR (1979); presented at the IIASA Workshop on Generalized Lagrangians and Their Applications, IIASA, Laxenburg, Austria (1979)
36. Ferris, M.C.: Weak Sharp Minima and Penalty Functions in Mathematical Programming. Ph.D.
Thesis, University of Cambridge, UK (1988) 37. Bento G.C., Cruz Neto, J.X., Finite Termination of the Proimal Point Method for Conve Functions on Hadamard Manifolds, Optimization, 63, (2014)


AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F.

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F. AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M. V. Solodov and B. F. Svaiter May 14, 1998 (Revised July 8, 1999) ABSTRACT We present a

More information

Riemannian geometry of surfaces

Riemannian geometry of surfaces Riemannian geometry of surfaces In this note, we will learn how to make sense of the concepts of differential geometry on a surface M, which is not necessarily situated in R 3. This intrinsic approach

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Self-contracted curves in Riemannian manifolds

Self-contracted curves in Riemannian manifolds Self-contracted curves in Riemannian manifolds A. Daniilidis, R. Deville, E. Durand-Cartagena, L. Rifford Abstract It is established that every self-contracted curve in a Riemannian manifold has finite

More information

Global Maximum of a Convex Function: Necessary and Sufficient Conditions

Global Maximum of a Convex Function: Necessary and Sufficient Conditions Journal of Convex Analysis Volume 13 2006), No. 3+4, 687 694 Global Maximum of a Convex Function: Necessary and Sufficient Conditions Emil Ernst Laboratoire de Modélisation en Mécaniue et Thermodynamiue,

More information

FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR

FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR B. F. SVAITER

More information

Kantorovich s Majorants Principle for Newton s Method

Kantorovich s Majorants Principle for Newton s Method Kantorovich s Majorants Principle for Newton s Method O. P. Ferreira B. F. Svaiter January 17, 2006 Abstract We prove Kantorovich s theorem on Newton s method using a convergence analysis which makes clear,

More information

Received May 20, 2009; accepted October 21, 2009

Received May 20, 2009; accepted October 21, 2009 MATHEMATICAL COMMUNICATIONS 43 Math. Commun., Vol. 4, No., pp. 43-44 9) Geodesics and geodesic spheres in SL, R) geometry Blaženka Divjak,, Zlatko Erjavec, Barnabás Szabolcs and Brigitta Szilágyi Faculty

More information