Proximal Point Method for a Class of Bregman Distances on Riemannian Manifolds


E. A. Papa Quiroz and P. Roberto Oliveira
PESC-COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
erik@cos.ufrj.br, poliveir@cos.ufrj.br

September 2005

Abstract. This paper generalizes the proximal point method using a class of Bregman distances to solve convex and quasiconvex optimization problems on complete Riemannian manifolds. We prove, under standard assumptions, that the sequence generated by our method is well defined and converges to an optimal solution of the problem. We also give some examples of Bregman distances in non-Euclidean spaces.

Keywords: Proximal point algorithms; Riemannian manifolds; Bregman distances; diagonal Riemannian metrics.

1 Introduction

Consider the problem
$$\min f(x), \quad x \in X,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function on a closed convex set $X$ of $\mathbb{R}^n$. The proximal point algorithm with Bregman distance, henceforth abbreviated PBD algorithm, generates a sequence $\{x^k\}$ defined by: given $x^0 \in S$,
$$x^k = \arg\min_{x \in X \cap \bar S} \{f(x) + \lambda_k D_h(x, x^{k-1})\}, \qquad (1.1)$$
where $h$ is a Bregman function with zone $S$ such that $X \cap \bar S \neq \emptyset$, $\lambda_k$ is a positive parameter, and $D_h$ is the Bregman distance defined as
$$D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle,$$
where $\langle \cdot, \cdot \rangle$ denotes the usual inner product on $\mathbb{R}^n$. Convergence and rate-of-convergence results, under appropriate assumptions on the problem, have been proved by several authors for certain choices of the regularization parameters $\lambda_k$ (see, for example, [2, 4, 14, 15]). The algorithm has also been generalized to variational inequality problems in Hilbert and Banach spaces; see [1, 13]. Variational inequality problems arise naturally in several engineering applications and include optimization problems as a particular case.

On the other hand, the generalization of known optimization methods from Euclidean space to Riemannian manifolds is in a certain sense natural, and advantageous in some cases. For example, we can exploit the intrinsic geometry of the manifold, and constrained problems can be seen as unconstrained ones. Moreover, certain nonconvex optimization problems become convex through the introduction of an adequate Riemannian metric on the manifold, so that more efficient optimization techniques can be used; see [8, 7, 11, 16, 20, 23, 24] and the references therein. Another advantage of this approach is that Riemannian metrics can be used to introduce new algorithms in interior point methods (see, for example, [21, 5, 9, 19]).

In this paper we generalize the PBD algorithm to solve optimization problems on complete Riemannian manifolds. Our approach is new and is related to the work of Ferreira and Oliveira [12], where the authors generalized the proximal point method using the intrinsic Riemannian distance on Hadamard manifolds. Here, we consider the following regularization parameters:
$$\lim_{k \to +\infty} \lambda_k = 0, \quad \text{with } \lambda_k > 0, \qquad (1.2)$$
for the quasiconvex case, and
$$0 < \lambda_k < \bar\lambda, \qquad (1.3)$$
for the convex one. For $\lambda_k$ satisfying (1.2), we obtain convergence of the PBD algorithm for quasiconvex functions with no assumptions on the Bregman function. For $\lambda_k$ satisfying (1.3), we obtain convergence of the PBD algorithm using a certain class of Bregman functions. That class satisfies either a Lipschitz condition on the gradient or a relation between the gradient and the geodesic triangles of the manifold. In particular, the second condition is satisfied by any Bregman function in $\mathbb{R}^n$.
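To make iteration (1.1) concrete before passing to manifolds, here is a minimal numerical sketch of the classical PBD algorithm on $\mathbb{R}^n_{++}$, assuming the Bregman function $h(x) = \sum_i x_i \ln x_i$, whose distance $D_h$ is the Kullback-Leibler divergence. The objective, starting point and parameters are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the classical PBD iteration (1.1) on IR^n_++, assuming
# h(x) = sum_i x_i*log(x_i); its Bregman distance D_h is the
# Kullback-Leibler divergence. Objective, x^0 and lambda_k are illustrative.
import numpy as np
from scipy.optimize import minimize

def D_h(x, y):
    # D_h(x, y) = h(x) - h(y) - <grad h(y), x - y> = sum x*log(x/y) - x + y
    return np.sum(x * np.log(x / y) - x + y)

f = lambda x: np.sum((x - 2.0) ** 2)      # smooth convex model objective

x = np.array([0.5, 3.0])                  # x^0 in the zone S = IR^2_++
for k in range(50):
    lam, xk = 1.0, x.copy()               # lambda_k bounded, as in (1.3)
    x = minimize(lambda z: f(z) + lam * D_h(z, xk), xk,
                 bounds=[(1e-10, None)] * 2).x
print(x)                                  # approaches the minimizer (2, 2)
```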

The paper is organized as follows. In Section 2 we give the notation of Riemannian geometry used throughout the paper. In Section 3 we recall some facts of convex analysis on Riemannian manifolds. In Section 4 the definition of Bregman function is introduced, together with some of its properties. Section 5 presents the Moreau-Yosida regularization with Bregman distances. In Section 6 we introduce the PBD algorithm to solve minimization problems on Riemannian manifolds and prove the convergence of the sequence generated by the algorithm. In Section 7 we present some explicit examples of Bregman distances in non-Euclidean spaces, and in the following section we give our conclusions and future work. An appendix with some new geodesic curves for diagonal metrics is also presented.

2 Some Tools of Riemannian Geometry

In this section we recall some fundamental properties and notation of Riemannian manifolds. These basic facts can be found, for example, in [10].

Let $M$ be a differentiable manifold. We denote by $T_xM$ the tangent space of $M$ at $x$ and $TM = \bigcup_{x \in M} T_xM$. $T_xM$ is a linear space of the same dimension as $M$. Because we restrict ourselves to real manifolds, $T_xM$ is isomorphic to $\mathbb{R}^n$. If $M$ is endowed with a Riemannian metric $g$, then $M$ is a Riemannian manifold, and we denote it by $(M, G)$, or simply by $M$ when no confusion can arise, where $G$ denotes the matrix representation of the metric $g$. The inner product of two vectors $u, v \in T_xM$ is written $\langle u, v\rangle_x := g_x(u, v)$, where $g_x$ is the metric at the point $x$. The norm of a vector $v \in T_xM$ is defined by $\|v\|_x := \langle v, v\rangle_x^{1/2}$. The metric can be used to define the length of a piecewise smooth curve $\alpha : [t_0, t_1] \to M$ joining $p = \alpha(t_0)$ to $q = \alpha(t_1)$ through
$$L(\alpha) = \int_{t_0}^{t_1} \|\alpha'(t)\|\, dt.$$
Minimizing this length functional over the set of all such curves, we obtain a Riemannian distance $d(p, q)$, which induces the original topology on $M$.

Given two vector fields $V$ and $W$ on $M$ (a vector field $V$ is a map from $M$ to $TM$), the covariant derivative of $W$ in the direction $V$ is denoted by $\nabla_V W$. In this paper $\nabla$ is the Levi-Civita connection associated with $(M, G)$. This connection defines a unique covariant derivative $D/dt$ by which, for each vector field $V$ along a smooth curve $\alpha : [t_0, t_1] \to M$, another vector field along $\alpha$, denoted $DV/dt$, is obtained. The parallel transport along $\alpha$ from $\alpha(t_0)$ to $\alpha(t_1)$, denoted by $P_{\alpha,t_0,t_1}$, is the map $P_{\alpha,t_0,t_1} : T_{\alpha(t_0)}M \to T_{\alpha(t_1)}M$ defined by $P_{\alpha,t_0,t_1}(v) = V(t_1)$, where $V$ is the unique vector field along $\alpha$ such that $DV/dt = 0$ and $V(t_0) = v$. Since $\nabla$ is a Riemannian connection, $P_{\alpha,t_0,t_1}$ is a linear isometry; furthermore, $P_{\alpha,t_0,t_1}^{-1} = P_{\alpha,t_1,t_0}$ and $P_{\alpha,t_0,t_1} = P_{\alpha,t,t_1} \circ P_{\alpha,t_0,t}$ for all $t \in [t_0, t_1]$.

A curve $\gamma : [0,1] \to M$ starting from $x$ in the direction $v \in T_xM$ is called a geodesic if
$$\frac{d^2\gamma_k}{dt^2} + \sum_{i,j=1}^n \Gamma^k_{ij}\,\frac{d\gamma_i}{dt}\frac{d\gamma_j}{dt} = 0, \quad k = 1, \dots, n, \qquad (2.4)$$
with $\gamma(0) = x$ and $\gamma'(0) = v$, where the $\gamma_i$ are the coordinates of $\gamma$ and the $\Gamma^k_{ij}$ are the Christoffel symbols, expressed by
$$\Gamma^m_{ij} = \frac{1}{2}\sum_{k=1}^n \left\{\frac{\partial}{\partial x_i} g_{jk} + \frac{\partial}{\partial x_j} g_{ki} - \frac{\partial}{\partial x_k} g_{ij}\right\} g^{km}, \qquad (2.5)$$
where $(g^{ij})$ denotes the inverse of the metric matrix $G = (g_{ij})$ and the $x_i$ are the coordinates of $x$. A Riemannian manifold is complete if its geodesics are defined for every value of $t \in \mathbb{R}$. Let $x \in M$; the exponential map $\exp_x : T_xM \to M$ is defined by $\exp_x(v) = \gamma(1)$. If $M$ is complete, then $\exp_x$ is defined for all $v \in T_xM$. Moreover, between any two points there exists a minimal geodesic (its length is equal to the distance between the extremes).
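As an illustration of (2.5), the following sympy sketch computes the single Christoffel symbol of the one-dimensional metric $g(x) = x^{-2}$ on $\mathbb{R}_{++}$ (the $n = 1$ case of the Dikin metric used later in Example 7.3). The setup is ours, not the paper's; the expected result is $\Gamma^1_{11} = -1/x$.

```python
# Sketch: computing the Christoffel symbols (2.5) symbolically with sympy
# for the one-dimensional Dikin metric g(x) = x^{-2} on IR_++.
import sympy as sp

x = sp.symbols('x', positive=True)
g = sp.Matrix([[x ** -2]])            # metric matrix G
g_inv = g.inv()
coords = [x]

# (2.5): Gamma^m_ij = (1/2) sum_k (d_i g_jk + d_j g_ki - d_k g_ij) g^{km}
def christoffel(m, i, j):
    return sp.simplify(sp.Rational(1, 2) * sum(
        (sp.diff(g[j, k], coords[i]) + sp.diff(g[k, i], coords[j])
         - sp.diff(g[i, j], coords[k])) * g_inv[k, m]
        for k in range(len(coords))))

print(christoffel(0, 0, 0))           # -1/x
```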

Given vector fields $X, Y, Z$ on $M$, we denote by $R$ the curvature tensor, defined by
$$R(X, Y)Z = \nabla_Y\nabla_X Z - \nabla_X\nabla_Y Z + \nabla_{[X,Y]} Z,$$
where $[X, Y] := XY - YX$ is the Lie bracket. The sectional curvature with respect to $X$ and $Y$ is then defined by
$$K(X, Y) = \frac{\langle R(X,Y)Y, X\rangle}{\|X\|^2\|Y\|^2 - \langle X, Y\rangle^2}.$$
The gradient of a differentiable function $f : M \to \mathbb{R}$, denoted $\operatorname{grad} f$, is the vector field on $M$ defined through $df(X) = \langle\operatorname{grad} f, X\rangle = X(f)$, where $X$ is any vector field on $M$.

3 Convex Analysis on Riemannian Manifolds

In this section we give some definitions and results of convex analysis on Riemannian manifolds. We refer the reader to [24] for more details and for the proofs of the theorems.

Definition 3.1 Let $M$ be a Riemannian manifold. A subset $A$ is said to be totally convex in $M$ if, for any pair of its points, all geodesics joining these points are contained in $A$; that is, given $x, y \in A$ and a geodesic $\gamma : [0,1] \to M$ such that $\gamma(0) = x$ and $\gamma(1) = y$, we have $\gamma(t) \in A$ for all $t \in [0,1]$.

Definition 3.2 Let $A$ be a totally convex set in a complete Riemannian manifold $M$ and $f : A \to \mathbb{R}$ a real-valued function. $f$ is called quasiconvex on $A$ if, for all $x, y \in A$ and $t \in [0,1]$, it holds that $f(\gamma(t)) \le \max\{f(x), f(y)\}$ for any geodesic $\gamma : [0,1] \to M$ such that $\gamma(0) = x$ and $\gamma(1) = y$.

Theorem 3.1 Let $A$ be a totally convex set in a complete Riemannian manifold $M$. The function $f : A \to \mathbb{R}$ is quasiconvex if and only if the set $\{x \in A : f(x) \le c\}$ is totally convex for each $c \in \mathbb{R}$.

Definition 3.3 Let $A$ be a totally convex set in a complete Riemannian manifold $M$ and $f : A \to \mathbb{R}$ a real-valued function. $f$ is called convex on $A$ if, for all $x, y \in A$ and $t \in [0,1]$, it holds that $f(\gamma(t)) \le t f(y) + (1-t) f(x)$ for any geodesic $\gamma : [0,1] \to M$ such that $\gamma(0) = x$ and $\gamma(1) = y$. When the preceding inequality is strict for $x \neq y$ and $t \in (0,1)$, the function $f$ is said to be strictly convex.

Theorem 3.2 Let $M$ be a complete Riemannian manifold and $A$ a totally convex set in $M$. The function $f : A \to \mathbb{R}$ is convex if and only if, for all $x, y \in A$ and every geodesic $\gamma : [0,1] \to M$ joining $x$ to $y$, the function $f(\gamma(t))$ is convex on $[0,1]$.

Proof. See [24].

Let $M$ be a complete Riemannian manifold and let $f : M \to \mathbb{R}$ be a convex function. Take $y \in M$; the vector $s \in T_yM$ is said to be a subgradient of $f$ at $y$ if, for any $x \in M$ and any geodesic $\gamma$ of $M$ with $\gamma(0) = y$ and $\gamma(1) = x$,
$$f(\gamma(t)) \ge f(y) + t\,\langle s, \gamma'(0)\rangle_y, \qquad t \in [0,1]. \qquad (3.6)$$
The set of all subgradients of $f$ at $y$ is called the subdifferential of $f$ at $y$ and is denoted by $\partial f(y)$.

Theorem 3.3 Let $M$ be a complete Riemannian manifold and let $f : M \to \mathbb{R}$ be a convex function. Then, for any $y \in M$, there exists $s \in T_yM$ such that (3.6) holds for all $x \in M$.

From the previous theorem, the subdifferential $\partial f(x)$ of a convex function $f$ at $x \in M$ is nonempty.

Theorem 3.4 Let $M$ be a complete Riemannian manifold and $f : M \to \mathbb{R}$ a convex function. Then $0 \in \partial f(x)$ if and only if $x$ is a minimum point of $f$ in $M$.

4 Bregman Distances and Functions on Riemannian Manifolds

To construct generalized proximal point algorithms with Bregman distances for solving optimization problems on complete Riemannian manifolds, it is necessary to extend the definitions of Bregman distance and Bregman function to that framework. Starting from the definition of Censor and Lent [3], we propose the following.

Let $M$ be a complete Riemannian manifold and $S$ a nonempty open totally convex subset of $M$ with topological closure $\bar S$. Let $h : M \to \mathbb{R}$ be strictly convex on $\bar S$ and differentiable on $S$. From now on, the exponential map is associated to the minimal geodesic. Then the Bregman distance associated to $h$, denoted by $D_h$, is defined as the function $D_h(\cdot,\cdot) : \bar S \times S \to \mathbb{R}$ such that
$$D_h(x, y) := h(x) - h(y) - \langle \operatorname{grad} h(y), \exp_y^{-1} x \rangle_y. \qquad (4.7)$$
Notice that the expression of the Bregman distance depends on the definition of the metric. Some examples for different manifolds will be given in Section 7.

Let us adopt the following notation for the partial level sets of $D_h$. For $\alpha \in \mathbb{R}$, take
$$\Gamma_1(\alpha, y) := \{x \in \bar S : D_h(x, y) \le \alpha\}, \qquad \Gamma_2(x, \alpha) := \{y \in S : D_h(x, y) \le \alpha\}.$$

Definition 4.1 Let $M$ be a complete Riemannian manifold. A real-valued function $h : M \to \mathbb{R}$ is called a Bregman function if there exists a nonempty open totally convex set $S$ such that

a. $h$ is continuous on $\bar S$;
b. $h$ is strictly convex on $\bar S$;
c. $h$ is continuously differentiable on $S$;
d. for all $\alpha \in \mathbb{R}$ the partial level sets $\Gamma_1(\alpha, y)$ and $\Gamma_2(x, \alpha)$ are bounded for every $y \in S$ and $x \in \bar S$, respectively;
e. if $\lim_{k\to\infty} y^k = y^* \in \bar S$, then $\lim_{k\to\infty} D_h(y^*, y^k) = 0$;
f. if $\lim_{k\to\infty} D_h(z^k, y^k) = 0$, $\lim_{k\to\infty} y^k = y^* \in \bar S$, and $\{z^k\}$ is bounded, then $\lim_{k\to\infty} z^k = y^*$.

We denote the family of Bregman functions by $\mathcal{B}$ and refer to the set $S$ as the zone of the function $h$.
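The dependence of (4.7) on the metric can be made concrete with a small sketch on $M = \mathbb{R}^n_{++}$ with the Dikin metric of Example 7.3, where $\exp_y^{-1}x = (y_i\ln(x_i/y_i))_i$. The Bregman function $h(x) = \sum_i x_i$ used below is an illustrative assumption: it is only linear in the Euclidean sense, but strictly convex along the geodesics of this metric.

```python
# Sketch of the Bregman distance (4.7) on M = IR^n_++ with the Dikin
# metric X^{-2} (Example 7.3). The illustrative choice h(x) = sum_i x_i
# is geodesically strictly convex for this metric.
import numpy as np

def bregman_dikin(h, h_eucl_grad, x, y):
    # grad h(y) = G(y)^{-1} nabla h(y) = Y^2 nabla h(y); pairing it with
    # exp_y^{-1} x under <u,v>_y = sum u_i v_i / y_i^2 collapses to the
    # Euclidean pairing <nabla h(y), exp_y^{-1} x>.
    v = y * np.log(x / y)                     # exp_y^{-1} x
    return h(x) - h(y) - np.dot(h_eucl_grad(y), v)

h = lambda x: np.sum(x)
grad_h = lambda y: np.ones_like(y)

x, y = np.array([1.0, 2.0]), np.array([2.0, 1.0])
print(bregman_dikin(h, grad_h, x, y))         # positive (here log 2)
print(bregman_dikin(h, grad_h, y, y))         # 0.0, as D_h(y, y) = 0
```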

Lemma 4.1 Let $h \in \mathcal{B}$ with zone $S$. Then:

i. $\operatorname{grad} D_h(\cdot, y)(x) = \operatorname{grad} h(x) - P_{\gamma,0,1}\operatorname{grad} h(y)$ for all $x, y \in S$, where $\gamma : [0,1] \to M$ is the minimal geodesic such that $\gamma(0) = y$ and $\gamma(1) = x$;
ii. $D_h(\cdot, y)$ is strictly convex on $\bar S$ for all $y \in S$;
iii. for all $x \in \bar S$ and $y \in S$, $D_h(x, y) \ge 0$, and $D_h(x, y) = 0$ if and only if $x = y$.

Proof. i. From definition (4.7) we have
$$p(t) := D_h(\gamma(t), \gamma(0)) = h(\gamma(t)) - h(\gamma(0)) - t\,\langle\operatorname{grad} h(\gamma(0)), \gamma'(0)\rangle_{\gamma(0)}.$$
Differentiating with respect to $t$ gives
$$p'(t) = \langle\operatorname{grad} h(\gamma(t)), \gamma'(t)\rangle_{\gamma(t)} - \langle\operatorname{grad} h(\gamma(0)), \gamma'(0)\rangle_{\gamma(0)}.$$
Using the fact that $P_{\gamma,0,t}\,\gamma'(0) = \gamma'(t)$ and that parallel transport is an isometry, we obtain
$$p'(t) = \langle\operatorname{grad} h(\gamma(t)) - P_{\gamma,0,t}\operatorname{grad} h(\gamma(0)),\ \gamma'(t)\rangle_{\gamma(t)}. \qquad (4.8)$$
On the other hand,
$$p'(t) = \frac{d}{dt}\big(D_h(\gamma(t), \gamma(0))\big) = d\big(D_h(\cdot, \gamma(0))\big)_{\gamma(t)}(\gamma'(t)) = \langle\operatorname{grad} D_h(\cdot, \gamma(0))(\gamma(t)),\ \gamma'(t)\rangle_{\gamma(t)}. \qquad (4.9)$$
From (4.8) and (4.9) it follows that
$$\operatorname{grad} D_h(\cdot, \gamma(0))(\gamma(t)) = \operatorname{grad} h(\gamma(t)) - P_{\gamma,0,t}\operatorname{grad} h(\gamma(0)).$$
Taking $t = 1$ gives
$$\operatorname{grad} D_h(\cdot, y)(x) = \operatorname{grad} h(x) - P_{\gamma,0,1}\operatorname{grad} h(y).$$
ii. As $h$ is strictly convex on $\bar S$ and $\langle\operatorname{grad} h(y), \exp_y^{-1} x\rangle_y$ is a linear function of $x$, $D_h(\cdot, y)$ is strictly convex on $\bar S$.
iii. Use, again, the strict convexity of $h$.

Observe that $D_h$ is not a distance in the usual sense of the term: in general the triangle inequality is not valid, and neither is the symmetry property. From now on we use the notation $\operatorname{grad} D_h(x, y)$ to mean $\operatorname{grad} D_h(\cdot, y)(x)$. So, if $\gamma$ is the minimal geodesic such that $\gamma(0) = y$ and $\gamma(1) = x$, from Lemma 4.1, i, we obtain
$$\operatorname{grad} D_h(x, y) = \operatorname{grad} h(x) - P_{\gamma,0,1}\operatorname{grad} h(y).$$

Definition 4.2 Let $\Omega \subset M$, let $S$ be an open totally convex set, and let $y \in S$. A point $P_y \in \Omega \cap \bar S$ for which
$$D_h(P_y, y) = \min_{x \in \Omega \cap \bar S} D_h(x, y) \qquad (4.10)$$
is called a $D_h$-projection of the point $y$ on the set $\Omega$.

The next lemma furnishes the existence and uniqueness of the $D_h$-projection for a Bregman function, under an appropriate assumption on $\Omega$.

Lemma 4.2 Let $\Omega \subset M$ be a closed totally convex set and $h \in \mathcal{B}$ with zone $S$. If $\Omega \cap \bar S \neq \emptyset$ then, for any $y \in S$, there exists a unique $D_h$-projection $P_y$ on $\Omega$.

Proof. For any $x \in \Omega \cap \bar S$, the set $B := \{z \in \bar S : D_h(z, y) \le D_h(x, y)\}$ is bounded (from Definition 4.1, d) and closed (because $D_h(\cdot, y)$ is continuous on $\bar S$, due to Definition 4.1, a). Therefore the set $T := (\Omega \cap \bar S) \cap B$ is nonempty, because $x \in B \cap \Omega$, and bounded. Now, as the intersection of closed sets is closed, $T$ is also closed, hence compact. Consequently $D_h(z, y)$, a continuous function of $z$, attains its minimum on the compact set $T$ at some point, which we denote by $\bar x$. For every $z \in \Omega \cap \bar S$ lying outside $B$ we have $D_h(\bar x, y) < D_h(z, y)$; hence $\bar x$ satisfies (4.10). Uniqueness follows from the strict convexity of $D_h(\cdot, y)$; therefore $\bar x = P_y$.

Lemma 4.3 Let $h \in \mathcal{B}$ with zone $S$ and $y \in S$. Suppose that $P_y \in S$, where $P_y$ is the $D_h$-projection on some closed totally convex set $\Omega$ such that $\Omega \cap \bar S \neq \emptyset$. Then the function
$$g(x) := D_h(x, y) - D_h(x, P_y)$$
is convex on $\bar S$.

Proof. From (4.7),
$$D_h(x, y) - D_h(x, P_y) = h(P_y) - h(y) + \langle\operatorname{grad} h(P_y), \exp_{P_y}^{-1} x\rangle_{P_y} - \langle\operatorname{grad} h(y), \exp_y^{-1} x\rangle_y.$$
Due to the linearity in $x$ of the functions $\langle\operatorname{grad} h(P_y), \exp_{P_y}^{-1} x\rangle_{P_y}$ and $\langle\operatorname{grad} h(y), \exp_y^{-1} x\rangle_y$, the result follows.

Proposition 4.1 Let $h \in \mathcal{B}$ with zone $S$, and let $\Omega \subset M$ be a closed totally convex set such that $\Omega \cap \bar S \neq \emptyset$. Let $y \in S$ and assume that $P_y \in S$, where $P_y$ denotes the $D_h$-projection of $y$ on $\Omega$. Then, for any $x \in \Omega \cap \bar S$, the following inequality holds:
$$D_h(P_y, y) \le D_h(x, y) - D_h(x, P_y).$$

Proof. Let $\gamma : [0,1] \to M$ be the minimal geodesic such that $\gamma(0) = P_y$ and $\gamma(1) = x$. Due to Lemma 4.3, the function $G(x) = D_h(x, y) - D_h(x, P_y)$ is convex on $\bar S$. Then, in particular, $G(\gamma(t))$ is convex for $t \in (0,1)$ (see Theorem 3.2). Thus
$$G(\gamma(t)) \le t\,G(x) + (1-t)\,G(P_y),$$
which gives
$$D_h(\gamma(t), y) - D_h(\gamma(t), P_y) \le t\big(D_h(x, y) - D_h(x, P_y)\big) + D_h(P_y, y) - t\,D_h(P_y, y),$$

where we used the fact that $D_h(P_y, P_y) = 0$. The above inequality is equivalent to
$$\frac{1}{t}\big(D_h(\gamma(t), y) - D_h(P_y, y)\big) - \frac{1}{t}D_h(\gamma(t), P_y) \le D_h(x, y) - D_h(x, P_y) - D_h(P_y, y). \qquad (4.11)$$
As $\Omega \cap \bar S$ is totally convex and $x, P_y \in \Omega \cap \bar S$, we have $\gamma(t) \in \Omega \cap \bar S$ for all $t \in (0,1)$. Then use the fact that $P_y$ is the projection to get
$$\frac{1}{t}\big(D_h(\gamma(t), y) - D_h(P_y, y)\big) \ge 0.$$
Using this inequality in (4.11), we obtain
$$-\frac{1}{t}D_h(\gamma(t), P_y) \le D_h(x, y) - D_h(x, P_y) - D_h(P_y, y).$$
Now, as $D_h(\cdot, z)$ is differentiable for all $z \in S$, we can take the limit as $t \to 0^+$, obtaining
$$-\langle\operatorname{grad} D_h(P_y, P_y), \exp_{P_y}^{-1} x\rangle_{P_y} \le D_h(x, y) - D_h(x, P_y) - D_h(P_y, y).$$
Clearly the left-hand side is null (since $\operatorname{grad} D_h(P_y, P_y) = 0$), leading to the desired result.

5 Regularization

Let $M$ be a complete Riemannian manifold and $f : X \subset M \to \mathbb{R}$ a real-valued function. Let $S$ be an open totally convex set and $h : M \to \mathbb{R}$ a function differentiable on $S$. For $\lambda > 0$, the Moreau-Yosida regularization $f_\lambda : S \to \mathbb{R}$ of $f$ is defined by
$$f_\lambda(y) = \inf_{x \in X \cap \bar S} \{f(x) + \lambda D_h(x, y)\}, \qquad (5.12)$$
where $D_h(x, y)$ is given in (4.7). For the function (5.12) to be well defined, $h$ and $f$ should satisfy some conditions.

Proposition 5.1 If $f : X \subset M \to \mathbb{R}$ is a bounded below, quasiconvex and continuous function over a closed totally convex set $X$, and $h \in \mathcal{B}$ with zone $S$ is such that $X \cap \bar S \neq \emptyset$, then, for every $y \in S$ and $\lambda > 0$, there exists a unique point, denoted by $x_f(y, \lambda)$, such that
$$f_\lambda(y) = f(x_f(y, \lambda)) + \lambda D_h(x_f(y, \lambda), y). \qquad (5.13)$$

Proof. Let $\beta$ be a lower bound for $f$ on $X$; then
$$f(x) + \lambda D_h(x, y) \ge \beta + \lambda D_h(x, y), \quad \text{for all } x \in X \cap \bar S.$$
It follows from Definition 4.1, d, that the level sets of the function $f(\cdot) + \lambda D_h(\cdot, y)$ are bounded. This function is also continuous on $X \cap \bar S$, due to Definition 4.1, a, and the hypothesis on $f$, so its level sets are closed, hence compact. Now, from continuity and compactness arguments, $f(\cdot) + \lambda D_h(\cdot, y)$ has a minimizer on $X \cap \bar S$. Finally, the strict convexity of $D_h(\cdot, y)$ (see Lemma 4.1, ii) together with the quasiconvexity of $f$ yields the uniqueness of $x_f(y, \lambda)$; equality (5.13) then follows from (5.12).

Next we give a different condition on $h$ that also ensures that (5.12) is well defined. We consider the unconstrained case $X = \bar S = M$, so (5.12) reduces to
$$f_\lambda(y) = \inf_{x \in M}\{f(x) + \lambda D_h(x, y)\}.$$
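A quick way to see (5.12) at work is the Euclidean case $h(x) = \|x\|^2/2$, where $D_h(x, y) = \|x - y\|^2/2$ and the regularization of $f(x) = |x|$ has a known closed form of Huber type. The following sketch, with an illustrative $f$ and grid of our choosing, compares the numerically computed infimum with that closed form.

```python
# Sketch: evaluating the Moreau-Yosida regularization (5.12) in the
# Euclidean case h(x) = x^2/2, so D_h(x, y) = (x - y)^2 / 2, for f = |.|.
import numpy as np
from scipy.optimize import minimize_scalar

f, lam = np.abs, 2.0
for y in np.linspace(-2.0, 2.0, 5):
    f_lam = minimize_scalar(lambda x: f(x) + lam * 0.5 * (x - y) ** 2).fun
    # Closed form: lam*y^2/2 if |y| <= 1/lam, else |y| - 1/(2*lam)
    exact = 0.5 * lam * y * y if abs(y) <= 1.0 / lam else abs(y) - 0.5 / lam
    print(f"{y:+.1f}  {f_lam:.6f}  {exact:.6f}")   # the two columns agree
```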

Let us introduce the following definition.

Definition 5.1 A function $g : M \to \mathbb{R}$ is 1-coercive at $y \in M$ if
$$\lim_{d(x,y)\to\infty} \frac{g(x)}{d(x, y)} = +\infty.$$

Note that if $g : M \to \mathbb{R}$ is 1-coercive at $y \in M$, then it is easy to show that the minimizer set of $g$ on $M$ is nonempty.

Lemma 5.1 If $f : M \to \mathbb{R}$ is bounded below, $\lambda > 0$, and $h : M \to \mathbb{R}$ is 1-coercive at $y \in M$, then the function $f(\cdot) + \lambda D_h(\cdot, y) : M \to \mathbb{R}$ is 1-coercive at $y \in M$.

Proof. As above, let $\beta$ be a lower bound for $f$. Then
$$\frac{f(x) + \lambda D_h(x, y)}{d(x, y)} \ge \frac{\beta}{d(x, y)} + \lambda\frac{D_h(x, y)}{d(x, y)} = \frac{\beta}{d(x, y)} + \lambda\frac{h(x)}{d(x, y)} - \lambda\frac{h(y)}{d(x, y)} - \lambda\frac{\langle\operatorname{grad} h(y), \exp_y^{-1}x\rangle_y}{d(x, y)}$$
$$\ge \frac{\beta}{d(x, y)} + \lambda\frac{h(x)}{d(x, y)} - \lambda\frac{h(y)}{d(x, y)} - \lambda\|\operatorname{grad} h(y)\|_y,$$
where the equality comes from the definition of $D_h$, and the last inequality results from the Cauchy-Schwarz inequality and the fact that $\|\exp_y^{-1}x\|_y = d(x, y)$. Taking $d(x, y) \to \infty$ and using the 1-coercivity of $h$ at $y$, we get
$$\lim_{d(x,y)\to\infty} \frac{(f(\cdot) + \lambda D_h(\cdot, y))(x)}{d(x, y)} = +\infty.$$

Proposition 5.2 Let $h : M \to \mathbb{R}$ be a strictly convex function, 1-coercive at $y \in M$, and $f : M \to \mathbb{R}$ a quasiconvex, bounded below function. Then there exists a unique point $x_f(y, \lambda)$ such that
$$f_\lambda(y) = f(x_f(y, \lambda)) + \lambda D_h(x_f(y, \lambda), y).$$

Proof. The result follows from the lemma above and, as in the proof of the preceding proposition, from the strict convexity of $D_h(\cdot, y)$ and the quasiconvexity of $f$.

6 Proximal Point Algorithm

We are interested in solving the optimization problem
$$(p) \qquad \min_{x \in M} f(x),$$
where $M$ is a complete Riemannian manifold. The main convergence results will be given when $f$ is a continuous quasiconvex or a convex function on $M$. The PBD algorithm is defined as
$$x^0 \in M, \qquad (6.14)$$
$$x^k = \arg\min_{x \in M}\{f(x) + \lambda_k D_h(x, x^{k-1})\}, \qquad (6.15)$$
where $h$ is a Bregman function with zone $M$, $D_h$ is as in (4.7), and $\lambda_k$ is a positive parameter.
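The following sketch runs the PBD iteration (6.14)-(6.15) on the Hadamard manifold $M = \mathbb{R}^n_{++}$ with the Dikin metric of Example 7.3, with $\lambda_k \to 0$ as in (6.17) below. The Bregman function $h(x) = \sum_i x_i$ and the geodesically convex objective $f(x) = \sum_i(\ln x_i - c_i)^2$ are illustrative assumptions, not choices made in the paper.

```python
# Minimal sketch of the PBD iteration (6.14)-(6.15) on M = IR^n_++ with
# the Dikin metric (Example 7.3), illustrative h(x) = sum_i x_i and the
# geodesically convex objective f(x) = sum_i (log x_i - c_i)^2.
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, -0.5])
f = lambda x: np.sum((np.log(x) - c) ** 2)
D = lambda x, y: np.sum(x - y - y * np.log(x / y))   # D_h of Example 7.3

x = np.ones(2)                                       # x^0 in M
for k in range(1, 60):
    lam, xk = 1.0 / k, x.copy()                      # lambda_k -> 0, (6.17)
    x = minimize(lambda z: f(z) + lam * D(z, xk), xk,
                 bounds=[(1e-10, None)] * 2).x
print(x, np.exp(c))   # x^k approaches the optimum (e^{c_1}, e^{c_2})
```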

In the particular case where $M$ is the Euclidean space $\mathbb{R}^n$ and $h(x) = (1/2)x^Tx$, we have
$$x^k = \arg\min_{x \in \mathbb{R}^n}\{f(x) + (\lambda_k/2)\|x - x^{k-1}\|^2\}.$$
Therefore the PBD algorithm is a natural generalization of the proximal point algorithm on $\mathbb{R}^n$. We will use the following parameter conditions for the PBD algorithm:
$$0 < \lambda_k < \bar\lambda, \qquad (6.16)$$
or
$$\lim_{k\to+\infty}\lambda_k = 0, \quad \text{with } \lambda_k > 0. \qquad (6.17)$$
Note that (6.16) implies that $\sum_{k=1}^{\infty}(1/\lambda_k) = +\infty$. Next, we set some hypotheses that will be considered in the following.

Assumption A1. The optimal set of problem (p), denoted by $X^*$, is nonempty.

Assumption A2. $h \in \mathcal{B}$ satisfies one of the following conditions:

A2.1 $\operatorname{grad} h$ is a Lipschitz function on the bounded subsets of $M$; that is, for any bounded subset $B$ of $M$ and any $x, y \in B$, it holds that
$$\|\operatorname{grad} h(x) - P_{\gamma,0,1}\operatorname{grad} h(y)\| \le L\,d(x, y),$$
where $L = L(B) > 0$, $\gamma(t)$ is the minimal geodesic such that $\gamma(0) = y$ and $\gamma(1) = x$, and $d$ is the Riemannian distance.

A2.2 Given $x, y, z \in M$ and letting $\gamma : [0,1] \to M$ be the geodesic joining $\gamma(0) = y$ and $\gamma(1) = z$, $h$ satisfies
$$\langle P_{\gamma,0,1}\operatorname{grad} h(y),\ \exp_z^{-1}x - \exp_z^{-1}y - P_{\gamma,0,1}\exp_y^{-1}x\rangle_z \ge 0.$$

Remark 1. The alternative Assumptions A2 will be used to prove that any limit point of the sequence $\{x^k\}$, with $\lambda_k$ as in (6.16), is an optimal solution of problem (p). When $\lambda_k$ verifies (6.17), we will show that Assumption A2 is not necessary to prove convergence of the PBD algorithm. On the other hand, it is worthwhile to observe that when $M$ is the Euclidean space $\mathbb{R}^n$, any Bregman function satisfies Assumption A2.2. Clearly, in Euclidean space geodesics are straight lines, so
$$\exp_z^{-1}y + P_{\gamma,0,1}\exp_y^{-1}x = (y - z) + (x - y) = x - z = \exp_z^{-1}x.$$
Therefore
$$\langle P_{\gamma,0,1}\operatorname{grad} h(y),\ \exp_z^{-1}x - \exp_z^{-1}y - P_{\gamma,0,1}\exp_y^{-1}x\rangle_z = 0.$$
Furthermore, this hypothesis implies
$$D_h(x, y) - D_h(z, y) - D_h(x, z) \ge \langle\operatorname{grad} h(z) - P_{\gamma,0,1}\operatorname{grad} h(y),\ \exp_z^{-1}x\rangle_z. \qquad (6.18)$$

Indeed, using the definition of $D_h$, we get
$$D_h(x, y) - D_h(z, y) - D_h(x, z) = \langle\operatorname{grad} h(y),\ \exp_y^{-1}z - \exp_y^{-1}x\rangle_y + \langle\operatorname{grad} h(z),\ \exp_z^{-1}x\rangle_z.$$
Applying parallel transport from $\gamma(0) = y$ to $\gamma(1) = z$ in the first term on the right side (and using $P_{\gamma,0,1}\exp_y^{-1}z = -\exp_z^{-1}y$) gives
$$D_h(x, y) - D_h(z, y) - D_h(x, z) = \langle P_{\gamma,0,1}\operatorname{grad} h(y),\ -\exp_z^{-1}y - P_{\gamma,0,1}\exp_y^{-1}x\rangle_z + \langle\operatorname{grad} h(z),\ \exp_z^{-1}x\rangle_z.$$
Now add and subtract the term $\langle P_{\gamma,0,1}\operatorname{grad} h(y),\ \exp_z^{-1}x\rangle_z$ to produce
$$D_h(x, y) - D_h(z, y) - D_h(x, z) = \langle\operatorname{grad} h(z) - P_{\gamma,0,1}\operatorname{grad} h(y),\ \exp_z^{-1}x\rangle_z + \langle P_{\gamma,0,1}\operatorname{grad} h(y),\ \exp_z^{-1}x - \exp_z^{-1}y - P_{\gamma,0,1}\exp_y^{-1}x\rangle_z.$$
The aimed result is evident after using Assumption A2.2.

Remark 2. In order to verify that $h : M \to \mathbb{R}$ is a Bregman function with zone $M$, it is sufficient to show that conditions a to d of Definition 4.1 are satisfied, as e and f are immediate consequences of a, b, c and d.

6.1 Convergence Results

In this subsection we prove the convergence of the proposed algorithm. Our results are motivated by the works [14, 4, 2].

6.1.1 The quasiconvex case

Theorem 6.1 Assume Assumption A1 and that $f$ is a quasiconvex and continuous function. Then the sequence $\{x^k\}$ generated by the PBD algorithm is well defined.

Proof. The proof proceeds by induction. The claim holds for $k = 0$, due to (6.14). Assume that $x^k$ is well defined; from Proposition 5.1, with $X = \bar S = M$, we have that $x^{k+1}$ exists and is unique.

Theorem 6.2 Under the same hypotheses, the sequence $\{x^k\}$ generated by the PBD algorithm is bounded.

Proof. Since $x^k$ satisfies (6.15), we have
$$f(x^k) + \lambda_k D_h(x^k, x^{k-1}) \le f(x) + \lambda_k D_h(x, x^{k-1}), \quad \forall x \in M. \qquad (6.19)$$
Hence, for all $x \in M$ such that $f(x) \le f(x^k)$, it is true that $D_h(x^k, x^{k-1}) \le D_h(x, x^{k-1})$. Therefore $x^k$ is the unique $D_h$-projection of $x^{k-1}$ on the closed totally convex set (see Theorem 3.1)
$$\Omega := \{x \in M : f(x) \le f(x^k)\}.$$

Using Proposition 4.1 and the fact that $X^* \subset \Omega$, we have, for every $x^* \in X^*$,
$$0 \le D_h(x^k, x^{k-1}) \le D_h(x^*, x^{k-1}) - D_h(x^*, x^k). \qquad (6.20)$$
Thus
$$D_h(x^*, x^k) \le D_h(x^*, x^{k-1}). \qquad (6.21)$$
This means that $\{x^k\}$ is $D_h$-Fejér monotone with respect to the set $X^*$. We can now apply Definition 4.1, d, to see that $\{x^k\}$ is bounded, because $x^k \in \Gamma_2(x^*, \alpha)$ with $\alpha = D_h(x^*, x^0)$.

Proposition 6.1 Under the assumptions of the preceding theorem, the following facts hold:
a. for all $x^* \in X^*$ the sequence $\{D_h(x^*, x^k)\}$ is convergent;
b. $\lim_{k\to\infty} D_h(x^k, x^{k-1}) = 0$;
c. $\{f(x^k)\}$ is nonincreasing;
d. if $\lim_{j\to+\infty} x^{k_j} = \bar x$ then $\lim_{j\to+\infty} x^{k_j+1} = \bar x$.

Proof. a. From (6.21), $\{D_h(x^*, x^k)\}$ is a bounded below nonincreasing sequence, hence convergent.
b. Taking the limit as $k$ goes to infinity in (6.20) and using the previous result, we obtain $\lim_{k\to\infty} D_h(x^k, x^{k-1}) = 0$, as desired.
c. Taking $x = x^{k-1}$ in (6.19), it follows that
$$0 \le D_h(x^k, x^{k-1}) \le (1/\lambda_k)\big(f(x^{k-1}) - f(x^k)\big), \qquad (6.22)$$
since $D_h(x^{k-1}, x^{k-1}) = 0$. Thus $\{f(x^k)\}$ is nonincreasing.
d. Taking $z^k = x^{k_j+1}$ and $y^k = x^{k_j}$ in Definition 4.1, f, we obtain the result.

Theorem 6.3 Under Assumption A1, and assuming that $f$ is a quasiconvex and continuous function, any limit point of $\{x^k\}$ generated by the PBD algorithm with $\lambda_k$ satisfying (6.17) is an optimal solution of (p).

Proof. Let $x^*$ be an optimal solution of (p) and let $\bar x \in M$ be a cluster point of $\{x^k\}$; then there exists a subsequence $\{x^{k_j}\}$ such that
$$\lim_{j\to\infty} x^{k_j} = \bar x.$$
As $x^{k_j}$ is a solution of (6.15), we have
$$f(x^{k_j}) + \lambda_{k_j} D_h(x^{k_j}, x^{k_j-1}) \le f(x^*) + \lambda_{k_j} D_h(x^*, x^{k_j-1}).$$
This rewrites as
$$\lambda_{k_j}\big(D_h(x^{k_j}, x^{k_j-1}) - D_h(x^*, x^{k_j-1})\big) \le f(x^*) - f(x^{k_j}).$$
Using the differential characterization of convex functions for $D_h(\cdot, x^{k_j-1})$ gives
$$\lambda_{k_j}\langle\operatorname{grad} D_h(x^*, x^{k_j-1}),\ \exp_{x^*}^{-1}x^{k_j}\rangle_{x^*} \le f(x^*) - f(x^{k_j}).$$
Taking $j \to \infty$ above and considering hypothesis (6.17), we obtain $f(\bar x) \le f(x^*)$. Therefore any limit point is a solution of problem (p).

Theorem 6.4 Under Assumption A1, and assuming that $f$ is a quasiconvex and continuous function, the sequence $\{x^k\}$ generated by the PBD algorithm, with $\lambda_k$ satisfying (6.17), converges to an optimal solution of (p).

Proof. From Theorem 6.2, $\{x^k\}$ is bounded, so there exists a convergent subsequence. Let $\{x^{k_j}\}$ be a subsequence of $\{x^k\}$ such that $\lim_{j\to\infty} x^{k_j} = \bar x$. From Definition 4.1, e, it is true that
$$\lim_{j\to\infty} D_h(\bar x, x^{k_j}) = 0.$$
Now, from Theorem 6.3, $\bar x$ is an optimal solution of (p), so from Proposition 6.1, a, $\{D_h(\bar x, x^k)\}$ is a convergent sequence with a subsequence converging to 0; hence the overall sequence converges to 0, that is,
$$\lim_{k\to\infty} D_h(\bar x, x^k) = 0.$$
To prove that $\{x^k\}$ has a unique limit point, let $\tilde x \in X^*$ be another limit point of $\{x^k\}$. Then $\lim_{l\to\infty} D_h(\tilde x, x^{k_l}) = 0$ with $\lim_{l\to\infty} x^{k_l} = \tilde x$. So, from Definition 4.1, f, $\tilde x = \bar x$. It follows that $\{x^k\}$ cannot have more than one limit point and therefore $\lim_{k\to+\infty} x^k = \bar x \in X^*$.

6.1.2 The convex case

Theorem 6.5 Suppose that Assumptions A1 and A2.1 are satisfied, and that $f$ is convex. If $\lambda_k$ satisfies (6.16), then any limit point of $\{x^k\}$ is an optimal solution of problem (p).

Proof. Let $\bar x \in M$ be a limit point of $\{x^k\}$; then there exists a subsequence $\{x^{k_j}\}$ such that
$$\lim_{j\to\infty} x^{k_j} = \bar x.$$
From (6.15) and Theorem 3.4,
$$0 \in \partial\big[f(\cdot) + \lambda_{k_j+1} D_h(\cdot, x^{k_j})\big](x^{k_j+1}),$$
or
$$-\lambda_{k_j+1}\operatorname{grad} D_h(x^{k_j+1}, x^{k_j}) \in \partial f(x^{k_j+1}).$$
Let $\gamma_{k_j}$ be the geodesic such that $\gamma_{k_j}(0) = x^{k_j}$ and $\gamma_{k_j}(1) = x^{k_j+1}$. By Lemma 4.1, i, we obtain
$$\lambda_{k_j+1}\big[P_{\gamma_{k_j},0,1}\operatorname{grad} h(x^{k_j}) - \operatorname{grad} h(x^{k_j+1})\big] \in \partial f(x^{k_j+1}).$$
Let $x^*$ be an optimal solution of (p). Using (3.6) for $x = x^*$ and $y = x^{k_j+1}$, we have
$$f(x^*) - f(x^{k_j+1}) \ge \langle y^{k_j},\ \exp_{x^{k_j+1}}^{-1}x^*\rangle_{x^{k_j+1}}, \qquad (6.23)$$
where
$$y^{k_j} := \lambda_{k_j+1}\big[P_{\gamma_{k_j},0,1}\operatorname{grad} h(x^{k_j}) - \operatorname{grad} h(x^{k_j+1})\big].$$
On the other hand, from the Cauchy-Schwarz inequality,
$$\langle y^{k_j},\ \exp_{x^{k_j+1}}^{-1}x^*\rangle_{x^{k_j+1}} \ge -\|y^{k_j}\|_{x^{k_j+1}}\,\|\exp_{x^{k_j+1}}^{-1}x^*\|_{x^{k_j+1}}.$$

We have $\|\exp_{x^{k_j+1}}^{-1}x^*\|_{x^{k_j+1}} = d(x^*, x^{k_j+1})$; also, from Theorem 6.2, the sequence is bounded, so there exists $\bar M > 0$ such that
$$\langle y^{k_j},\ \exp_{x^{k_j+1}}^{-1}x^*\rangle_{x^{k_j+1}} \ge -\bar M\,\|y^{k_j}\|_{x^{k_j+1}}.$$
Using this fact in inequality (6.23), we obtain
$$f(x^*) \ge f(x^{k_j+1}) - \bar M\,\|y^{k_j}\|_{x^{k_j+1}}. \qquad (6.24)$$
To conclude the proof we will show that
$$\lim_{j\to\infty}\|y^{k_j}\|_{x^{k_j+1}} = 0.$$
Indeed, from Assumption A2.1,
$$0 \le \|y^{k_j}\|_{x^{k_j+1}} \le \lambda_{k_j+1}\,L\,d(x^{k_j+1}, x^{k_j}).$$
Now, taking $j \to +\infty$ and using the boundedness of $\{\lambda_k\}$ and Proposition 6.1, d, we obtain $\lim_{j\to\infty}\|y^{k_j}\|_{x^{k_j+1}} = 0$, as wanted. Finally, taking $j \to +\infty$ in (6.24) and using the continuity of $f$, we get $f(x^*) \ge f(\bar x)$. Therefore any limit point is an optimal solution of problem (p).

Theorem 6.6 Suppose that Assumptions A1 and A2.2 are satisfied. If $\lambda_k$ satisfies (6.16), then any limit point of $\{x^k\}$ is an optimal solution of problem (p).

Proof. Analogously to the previous theorem, given $x \in M$ and using (6.15), and (3.6) for $x$ and $y = x^k$, we have
$$-\frac{1}{\lambda_k}\langle y^k,\ \exp_{x^k}^{-1}x\rangle_{x^k} \ge \frac{1}{\lambda_k}\big(f(x^k) - f(x)\big), \qquad (6.25)$$
where
$$y^k := \lambda_k\big[P_{\gamma_k,0,1}\operatorname{grad} h(x^{k-1}) - \operatorname{grad} h(x^k)\big],$$
and $\gamma_k$ is the geodesic joining $\gamma_k(0) = x^{k-1}$ to $\gamma_k(1) = x^k$. On the other hand, taking $y = x^{k-1}$ and $z = x^k$ in (6.18), we have
$$D_h(x, x^{k-1}) - D_h(x^k, x^{k-1}) - D_h(x, x^k) \ge -\frac{1}{\lambda_k}\langle y^k,\ \exp_{x^k}^{-1}x\rangle_{x^k}.$$
In this inequality, use (6.25) and the boundedness of $\{\lambda_k\}$ (noting that $f(x^k) - f(x) \ge 0$ for the choice $x = x^*$ used below), giving
$$D_h(x, x^{k-1}) - D_h(x^k, x^{k-1}) - D_h(x, x^k) \ge \frac{1}{\bar\lambda}\big(f(x^k) - f(x)\big). \qquad (6.26)$$
Now, let $\bar x \in M$ be a limit point of $\{x^k\}$; then there exists a subsequence $\{x^{k_j}\}$ such that
$$\lim_{j\to\infty} x^{k_j} = \bar x.$$
From (6.26) with $x = x^*$ we have
$$D_h(x^*, x^{k_j-1}) - D_h(x^{k_j}, x^{k_j-1}) - D_h(x^*, x^{k_j}) \ge \frac{1}{\bar\lambda}\big(f(x^{k_j}) - f(x^*)\big).$$
Take $j \to \infty$ and apply Proposition 6.1, a and b, to obtain $f(x^*) \ge f(\bar x)$. Therefore any limit point of $\{x^k\}$ is an optimal solution of (p).

Theorem 6.7 Under Assumptions A1 and A2, the sequence $\{x^k\}$ generated by the PBD algorithm, with $\lambda_k$ satisfying (6.16), converges to an optimal solution of (p).

Proof. Analogous to the proof of Theorem 6.4.

7 Examples

Examples 7.1 to 7.4 are Riemannian manifolds with zero sectional curvature; in Example 7.5 it is negative. All of them are complete simply connected Riemannian manifolds with nonpositive curvature, the well-known Hadamard manifolds. They enjoy nice properties, such as the uniqueness of geodesics between any two points and the diffeomorphism with Euclidean space. Therefore totally convex sets may simply be called convex, and the convexity of functions is determined on the (unique) geodesic joining the points. We observe that we have no example on a complete noncompact Riemannian manifold of positive sectional curvature (these also have the interesting property of being diffeomorphic to $\mathbb{R}^n$; see [22]). Additionally, notice that if the sectional curvature of a Riemannian manifold satisfies $K(x) \ge \delta$ for all $x \in M$ and some positive $\delta$, then the manifold is compact and (at least) homeomorphic to the sphere; see [22]. As quasiconvex functions are constant on spheres (see [24]), the typical sphere examples are ruled out.

Example 7.1 The Euclidean space is a Hadamard manifold with the metric $G(x) = I$. Its geodesics are straight lines, and the Bregman distances have the form
$$D_h(x, y) = h(x) - h(y) - \sum_{i=1}^n (x_i - y_i)\frac{\partial h(y)}{\partial y_i}.$$

Example 7.2 Let $\mathbb{R}^n$ be endowed with the metric $G(x)$ whose matrix is the identity except for the block in the last two coordinates,
$$g_{n-1,n-1}(x) = 1 + 4x_{n-1}^2, \qquad g_{n-1,n}(x) = g_{n,n-1}(x) = -2x_{n-1}, \qquad g_{nn}(x) = 1.$$
Then $(\mathbb{R}^n, G(x))$ is a Hadamard manifold isometric to $(\mathbb{R}^n, I)$ through the map $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ defined by $\Phi(x) = (x_1, x_2, \dots, x_{n-1}, x_{n-1}^2 - x_n)$; see [6]. The unique geodesic joining the points $\gamma(0) = y$ and $\gamma(1) = x$ is $\gamma(t) = (\gamma_1, \dots, \gamma_n)$, where $\gamma_i(t) = y_i + t(x_i - y_i)$, $i = 1, \dots, n-1$, and
$$\gamma_n(t) = y_n + t\big((x_n - y_n) - (x_{n-1} - y_{n-1})^2\big) + t^2(x_{n-1} - y_{n-1})^2.$$
Then the Bregman distance is
$$D_h(x, y) = h(x) - h(y) - \sum_{i=1}^n (x_i - y_i)\frac{\partial h(y)}{\partial y_i} + (x_{n-1} - y_{n-1})^2\frac{\partial h(y)}{\partial y_n}.$$

Example 7.3 $M = \mathbb{R}^n_{++}$ with the Dikin metric $X^{-2}$ (where $X = \operatorname{diag}(x_1, \dots, x_n)$) is a Hadamard manifold. Defining $\pi : M \to \mathbb{R}^n$ by $\pi(x) = (\ln x_1, \dots, \ln x_n)$, it can be proved that $\pi$ is an isometry. It is well known (see, for example, [20]) that the unique geodesic joining the points $\gamma(0) = y$ and $\gamma(1) = x$ is
$$\gamma(t) = \big(x_1^t y_1^{1-t}, \dots, x_n^t y_n^{1-t}\big),$$
with
$$\gamma'(t) = \big(x_1^t y_1^{1-t}(\ln x_1 - \ln y_1), \dots, x_n^t y_n^{1-t}(\ln x_n - \ln y_n)\big).$$
Then the Bregman distance is
$$D_h(x, y) = h(x) - h(y) - \sum_{i=1}^n y_i\ln(x_i/y_i)\frac{\partial h(y)}{\partial y_i}.$$
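A short numerical illustration of Example 7.3 (our check, not from the paper): since $\pi(x) = (\ln x_i)_i$ is an isometry onto Euclidean space, the geodesic $\gamma(t) = (x_i^t y_i^{1-t})_i$ has constant speed in the Dikin metric, equal to $\|\ln x - \ln y\|$, which is therefore the Riemannian distance $d(x, y)$.

```python
# Sketch: constant-speed check for the geodesic of Example 7.3 on IR^n_++.
import numpy as np

x, y = np.array([1.0, 4.0]), np.array([2.0, 1.0])
gamma  = lambda t: x ** t * y ** (1 - t)
dgamma = lambda t: gamma(t) * (np.log(x) - np.log(y))     # gamma'(t)

# Speed in the metric <u, u>_z = sum u_i^2 / z_i^2 at z = gamma(t):
speed = lambda t: np.sqrt(np.sum((dgamma(t) / gamma(t)) ** 2))
print([round(speed(t), 12) for t in (0.0, 0.3, 0.7, 1.0)])  # constant
print(np.linalg.norm(np.log(x) - np.log(y)))                # same value
```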

Example 7.4 Let $M = (0,1)^n$. We will consider three metrics.

1. $(M, X^{-2}(I - X)^{-2})$ is a Hadamard manifold; it is isometric to $\mathbb{R}^n$ through the function $\pi(x) = \big(\ln\frac{x_1}{1-x_1}, \dots, \ln\frac{x_n}{1-x_n}\big)$. The unique geodesic (see Section 9) joining the points $\gamma(0) = y$ and $\gamma(1) = x$ is $\gamma(t) = (\gamma_1, \dots, \gamma_n)$ such that
$$\gamma_i(t) = \frac{1}{2}\left\{1 + \tanh\left[\frac{1}{2}\left(\ln\frac{x_i}{1-x_i} - \ln\frac{y_i}{1-y_i}\right)t + \frac{1}{2}\ln\frac{y_i}{1-y_i}\right]\right\},$$
with
$$\gamma_i'(t) = \frac{\ln\frac{x_i}{1-x_i} - \ln\frac{y_i}{1-y_i}}{4\cosh^2\left[\frac{1}{2}\left(\ln\frac{x_i}{1-x_i} - \ln\frac{y_i}{1-y_i}\right)t + \frac{1}{2}\ln\frac{y_i}{1-y_i}\right]}.$$
Then the Bregman distance is
$$D_h(x, y) = h(x) - h(y) - \sum_{i=1}^n \frac{\ln\frac{x_i}{1-x_i} - \ln\frac{y_i}{1-y_i}}{4\cosh^2\left(\frac{1}{2}\ln\frac{y_i}{1-y_i}\right)}\,\frac{\partial h(y)}{\partial y_i}.$$

2. $(M, \csc^4(\pi x))$ is a Hadamard manifold, isometric to $\mathbb{R}^n$ through the function $\pi(x) = \frac{1}{\pi}\big(\cot(\pi x_1), \dots, \cot(\pi x_n)\big)$. The unique geodesic (see Section 9) joining the points $\gamma(0) = y$ and $\gamma(1) = x$ is $\gamma(t) = (\gamma_1, \dots, \gamma_n)$ such that
$$\gamma_i(t) = \frac{1}{\pi}\operatorname{arccot}\big[(\cot(\pi x_i) - \cot(\pi y_i))t + \cot(\pi y_i)\big],$$
with
$$\gamma_i'(t) = \frac{1}{\pi}\big(\cot(\pi y_i) - \cot(\pi x_i)\big)\sin^2(\pi\gamma_i(t)),$$
and the Bregman distance is
$$D_h(x, y) = h(x) - h(y) - \sum_{i=1}^n \frac{1}{\pi}\big(\cot(\pi y_i) - \cot(\pi x_i)\big)\sin^2(\pi y_i)\,\frac{\partial h(y)}{\partial y_i}.$$

3. Finally, we consider $(M, \csc^2(\pi x))$. It is a Hadamard manifold isometric to $\mathbb{R}^n$; see [17]. The unique geodesic joining the points $\gamma(0) = y$ and $\gamma(1) = x$ is $\gamma(t) = (\gamma_1, \dots, \gamma_n)$ such that
$$\gamma_i(t) = \psi^{-1}\big(\psi(y_i) + t(\psi(x_i) - \psi(y_i))\big), \qquad \text{where } \psi(\tau) := \ln\big(\csc(\pi\tau) - \cot(\pi\tau)\big).$$
So
$$\gamma_i'(t) = \frac{1}{\pi}\ln\left(\frac{\csc(\pi x_i) - \cot(\pi x_i)}{\csc(\pi y_i) - \cot(\pi y_i)}\right)\sin(\pi\gamma_i(t)).$$
Therefore the Bregman distance is
$$D_h(x, y) = h(x) - h(y) - \frac{1}{\pi}\sum_{i=1}^n \ln\left(\frac{\csc(\pi x_i) - \cot(\pi x_i)}{\csc(\pi y_i) - \cot(\pi y_i)}\right)\sin(\pi y_i)\,\frac{\partial h(y)}{\partial y_i}.$$
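As a sanity check of the first metric of Example 7.4, the following sketch verifies numerically that the $\tanh$ geodesic above satisfies $\gamma(0) = y$ and $\gamma(1) = x$; the test points are arbitrary choices of ours.

```python
# Sketch: endpoint check for the geodesic of Example 7.4(1) on (0,1)^n.
import numpy as np

x, y = np.array([0.2, 0.9]), np.array([0.5, 0.4])
logit = lambda z: np.log(z / (1 - z))

def gamma(t):
    u = 0.5 * (logit(x) - logit(y)) * t + 0.5 * logit(y)
    return 0.5 * (1.0 + np.tanh(u))

print(gamma(0.0), y)   # gamma(0) equals y
print(gamma(1.0), x)   # gamma(1) equals x
```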

Example 7.5 $M = S^n_{++}$, the set of $n \times n$ symmetric positive definite matrices, with the metric given by the Hessian of $-\ln\det(X)$, is a Hadamard manifold with nonpositive curvature. The geodesic joining the points $\gamma(0) = Y$ and $\gamma(1) = X$ (see [17]) is given by
$$\gamma(t) = Y^{1/2}\big(Y^{-1/2}XY^{-1/2}\big)^t Y^{1/2},$$
with
$$\gamma'(t) = Y^{1/2}\ln\big(Y^{-1/2}XY^{-1/2}\big)\big(Y^{-1/2}XY^{-1/2}\big)^t Y^{1/2}.$$
Then the Bregman distance is
$$D_h(X, Y) = h(X) - h(Y) - \operatorname{tr}\Big[\nabla h(Y)\,Y^{1/2}\ln\big(Y^{-1/2}XY^{-1/2}\big)Y^{1/2}\Big].$$
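Example 7.5 can be explored numerically with scipy.linalg. The sketch below checks the endpoints of the geodesic and evaluates $D_h$ for the illustrative Bregman function $h(X) = \operatorname{tr}(X)$ (geodesically convex for this metric, with Euclidean gradient $\nabla h = I$); this is our test harness, not part of the paper.

```python
# Sketch of Example 7.5: geodesic endpoints and D_h for h(X) = tr(X).
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow, logm, sqrtm

X = np.array([[2.0, 0.5], [0.5, 1.0]])
Y = np.array([[1.0, 0.2], [0.2, 3.0]])

Ys = np.real(sqrtm(Y))
A = np.linalg.inv(Ys) @ X @ np.linalg.inv(Ys)      # Y^{-1/2} X Y^{-1/2}
gamma = lambda t: Ys @ mpow(A, t) @ Ys
print(np.allclose(gamma(0.0), Y), np.allclose(gamma(1.0), X))  # True True

exp_inv = Ys @ np.real(logm(A)) @ Ys               # exp_Y^{-1} X
D_h = np.trace(X) - np.trace(Y) - np.trace(exp_inv)   # nabla h(Y) = I
print(D_h)                                          # >= 0, 0 iff X == Y
```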

8 Conclusion and Future Work

We generalized the PBD algorithm to solve optimization problems defined on Riemannian manifolds. When the PBD algorithm works with $\lambda_k$ satisfying $\lim_{k\to+\infty}\lambda_k = 0$, with $\lambda_k > 0$, the sequence generated by the algorithm converges to an optimal solution of the problem without any assumption on the Bregman function. On the other hand, when $\lambda_k$ satisfies $0 < \lambda_k < \bar\lambda$, we use a class of Bregman functions that guarantees the convergence results. We are working to remove that assumption in the general framework of variational inequality problems on Riemannian manifolds.

9 Appendix: Geodesic Equation for a Special Diagonal Metric

In this appendix we give a simplified equation to obtain geodesic curves in closed form for a special class of diagonal Riemannian metrics defined on manifolds that are open subsets of $\mathbb{R}^n$. This approach recovers known metrics associated with some interior point algorithms, for example the Dikin metric and some metrics obtained from the Hessians of separable barrier functions. These results are extracted from [18].

Let $M = M_1\times M_2\times\cdots\times M_n$ be a manifold in $\mathbb{R}^n$, where the $M_i$ are open subsets of $\mathbb{R}$, $i = 1, \dots, n$. Given $x \in M$ and $u, v \in T_xM \equiv \mathbb{R}^n$, we induce on $M$ the metric $\langle u, v\rangle_x = u^TG(x)v$, where
$$G(x) = \operatorname{diag}\left(\frac{1}{p_1^2(x_1)}, \frac{1}{p_2^2(x_2)}, \dots, \frac{1}{p_n^2(x_n)}\right)$$
and the $p_i : M_i \to \mathbb{R}_{++}$ are differentiable functions. In this case we have $g_{ij}(x) = \delta_{ij}/\big(p_i(x_i)p_j(x_j)\big)$. Considering the Riemannian manifold $(M, G(x))$, we obtain the following results.

1. Christoffel symbols. We use the relation (2.5) between the metric and the Christoffel symbols. If $k \neq m$ then $g^{km} = 0$, and the expression reduces to
$$\Gamma^m_{ij} = \frac{1}{2}\left\{\frac{\partial}{\partial x_i}g_{jm} + \frac{\partial}{\partial x_j}g_{mi} - \frac{\partial}{\partial x_m}g_{ij}\right\}g^{mm}.$$
We consider two cases.

First case, $i = j$:
$$\Gamma^m_{ii} = \frac{1}{2}\left\{2\frac{\partial}{\partial x_i}g_{im} - \frac{\partial}{\partial x_m}g_{ii}\right\}g^{mm}.$$
If $m = i$ then
$$\Gamma^i_{ii} = -\frac{1}{p_i(x_i)}\frac{\partial p_i(x_i)}{\partial x_i};$$
otherwise, $\Gamma^m_{ii} = 0$.

Second case, $i \neq j$:
$$\Gamma^m_{ij} = \frac{1}{2}\left\{\frac{\partial}{\partial x_i}g_{jm} + \frac{\partial}{\partial x_j}g_{mi} - \frac{\partial}{\partial x_m}g_{ij}\right\}g^{mm}.$$
If $m = i$ then $m \neq j$ and $\Gamma^i_{ij} = 0$. If $m = j$ then $m \neq i$ and $\Gamma^j_{ij} = 0$. If $m \neq i$ and $m \neq j$ then $\Gamma^m_{ij} = 0$.

In both cases we have
$$\Gamma^m_{ij} = -\frac{1}{p_i(x_i)}\frac{\partial p_i(x_i)}{\partial x_i}\,\delta_{im}\delta_{ij}. \qquad (9.27)$$

2. Geodesic equation. Substituting the Christoffel symbols (9.27) into equation (2.4), the unique geodesic $\gamma$ starting from $x = (x_1, x_2, \dots, x_n) \in M$ with direction $v = (v_1, v_2, \dots, v_n) \in T_xM \equiv \mathbb{R}^n$ is obtained by solving the equation
$$\frac{d^2\gamma_i}{dt^2} - \frac{1}{p_i(\gamma_i)}\frac{\partial p_i(\gamma_i)}{\partial\gamma_i}\left(\frac{d\gamma_i}{dt}\right)^2 = 0, \qquad i = 1, \dots, n, \qquad (9.28)$$
with initial conditions $\gamma_i(0) = x_i$ and $\gamma_i'(0) = v_i$, $i = 1, \dots, n$. We can see that the above equation has the equivalent form
$$\frac{d}{dt}\left(\frac{\gamma_i'(t)}{p_i(\gamma_i(t))}\right) = 0.$$
Then the geodesic equation (9.28) can be solved through
$$\int\frac{d\gamma_i}{p_i(\gamma_i)} = a_i t + b_i, \qquad i = 1, \dots, n, \qquad (9.29)$$
where $a_i$ and $b_i$ are real constants determined by $\gamma_i(0) = x_i$ and $\gamma_i'(0) = v_i$. With equation (9.29) we can reproduce some known geodesic curves and obtain new ones. Next, we present some examples.

(a) Consider the Riemannian manifold $(\mathbb{R}^n_{++}, X^{-2})$, with $p_i(x_i) = x_i$. Using (9.29), the geodesic $\gamma(t) = (\gamma_1(t), \gamma_2(t), \dots, \gamma_n(t))$ such that $\gamma(0) = x$ and $\gamma'(0) = v$ is
$$\gamma_i(t) = x_i\exp\left(\frac{v_i}{x_i}t\right), \qquad i = 1, 2, \dots, n.$$
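A symbolic confirmation of example (a) with sympy: integrating $1/p_i$ for $p_i(x_i) = x_i$ in (9.29) gives $\ln\gamma_i$, the initial conditions fix the constants, and the final line checks that the closed form solves (9.28). A sketch under these assumptions:

```python
# Sketch: recovering Appendix example (a) from (9.29) with sympy.
import sympy as sp

t, x0, v0 = sp.symbols('t x0 v0', positive=True)
a, b = v0 / x0, sp.log(x0)            # from gamma(0) = x0, gamma'(0) = v0
gamma = sp.exp(a * t + b)             # invert log(gamma) = a*t + b
print(sp.simplify(gamma))             # x0*exp(v0*t/x0)

# Check it solves (9.28): gamma'' - (1/gamma)*(gamma')^2 = 0
residual = sp.diff(gamma, t, 2) - sp.diff(gamma, t) ** 2 / gamma
print(sp.simplify(residual))          # 0
```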

(b) Consider the manifold $((0,1)^n, \csc^4(\pi x))$, where $p_i(x_i) = \sin^2(\pi x_i)$. The geodesic $\gamma(t) = (\gamma_1(t), \gamma_2(t), \dots, \gamma_n(t))$ starting at $\gamma(0) = x$ in the direction $\gamma'(0) = v$ is
$$\gamma_i(t) = \frac{1}{\pi}\operatorname{arccot}\big(\cot(\pi x_i) - \pi\csc^2(\pi x_i)\,v_i\,t\big), \qquad i = 1, 2, \dots, n.$$
The geodesic is well defined for all $t \in \mathbb{R}$; therefore this manifold is complete, with null curvature. The Riemannian distance from $x = \gamma(0)$ to $y = \gamma(t_0)$, $t_0 > 0$, is given by
$$d(x, y) = \int_0^{t_0}\|\gamma'(t)\|\,dt = \frac{1}{\pi}\left\{\sum_{i=1}^n\big[\cot(\pi y_i) - \cot(\pi x_i)\big]^2\right\}^{1/2}.$$

(c) Consider $((0,1)^n, X^{-2}(I - X)^{-2})$, with $p_i(x_i) = x_i(1 - x_i)$. The geodesic $\gamma(t) = (\gamma_1(t), \gamma_2(t), \dots, \gamma_n(t))$ on this manifold such that $\gamma(0) = x$ and $\gamma'(0) = v$ is
$$\gamma_i(t) = \frac{1}{2}\left\{1 + \tanh\left(\frac{1}{2}\frac{v_i}{x_i(1-x_i)}t + \frac{1}{2}\ln\frac{x_i}{1-x_i}\right)\right\}, \qquad i = 1, 2, \dots, n,$$
where $\tanh(z) = (e^z - e^{-z})/(e^z + e^{-z})$ is the hyperbolic tangent function. The geodesic is well defined for all $t \in \mathbb{R}$; therefore this manifold is complete, with null curvature. The Riemannian distance from $x = \gamma(0)$ to $y = \gamma(t_0)$, $t_0 > 0$, is given by
$$d(x, y) = \int_0^{t_0}\|\gamma'(t)\|\,dt = \left\{\sum_{i=1}^n\left[\ln\left(\frac{y_i}{1-y_i}\right) - \ln\left(\frac{x_i}{1-x_i}\right)\right]^2\right\}^{1/2}.$$

The geodesic curves and Riemannian distances in (b) and (c) are, to our knowledge, new. Therefore, new explicit geodesics, metric variables and proximal point algorithms can be defined to solve nonconvex unconstrained optimization problems on $(0,1)^n$, whose convergence results are known for general metrics (see [7, 11, 12, 20]).
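For example (b), the closed-form geodesic can be checked against a direct numerical integration of (9.28); the sketch below does this for one coordinate, with arbitrary initial data of our choosing.

```python
# Sketch: closed-form geodesic of Appendix example (b) vs. numerical
# integration of (9.28) for p(x) = sin^2(pi*x) on (0,1).
import numpy as np
from scipy.integrate import solve_ivp

x0, v0 = 0.3, 0.2
p  = lambda x: np.sin(np.pi * x) ** 2
dp = lambda x: 2 * np.pi * np.sin(np.pi * x) * np.cos(np.pi * x)

# (9.28): x'' = (p'(x)/p(x)) * (x')^2
sol = solve_ivp(lambda t, s: [s[1], dp(s[0]) / p(s[0]) * s[1] ** 2],
                [0.0, 1.0], [x0, v0], rtol=1e-10, atol=1e-12)

# Closed form: cot(pi*gamma(t)) = cot(pi*x0) - pi*csc^2(pi*x0)*v0*t
cot = 1.0 / np.tan(np.pi * x0) - np.pi * v0 / np.sin(np.pi * x0) ** 2
closed = np.arctan2(1.0, cot) / np.pi      # arccot via arctan2(1, .)
print(sol.y[0, -1], closed)                # the two values agree
```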

References

[1] Burachik, R. S. and Iusem, A. (1998), A generalized proximal point algorithm for the variational inequality problem in a Hilbert space, SIAM Journal on Optimization, 8.

[2] Censor, Y. and Zenios, S. A. (1992), Proximal minimization algorithms with D-functions, Journal of Optimization Theory and Applications, 73, 3.

[3] Censor, Y. and Lent, A. (1981), An iterative row-action method for interval convex programming, Journal of Optimization Theory and Applications, 34, 3.

[4] Chen, G. and Teboulle, M. (1993), Convergence analysis of a proximal-like minimization algorithm using Bregman functions, SIAM Journal on Optimization, 3.

[5] Cunha, G. F. M., Pinto, A. M., Oliveira, P. R. and da Cruz Neto, J. X. (2005), Generalization of the primal and dual affine scaling algorithms, Technical Report ES-675/05, PESC/COPPE, Federal University of Rio de Janeiro.

[6] da Cruz Neto, J. X., Ferreira, O. P., Lucambio Pérez, L. and Németh, S. Z., Convex- and monotone-transformable mathematical programming and a proximal-like point method, to appear in Journal of Global Optimization.

[7] da Cruz Neto, J. X., Lima, L. L. and Oliveira, P. R. (1998), Geodesic algorithms in Riemannian geometry, Balkan Journal of Geometry and its Applications, 3, 2.

[8] Gabay, D. (1982), Minimizing a differentiable function over a differential manifold, Journal of Optimization Theory and Applications, 37, 2.

[9] den Hertog, D., Roos, C. and Terlaky, T. (1991), Inverse barrier methods for linear programming, Report 91-27, Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

[10] do Carmo, M. P. (1992), Riemannian Geometry, Birkhäuser, Boston.

[11] Ferreira, O. P. and Oliveira, P. R. (1998), Subgradient algorithm on Riemannian manifolds, Journal of Optimization Theory and Applications, 97, 1.

[12] Ferreira, O. P. and Oliveira, P. R. (2002), Proximal point algorithm on Riemannian manifolds, Optimization, 51, 2.

[13] Garciga, O. and Iusem, A. (2002), Proximal methods with penalization effects in Banach spaces, IMPA, Technical Report, Série A-119.

[14] Iusem, A. (1995), Métodos de Ponto Proximal em Otimização, Colóquio Brasileiro de Matemática, IMPA, Brazil.

[15] Kiwiel, K. C. (1997), Proximal minimization methods with generalized Bregman functions, SIAM Journal on Control and Optimization, 35.

[16] Luenberger, D. G. (1972), The gradient projection method along geodesics, Management Science, 18, 11.

[17] Nesterov, Y. E. and Todd, M. J. (2002), On the Riemannian geometry defined by self-concordant barriers and interior-point methods, Foundations of Computational Mathematics, 2.

[18] Papa Quiroz, E. A. and Oliveira, P. R. (2004), New results on linear optimization through diagonal metrics and Riemannian geometry tools, Technical Report ES-645/04, PESC/COPPE, Federal University of Rio de Janeiro.

[19] Pereira, G. J. and Oliveira, P. R. (2005), A new class of proximal algorithms for the nonlinear complementarity problem, in: Qi, L., Teo, K. and Yang, X. (eds.), Optimization and Control with Applications, 1st ed., 69.

[20] Rapcsák, T. (1997), Smooth Nonlinear Optimization, Kluwer Academic Publishers.

[21] Saigal, R. (1993), The primal power affine scaling method, Technical Report 93-21, Department of Industrial and Operations Engineering, University of Michigan.

[22] Sakai, T. (1996), Riemannian Geometry, American Mathematical Society, Providence, RI.

[23] Smith, S. T. (1994), Optimization techniques on Riemannian manifolds, Fields Institute Communications, AMS, Providence, RI, 3.

[24] Udriste, C. (1994), Convex Functions and Optimization Methods on Riemannian Manifolds, Kluwer Academic Publishers.


On the Convergence of the Entropy-Exponential Penalty Trajectories and Generalized Proximal Point Methods in Semidefinite Optimization On the Convergence of the Entropy-Exponential Penalty Trajectories and Generalized Proximal Point Methods in Semidefinite Optimization Ferreira, O. P. Oliveira, P. R. Silva, R. C. M. June 02, 2007 Abstract

More information

Local convexity on smooth manifolds 1,2,3. T. Rapcsák 4

Local convexity on smooth manifolds 1,2,3. T. Rapcsák 4 Local convexity on smooth manifolds 1,2,3 T. Rapcsák 4 1 The paper is dedicated to the memory of Guido Stampacchia. 2 The author wishes to thank F. Giannessi for the advice. 3 Research was supported in

More information

DIFFERENTIAL GEOMETRY. LECTURE 12-13,

DIFFERENTIAL GEOMETRY. LECTURE 12-13, DIFFERENTIAL GEOMETRY. LECTURE 12-13, 3.07.08 5. Riemannian metrics. Examples. Connections 5.1. Length of a curve. Let γ : [a, b] R n be a parametried curve. Its length can be calculated as the limit of

More information

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE Fixed Point Theory, Volume 6, No. 1, 2005, 59-69 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.htm WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE YASUNORI KIMURA Department

More information

FIXED POINT METHODS IN NONLINEAR ANALYSIS

FIXED POINT METHODS IN NONLINEAR ANALYSIS FIXED POINT METHODS IN NONLINEAR ANALYSIS ZACHARY SMITH Abstract. In this paper we present a selection of fixed point theorems with applications in nonlinear analysis. We begin with the Banach fixed point

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

A brief introduction to Semi-Riemannian geometry and general relativity. Hans Ringström

A brief introduction to Semi-Riemannian geometry and general relativity. Hans Ringström A brief introduction to Semi-Riemannian geometry and general relativity Hans Ringström May 5, 2015 2 Contents 1 Scalar product spaces 1 1.1 Scalar products...................................... 1 1.2 Orthonormal

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

INVERSE FUNCTION THEOREM and SURFACES IN R n

INVERSE FUNCTION THEOREM and SURFACES IN R n INVERSE FUNCTION THEOREM and SURFACES IN R n Let f C k (U; R n ), with U R n open. Assume df(a) GL(R n ), where a U. The Inverse Function Theorem says there is an open neighborhood V U of a in R n so that

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

Mathematical Analysis Outline. William G. Faris

Mathematical Analysis Outline. William G. Faris Mathematical Analysis Outline William G. Faris January 8, 2007 2 Chapter 1 Metric spaces and continuous maps 1.1 Metric spaces A metric space is a set X together with a real distance function (x, x ) d(x,

More information

Choice of Riemannian Metrics for Rigid Body Kinematics

Choice of Riemannian Metrics for Rigid Body Kinematics Choice of Riemannian Metrics for Rigid Body Kinematics Miloš Žefran1, Vijay Kumar 1 and Christopher Croke 2 1 General Robotics and Active Sensory Perception (GRASP) Laboratory 2 Department of Mathematics

More information

A proximal point algorithm for DC functions on Hadamard manifolds

A proximal point algorithm for DC functions on Hadamard manifolds Noname manuscript No. (will be inserted by the editor) A proimal point algorithm for DC functions on Hadamard manifolds J.C.O. Souza P.R. Oliveira Received: date / Accepted: date Abstract An etension of

More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

June 21, Peking University. Dual Connections. Zhengchao Wan. Overview. Duality of connections. Divergence: general contrast functions

June 21, Peking University. Dual Connections. Zhengchao Wan. Overview. Duality of connections. Divergence: general contrast functions Dual Peking University June 21, 2016 Divergences: Riemannian connection Let M be a manifold on which there is given a Riemannian metric g =,. A connection satisfying Z X, Y = Z X, Y + X, Z Y (1) for all

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Monotone and Accretive Vector Fields on Riemannian Manifolds

Monotone and Accretive Vector Fields on Riemannian Manifolds Monotone and Accretive Vector Fields on Riemannian Manifolds J.H. Wang, 1 G. López, 2 V. Martín-Márquez 3 and C. Li 4 Communicated by J.C. Yao 1 Corresponding author, Department of Applied Mathematics,

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Course Summary Math 211

Course Summary Math 211 Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.

More information

1. Gradient method. gradient method, first-order methods. quadratic bounds on convex functions. analysis of gradient method

1. Gradient method. gradient method, first-order methods. quadratic bounds on convex functions. analysis of gradient method L. Vandenberghe EE236C (Spring 2016) 1. Gradient method gradient method, first-order methods quadratic bounds on convex functions analysis of gradient method 1-1 Approximate course outline First-order

More information

Loos Symmetric Cones. Jimmie Lawson Louisiana State University. July, 2018

Loos Symmetric Cones. Jimmie Lawson Louisiana State University. July, 2018 Louisiana State University July, 2018 Dedication I would like to dedicate this talk to Joachim Hilgert, whose 60th birthday we celebrate at this conference and with whom I researched and wrote a big blue

More information

ON TWO-DIMENSIONAL MINIMAL FILLINGS. S. V. Ivanov

ON TWO-DIMENSIONAL MINIMAL FILLINGS. S. V. Ivanov ON TWO-DIMENSIONAL MINIMAL FILLINGS S. V. Ivanov Abstract. We consider Riemannian metrics in the two-dimensional disk D (with boundary). We prove that, if a metric g 0 is such that every two interior points

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

1 First and second variational formulas for area

1 First and second variational formulas for area 1 First and second variational formulas for area In this chapter, we will derive the first and second variational formulas for the area of a submanifold. This will be useful in our later discussion on

More information

William P. Thurston. The Geometry and Topology of Three-Manifolds

William P. Thurston. The Geometry and Topology of Three-Manifolds William P. Thurston The Geometry and Topology of Three-Manifolds Electronic version 1.1 - March 00 http://www.msri.org/publications/books/gt3m/ This is an electronic edition of the 1980 notes distributed

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

Strictly convex functions on complete Finsler manifolds

Strictly convex functions on complete Finsler manifolds Proc. Indian Acad. Sci. (Math. Sci.) Vol. 126, No. 4, November 2016, pp. 623 627. DOI 10.1007/s12044-016-0307-2 Strictly convex functions on complete Finsler manifolds YOE ITOKAWA 1, KATSUHIRO SHIOHAMA

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Hadamard s Theorem. Rich Schwartz. September 10, The purpose of these notes is to prove the following theorem.

Hadamard s Theorem. Rich Schwartz. September 10, The purpose of these notes is to prove the following theorem. Hadamard s Theorem Rich Schwartz September 10, 013 1 The Result and Proof Outline The purpose of these notes is to prove the following theorem. Theorem 1.1 (Hadamard) Let M 1 and M be simply connected,

More information

Homework for Math , Spring 2012

Homework for Math , Spring 2012 Homework for Math 6170 1, Spring 2012 Andres Treibergs, Instructor April 24, 2012 Our main text this semester is Isaac Chavel, Riemannian Geometry: A Modern Introduction, 2nd. ed., Cambridge, 2006. Please

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

THE INVERSE FUNCTION THEOREM FOR LIPSCHITZ MAPS

THE INVERSE FUNCTION THEOREM FOR LIPSCHITZ MAPS THE INVERSE FUNCTION THEOREM FOR LIPSCHITZ MAPS RALPH HOWARD DEPARTMENT OF MATHEMATICS UNIVERSITY OF SOUTH CAROLINA COLUMBIA, S.C. 29208, USA HOWARD@MATH.SC.EDU Abstract. This is an edited version of a

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1

An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1 An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1 Nobuo Yamashita 2, Christian Kanzow 3, Tomoyui Morimoto 2, and Masao Fuushima 2 2 Department of Applied

More information

A projection-type method for generalized variational inequalities with dual solutions

A projection-type method for generalized variational inequalities with dual solutions Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

The nonsmooth Newton method on Riemannian manifolds

The nonsmooth Newton method on Riemannian manifolds The nonsmooth Newton method on Riemannian manifolds C. Lageman, U. Helmke, J.H. Manton 1 Introduction Solving nonlinear equations in Euclidean space is a frequently occurring problem in optimization and

More information

1. Geometry of the unit tangent bundle

1. Geometry of the unit tangent bundle 1 1. Geometry of the unit tangent bundle The main reference for this section is [8]. In the following, we consider (M, g) an n-dimensional smooth manifold endowed with a Riemannian metric g. 1.1. Notations

More information

On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean

On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean Renato D.C. Monteiro B. F. Svaiter March 17, 2009 Abstract In this paper we analyze the iteration-complexity

More information

PREISSMAN S THEOREM ALEXANDER GRANT

PREISSMAN S THEOREM ALEXANDER GRANT PREISSMAN S THEOREM ALEXANDER GRANT Abstract. This paper presents a proof of Preissman s Theorem, a theorem from the study of Riemannian Geometry that imposes restrictions on the fundamental group of compact,

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

An inexact subgradient algorithm for Equilibrium Problems

An inexact subgradient algorithm for Equilibrium Problems Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

PICARD S THEOREM STEFAN FRIEDL

PICARD S THEOREM STEFAN FRIEDL PICARD S THEOREM STEFAN FRIEDL Abstract. We give a summary for the proof of Picard s Theorem. The proof is for the most part an excerpt of [F]. 1. Introduction Definition. Let U C be an open subset. A

More information

Convergence of inexact descent methods for nonconvex optimization on Riemannian manifolds.

Convergence of inexact descent methods for nonconvex optimization on Riemannian manifolds. arxiv:1103.4828v1 [math.na] 24 Mar 2011 Convergence of inexact descent methods for nonconvex optimization on Riemannian manifolds. G. C. Bento J. X. da Cruz Neto P. R. Oliveira March 12, 2018 Abstract

More information

Chapter 4. Inverse Function Theorem. 4.1 The Inverse Function Theorem

Chapter 4. Inverse Function Theorem. 4.1 The Inverse Function Theorem Chapter 4 Inverse Function Theorem d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d dd d d d d This chapter

More information

Vectors. January 13, 2013

Vectors. January 13, 2013 Vectors January 13, 2013 The simplest tensors are scalars, which are the measurable quantities of a theory, left invariant by symmetry transformations. By far the most common non-scalars are the vectors,

More information