A semi-algebraic look at first-order methods


1 A semi-algebraic look at first-order methods. Université de Toulouse / TSE. Nesterov's 60th birthday, Les Houches, 2016.

2 Splitting in large-scale first-order optimization. Start with a reasonable first-order method (FOM): some splitting method, gradient projection method, alternating projection method, Lagrangian method... Criticality / necessary optimality conditions (a discrete LaSalle invariance principle, called Zangwill's theorem)? But what about convergence guarantees for the iterates? Decrease rates for the iterates/values? Answer: well-designed notions of piecewise smoothness? Does not work!

3 Smooth counter-examples: Palis-De Melo, an extension by Absil et al. There exists f : R^2 → R, C^∞, coercive and nonnegative, with f^{-1}({0}) = argmin f = the unit disk, and a bounded gradient sequence with constant step s > 0 (as small as we want),
x_{k+1} = x_k − s ∇f(x_k),
such that f(x_k) → min f = 0 and ∇f(x_k) → 0, but with this awful property: the set of limit points of x_k is the unit circle.

4 Smooth counter-examples: Palis-De Melo, an extension by Absil et al. Easier to understand with the continuous dynamics ẋ(t) = −∇f(x(t)). Idea: a spiraling bump yields spiraling gradient curves. A function that yields a similar behavior (in polar coordinates):
f(r, θ) = exp(1/(r² − 1)) ( 1 − (4r⁴)/(4r⁴ + (1 − r²)⁴) sin(θ − 1/(1 − r²)) ).

5 Source of counter-examples: oscillations. Oscillations in optimization: oscillating gradient directions are very bad predictors, giving arbitrarily bad rates/complexity for any fixed dimension... Even worse behaviors occur for nonsmooth functions, and the same awful behaviors appear for more complex methods, e.g. forward-backward splitting.

6 Solutions to the oscillation issue? Alternative framework? Solutions to these bad behaviors? Topological approach: avoid wild oscillations, use semi-algebraic (or definable) functions. Too vague!! Ad hoc approach: study deformations of a class of nonsmooth sharp functions. Too specific??

7 Semi-algebraic objects. Defined by finitely many polynomial equalities and inequalities. Easy to recognize (Tarski-Seidenberg, quantifier elimination). Very stable (image/pre-image, derivation, composition, subdifferentiation...). Oscillations are controlled: monotonicity lemma / finiteness of the number of connected components. Take A ⊂ R^p × R^n semi-algebraic and set A_x = {y ∈ R^n : (x, y) ∈ A}. Then there exists N ∈ N such that the number of connected components satisfies cc(A_x) ≤ N for all x ∈ R^p.

8 Concrete examples. Polynomials: (1/2)‖Ax − b‖², (A, B) ↦ (1/2)‖AB − M‖²; max or min of polynomials; the rank function; l_p norms (p rational, or p = ∞, or the l_0 "norm"); standard cones: R^n_+, the Lorentz cone, the SDP cone... What about non-semi-algebraic problems? Analytic functions, e.g. log det; the l_p norm with p arbitrary... A similar theory holds (van den Dries / Shiota): global subanalyticity, Dirichlet series, the log-exp structure...

9 Fréchet subdifferential. Let f : R^n → R ∪ {+∞} be lsc and proper. First-order formulation: let x ∈ dom f. (Fréchet) subdifferential: p ∈ ∂̂f(x) iff
f(u) ≥ f(x) + ⟨p, u − x⟩ + o(‖u − x‖), for all u ∈ R^n.
(Figure: the graph of u ↦ f(x) + ⟨p, u − x⟩ locally lies below the graph of u ↦ f(u).)

10 Subdifferential. Definition (Mordukhovich 76): the (limiting) subdifferential, denoted ∂f, is defined through: x* ∈ ∂f(x) iff there exist (x_k, x_k*) → (x, x*) such that f(x_k) → f(x) and
f(u) ≥ f(x_k) + ⟨x_k*, u − x_k⟩ + o(‖u − x_k‖).
Example: f(x) = ‖x‖₁, ∂f(0) = B_∞ = [−1, 1]^n. Set ‖∂f(x)‖_− := min{‖x*‖ : x* ∈ ∂f(x)}. Properties (critical point): Fermat's rule: if f has a minimizer at x then 0 ∈ ∂f(x). Conversely, when 0 ∈ ∂f(x), the point x is called critical.

11 An elementary remedy to gradient oscillation: sharpness. A function f : R^n → R ∪ {+∞} is called sharp on the slice [r₀ < f < r₁] := {x ∈ R^n : r₀ < f(x) < r₁} if there exists c > 0 such that ‖∂f(x)‖_− ≥ c > 0 for all x ∈ [r₀ < f < r₁]. Basic example: f(x) = ‖x‖. Many works since '78: Rockafellar, Polyak, Ferris, Burke and many, many others...

12 Sharpness: an example. Nonconvex illustration with a continuum of minimizers (figure).

13 Finite convergence. Why is sharpness a remedy? Slopes towards the minimizers frankly overcome the other, parasitic slopes: gradient curves reach the valley within finite time. The phenomenon is easy to see on proximal descent. Prox descent = formal setting for the implicit gradient step x⁺ ∈ x − step · ∂f(x⁺). Prox operator (Moreau): prox_{s f}(x) = argmin{ f(u) + (1/(2s))‖u − x‖² : u ∈ R^n }, so that x⁺ = prox_{step · f}(x). When f is sharp, convergence occurs in finite time!
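A minimal numerical sketch of this finite-convergence phenomenon (our addition, not from the slides): proximal descent on the sharp function f(x) = ‖x‖, whose prox map has the closed form prox_{s‖·‖}(x) = (1 − s/‖x‖)₊ x. Each iteration shrinks the norm by exactly the step until the iterate hits 0 exactly.

```python
import numpy as np

def prox_norm(x, s):
    """Proximal operator of s * ||.||_2 (block soft-thresholding)."""
    nx = np.linalg.norm(x)
    return np.zeros_like(x) if nx <= s else (1.0 - s / nx) * x

x = np.array([3.0, -1.5])   # arbitrary starting point
step = 0.5
for k in range(20):
    x = prox_norm(x, step)
    print(k + 1, x, np.linalg.norm(x))
    if np.linalg.norm(x) == 0.0:   # exact zero reached: finite convergence
        break
```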

14 Proof. Assume f is sharp; then, as long as x_{k+1} is non-critical, f(x_{k+1}) ≤ f(x_k) − δ. Write (assuming the step is constant):
f(x_{k+1}) ≤ f(x_k) − ‖x_{k+1} − x_k‖²,
x_{k+1} − x_k ∈ −step · ∂f(x_{k+1}),
thus ‖x_{k+1} − x_k‖ ≥ c · step and f(x_{k+1}) ≤ f(x_k) − (c · step)². Set δ = (c · step)².

15 Measuring the default of sharpness. Assume 0 is a critical value of f (true up to a translation). Set [0 < f < r₀] := {x ∈ R^n : 0 < f(x) < r₀} and assume that there are no critical points in [0 < f < r₀]. f has the KL property on [0 < f < r₀] if there exists a function g : [0 < f < r₀] → R ∪ {+∞} whose sublevel sets are those of f, i.e. the sets [f ≤ r] for r ∈ (0, r₀), and such that ‖∂g(x)‖_− ≥ 1 for all x in [0 < f < r₀]. Example (concentric ellipsoids): f(x) = (1/2)⟨Ax, x⟩, g(x) = √(f(x)).

16 Formal KL property. Desingularizing function on (0, r₀): ϕ ∈ C([0, r₀), R₊), concave, ϕ ∈ C¹(0, r₀), ϕ′ > 0, ϕ(0) = 0. Definition: f has the KL property on [0 < f < r₀] if there exists a desingularizing function ϕ such that
‖∂(ϕ ∘ f)(x)‖_− ≥ 1, for all x ∈ [0 < f < r₀].
Local version: replace [0 < f < r₀] by the intersection of [0 < f < r₀] with a closed ball.
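A worked one-dimensional example may help fix the definition (our illustration, not on the slide): for f(x) = |x|^p with p > 1, the concave power ϕ(s) = s^{1/p} is desingularizing near the critical value 0.

```latex
% Example (our addition): f(x) = |x|^p, p > 1, on a slice [0 < f < r_0].
\[
(\varphi\circ f)(x) = \big(|x|^{p}\big)^{1/p} = |x|,
\qquad
\|\nabla(\varphi\circ f)(x)\| = 1 \quad \text{for } x \neq 0,
\]
% so \varphi(s) = s^{1/p} desingularizes f: composing with \varphi restores a
% uniformly sharp (unit-slope) function, even though \|\nabla f(x)\| = p|x|^{p-1} -> 0.
```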

17 Some results in the smooth world. Theorem [Łojasiewicz 1963 / Kurdyka 98]: if f is analytic / o-minimal, it is KL with ϕ(s) = K s^{1−θ}. Gradient-dominated functions of Polyak '63: for f convex,
‖∇f(x)‖² ≥ K (f(x) − min f),
i.e. KL with ϕ(s) = (2/√K) √s. Many of the ideas were already present... Y. Ol Khoovsky (1972) analyzed the gradient method (Absil, Mahony, Andrews 05 obtained similar results independently). In 2006, Polyak & Nesterov introduced ‖∇f(x)‖^p ≥ K (f(x) − min f) for complexity purposes of second/third-order methods. This is exactly the Łojasiewicz inequality when p > 1.
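A one-line check of the claimed equivalence (our addition), writing min f = 0 and using the smooth chain rule:

```latex
% With min f = 0 and \varphi(s) = \tfrac{2}{\sqrt{K}}\sqrt{s}:
\[
\|\nabla(\varphi\circ f)(x)\|
  = \varphi'\!\big(f(x)\big)\,\|\nabla f(x)\|
  = \frac{\|\nabla f(x)\|}{\sqrt{K\,f(x)}}
  \;\ge\; 1
\quad\Longleftrightarrow\quad
\|\nabla f(x)\|^{2} \ge K\, f(x),
\]
% so Polyak's gradient-domination inequality is exactly the KL inequality
% with a square-root desingularizing function.
```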

18 Going beyond smoothness/analyticity? Theorem (B-Daniilidis-Lewis 2006), nonsmooth semialgebraic/subanalytic case: take f : R^n → R ∪ {+∞} lower semicontinuous and semialgebraic; then f has the KL property around each point. Many, many functions satisfy the KL inequality; see B-Daniilidis-Lewis-Shiota 2007, B-Daniilidis-Ley-Mazet 2010. In optimization see also Attouch-B-Redont 2010, B-Sabach-Teboulle.

19 Brief technical comments on KL. The ingredients: a nonsmooth Sard theorem (finiteness of critical values); amenability to sharpness is equivalent to the existence of a talweg ("path in the valley") of finite length, a result from B-Daniilidis-Ley-Mazet. In the semi-algebraic world, desingularizing functions are of the form ϕ(s) = c s^{1−θ}: this is the Puiseux lemma. How is all this used?

20 Descent at large (à la P. Tseng). Let f : R^n → R ∪ {+∞} be a proper lower semicontinuous function and a, b > 0. Let x_k be a sequence in dom f such that: Sufficient decrease condition: f(x_{k+1}) + a ‖x_{k+1} − x_k‖² ≤ f(x_k) for all k ≥ 0. Relative error condition: for each k ∈ N, there exists w_{k+1} ∈ ∂f(x_{k+1}) such that ‖w_{k+1}‖ ≤ b ‖x_{k+1} − x_k‖.

21 Convergence theorem. Theorem (Attouch-B-Svaiter 2012 / B-Sabach-Teboulle 2013): let f be a KL function and x_k a descent subgradient sequence for f. If x_k is bounded, then it converges to a critical point of f. Corollary: let f be a coercive semi-algebraic function and x_k a descent sequence for f. Then x_k converges to a critical point of f.

22 Rate of convergence. Assume that ϕ(s) = c s^{1−θ} with c > 0, θ ∈ [0, 1). Theorem:
(i) if θ ∈ (0, 1/2], then there exist d > 0 and Q ∈ [0, 1) such that ‖x_k − x_∞‖ ≤ d Q^k;
(ii) if θ ∈ (1/2, 1), then there exist c₁, c₂ > 0 such that ‖x_k − x_∞‖ ≤ c₁ k^{−(1−θ)/(2θ−1)} and f(x_k) − f(x_∞) ≤ c₂ k^{−1/(2θ−1)}.
The convergence rate degrades as θ, the flatness degree, increases toward 1; the iterate rate is o(1/k) as soon as θ < 2/3.
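A toy numerical sanity check of case (ii) (our addition, assuming the plain gradient method with a small constant step qualifies as a descent sequence here): for f(x) = x⁴ the Łojasiewicz exponent is θ = 3/4, so the theorem predicts |x_k| ≈ k^{-1/2} and f(x_k) ≈ k^{-2}, with x_∞ = 0.

```python
import math

# Gradient descent on f(x) = x**4 (theta = 3/4, minimizer x* = 0).
f = lambda x: x**4
grad = lambda x: 4 * x**3

x, step = 1.0, 0.05
for k in range(1, 100001):
    x -= step * grad(x)
    if k in (10**2, 10**3, 10**4, 10**5):
        # |x_k| * sqrt(k) and f(x_k) * k**2 should stabilize to constants,
        # matching the predicted k**(-1/2) and k**(-2) rates.
        print(k, abs(x) * math.sqrt(k), f(x) * k**2)
```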

23 Forward-backward splitting algorithm. From now on, all objects are assumed to be semi-algebraic (SA). Minimizing a nonsmooth + smooth structure: f = g + h with h ∈ C¹ and ∇h L-Lipschitz continuous, g lsc, bounded from below, and with an easily computable prox. Forward-backward splitting (Lions-Mercier 79): let γ_k be such that 0 < γ⁻ < γ_k < γ⁺ < 1/L and
x_{k+1} ∈ prox_{γ_k g}(x_k − γ_k ∇h(x_k)).
Theorem: if the problem is coercive, x_k is a converging sequence.
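A minimal sketch of the scheme for the standard instance g(x) = λ‖x‖₁, h(x) = (1/2)‖Ax − b‖² (λ, A, b below are illustrative placeholders, not from the talk); both pieces are semi-algebraic and the prox of γg is componentwise soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """prox of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((30, 100)), rng.standard_normal(30), 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad h
gamma = 0.9 / L                        # step in (0, 1/L)

x = np.zeros(100)
for k in range(500):
    grad_h = A.T @ (A @ x - b)                              # forward (gradient) step
    x = soft_threshold(x - gamma * grad_h, gamma * lam)     # backward (prox) step
```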

24 Gradient projection. C a semi-algebraic set, f a semi-algebraic function:
x_{k+1} ∈ P_C(x_k − γ_k ∇f(x_k)).
Theorem: if the sequence x_k is bounded, it converges to a critical point of the problem min_C f. Example: inverse problems with sparsity constraints,
min { (1/2)‖Ax − b‖² : x ∈ C },
with C a sparsity constraint / rank constraint / simple constraints.
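A sketch of the sparsity-constrained example with C = {x : ‖x‖₀ ≤ s} (data and s are our illustrative choices): the Euclidean projection onto C keeps the s largest entries in magnitude, so projected gradient here is iterative hard thresholding.

```python
import numpy as np

def project_sparse(v, s):
    """Euclidean projection onto {x : ||x||_0 <= s}: keep the s largest entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
A, b, s = rng.standard_normal((40, 80)), rng.standard_normal(40), 5
gamma = 0.9 / np.linalg.norm(A, 2) ** 2    # step below 1/L for the quadratic part

x = np.zeros(80)
for k in range(300):
    x = project_sparse(x - gamma * A.T @ (A @ x - b), s)
```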

25 von Neumann alternating projections? Assumptions: C semi-algebraic closed, D semi-algebraic convex. Example: D = Hankel matrices, C = matrices of rank lower than r. Scheme: x_{k+1} ∈ P_C(y_k), y_{k+1} = P_D(x_{k+1}). Careful: oscillations are possible (take a circle for C and its center for D = {center}). A solution: under-relax. Theorem: bounded sequences of the form
x_{k+1} ∈ P_C( ε x_k + (1 − ε) P_D(x_k) )
are converging. More subtle results by Noll-Rondepierre, in the same category, are based on a nonsmooth KL inequality.
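A sketch of the under-relaxed scheme on the slide's example (D = Hankel matrices, C = matrices of rank at most r); the matrix size, ε and r below are our illustrative choices. The projection onto D averages along anti-diagonals, the projection onto C is a truncated SVD.

```python
import numpy as np

def proj_rank(X, r):
    """Projection onto matrices of rank <= r (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return U @ np.diag(s) @ Vt

def proj_hankel(X):
    """Projection onto Hankel matrices: average along each anti-diagonal."""
    m, n = X.shape
    H = np.zeros_like(X)
    for k in range(m + n - 1):
        idx = [(i, k - i) for i in range(max(0, k - n + 1), min(m, k + 1))]
        avg = np.mean([X[i, j] for i, j in idx])
        for i, j in idx:
            H[i, j] = avg
    return H

rng = np.random.default_rng(2)
X, eps, r = rng.standard_normal((8, 8)), 0.3, 2
for k in range(200):
    # under-relaxed step: x_{k+1} in P_C( eps*x_k + (1-eps)*P_D(x_k) )
    X = proj_rank(eps * X + (1 - eps) * proj_hankel(X), r)
```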

26 Averaged projections with under-relaxation.
x_{k+1} ∈ (1 − θ) x_k + θ (1/p) Σ_{i=1}^p P_{F_i}(x_k), θ ∈ ]0, 1[.
Theorem (averaged projection method): let F_1, ..., F_p be closed semi-algebraic sets which satisfy ∩_{i=1}^p F_i ≠ ∅. If x_0 is sufficiently close to ∩_{i=1}^p F_i, then x_k converges to a feasible point x̄, i.e. such that x̄ ∈ ∩_{i=1}^p F_i.

27 Alternating versions of the prox algorithm. F : R^{m_1} × ... × R^{m_p} → R ∪ {+∞} lsc, semi-algebraic. Structure of F:
F(x_1, ..., x_p) = Σ_{i=1}^p f_i(x_i) + Q(x_1, ..., x_p),
where the f_i are proper lsc and Q is C¹. Gauss-Seidel method / prox (Auslender 1993):
x_1^{k+1} ∈ argmin { F(u, x_2^k, ..., x_p^k) + (1/(2μ_1)) ‖u − x_1^k‖² : u ∈ R^{m_1} }
...
x_p^{k+1} ∈ argmin { F(x_1^{k+1}, ..., x_{p−1}^{k+1}, u) + (1/(2μ_p)) ‖u − x_p^k‖² : u ∈ R^{m_p} }

29 Proximal Gauss-Seidel.
F(x_1, ..., x_p) = Σ_{i=1}^p f_i(x_i) + Q(x_1, ..., x_p),
the f_i proper lsc, Q C¹.
x_1^{k+1} ∈ argmin { f_1(u) + Q(u, x_2^k, ..., x_p^k) + (1/(2μ_1)) ‖u − x_1^k‖² : u ∈ R^{m_1} }
...
x_p^{k+1} ∈ argmin { f_p(u) + Q(x_1^{k+1}, ..., x_{p−1}^{k+1}, u) + (1/(2μ_p)) ‖u − x_p^k‖² : u ∈ R^{m_p} }
Theorem: bounded sequences of the proximal Gauss-Seidel method converge.

30 Proximal alternating linearized method: PALM (B-Sabach-Teboulle 14). F(x) = Q(x_1, x_2) + f_1(x_1) + f_2(x_2). Linearization idea:
x_1^{k+1} ∈ argmin { f_1(u) + ⟨∇_{x_1}Q(x_1^k, x_2^k), u − x_1^k⟩ + (μ_1^k/2) ‖u − x_1^k‖² : u ∈ R^{m_1} }
x_2^{k+1} ∈ argmin { f_2(u) + ⟨∇_{x_2}Q(x_1^{k+1}, x_2^k), u − x_2^k⟩ + (μ_2^k/2) ‖u − x_2^k‖² : u ∈ R^{m_2} }
that is,
x_1^{k+1} ∈ prox_{(1/μ_1^k) f_1}( x_1^k − (1/μ_1^k) ∇_{x_1}Q(x_1^k, x_2^k) ),
x_2^{k+1} ∈ prox_{(1/μ_2^k) f_2}( x_2^k − (1/μ_2^k) ∇_{x_2}Q(x_1^{k+1}, x_2^k) ).
Good choice of steps?

31 Proximal alternating linearized method. For each x_1, Q(x_1, ·) is C¹ with L_2(x_1) > 0 Lipschitz continuous gradient (and symmetrically for each x_2 with modulus L_1(x_2)). The steps are
x_1^{k+1} ∈ prox_{(1/L_1(x_2^k)) f_1}( x_1^k − (1/L_1(x_2^k)) ∇_{x_1}Q(x_1^k, x_2^k) ),
x_2^{k+1} ∈ prox_{(1/L_2(x_1^{k+1})) f_2}( x_2^k − (1/L_2(x_1^{k+1})) ∇_{x_2}Q(x_1^{k+1}, x_2^k) ).
Example: several applications in sparse NMF, blind deconvolution (Pesquet, Repetti...), dictionary learning (Gribonval, Malgouyres...). General structure: M a fixed matrix, r, s integers,
min { (1/2)‖AB − M‖² : ‖A‖_0 ≤ r, ‖B‖_0 ≤ s }.
Theorem: bounded sequences of PALM converge. A code sketch follows below.
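A minimal PALM sketch for the sparse factorization model on this slide (matrix sizes, the sparsity budgets r, s and the step-inflation factor γ are our illustrative choices, not from the talk). The prox of each l_0 constraint is hard thresholding, and the block Lipschitz moduli of ∇Q are ‖B‖²₂ and ‖A‖²₂.

```python
import numpy as np

def hard_threshold(X, k):
    """prox of the indicator of {||X||_0 <= k}: keep the k largest-magnitude entries."""
    out = np.zeros_like(X)
    idx = np.argsort(np.abs(X), axis=None)[-k:]
    out.flat[idx] = X.flat[idx]
    return out

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 30))
A, B = rng.standard_normal((20, 5)), rng.standard_normal((5, 30))
r, s, gamma = 40, 60, 1.1              # sparsity budgets, step inflation gamma > 1

for k in range(500):
    c = gamma * max(np.linalg.norm(B, 2) ** 2, 1e-8)   # Lipschitz modulus of grad_A Q
    A = hard_threshold(A - (1.0 / c) * (A @ B - M) @ B.T, r)
    d = gamma * max(np.linalg.norm(A, 2) ** 2, 1e-8)   # modulus of grad_B Q (updated A)
    B = hard_threshold(B - (1.0 / d) * A.T @ (A @ B - M), s)
```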

32 Going further in this line? Complex problems à la SQP (with Pauwels). Lagrangian methods (with Sabach-Teboulle). Alternating methods have a different nature (with Pauwels-Ngambou), but there are works by Li, Noll-Rondepierre, Drusvyatskiy-Ioffe-Lewis... Fast methods???

33 A flavour of these complications: a simple SQP/SCQP method. We wish to solve min{ f(x) : x ∈ R^n, f_i(x) ≤ 0 }, which can be written min f + i_C with C = [f_i ≤ 0, for all i]. We assume: ∇f is L_f-Lipschitz and, for each i, ∇f_i is L_{f_i}-Lipschitz continuous. Prox operators here are out of reach: too complex!!

34 Moving balls. Classical Sequential Quadratic Programming (SQP) idea: replace the functions by simple approximations. Moving balls method (Auslender-Shefi-Teboulle, Math. Prog., 2010):
min_{x ∈ R^n} f(x_k) + ⟨∇f(x_k), x − x_k⟩ + (L_f/2) ‖x − x_k‖²
s.t. f_1(x_k) + ⟨∇f_1(x_k), x − x_k⟩ + (L_{f_1}/2) ‖x − x_k‖² ≤ 0
...
f_m(x_k) + ⟨∇f_m(x_k), x − x_k⟩ + (L_{f_m}/2) ‖x − x_k‖² ≤ 0
Bad surprise: x_k is not a descent sequence for f + i_C.

35 Moving balls. Introduce the value function
val(x) = min_{y ∈ R^n} f(x) + ⟨∇f(x), y − x⟩ + (L_f/2) ‖y − x‖²
s.t. f_1(x) + ⟨∇f_1(x), y − x⟩ + (L_{f_1}/2) ‖y − x‖² ≤ 0
...
f_m(x) + ⟨∇f_m(x), y − x⟩ + (L_{f_m}/2) ‖y − x‖² ≤ 0
Good surprise: x_k is a descent sequence for val. Theorem: assume the Mangasarian-Fromovitz qualification condition. If x_k is bounded, it converges to a KKT point of the original problem.
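A sketch of the moving-balls iteration on a toy instance (our construction: f(x) = (1/2)‖x − a‖² with one ball constraint f_1(x) = ‖x‖² − 1 ≤ 0); each subproblem is a convex QCQP, solved here with a generic modeling tool (cvxpy) purely for illustration — any QCQP solver would do.

```python
import numpy as np
import cvxpy as cp

# Toy data: minimize f(x) = 0.5*||x - a||^2  s.t.  f1(x) = ||x||^2 - 1 <= 0
a = np.array([2.0, 1.0])
f  = lambda x: 0.5 * np.sum((x - a) ** 2)
g  = lambda x: x - a                      # grad f,  Lipschitz constant Lf  = 1
f1 = lambda x: np.sum(x ** 2) - 1.0
g1 = lambda x: 2.0 * x                    # grad f1, Lipschitz constant Lf1 = 2
Lf, Lf1 = 1.0, 2.0

x = np.array([0.1, 0.0])                  # feasible starting point
for k in range(20):
    y = cp.Variable(2)
    # quadratic upper model of f and moving-ball constraint around x_k
    obj = f(x) + g(x) @ (y - x) + (Lf / 2) * cp.sum_squares(y - x)
    con = [f1(x) + g1(x) @ (y - x) + (Lf1 / 2) * cp.sum_squares(y - x) <= 0]
    cp.Problem(cp.Minimize(obj), con).solve()
    x = y.value                           # next iterate = solution of the subproblem
```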
