ALGORITHMS FOR MINIMIZING DIFFERENCES OF CONVEX FUNCTIONS AND APPLICATIONS


1 ALGORITHMS FOR MINIMIZING DIFFERENCES OF CONVEX FUNCTIONS AND APPLICATIONS. Mau Nam Nguyen (joint work with D. Giles and R. B. Rector), Fariborz Maseeh Department of Mathematics and Statistics, Portland State University. SIAM Conference on Imaging Sciences, Albuquerque, New Mexico, May 2016.

2 Outline. 1. Minimizing Differences of Convex Functions - The DCA. 2. Nesterov's Smoothing Technique via Convex Analysis. 3. The DCA and Nesterov's Smoothing Technique for Weighted Fermat-Torricelli Problems. 4. The DCA and Nesterov's Smoothing Technique for Multifacility Location.

3 Differences of Convex Functions. Consider the problem: minimize f(x) := g(x) − h(x), x ∈ R^n, where g: R^n → R and h: R^n → R are convex. We call g − h a DC decomposition of f. For instance, g(x) = x^4 and h(x) = 3x^2 + x give f = g − h.

4 Examples of DC Programming. Fermat-Torricelli: f(x) = Σ_{i=1}^m c_i ‖x − a_i‖. Clustering: f(x_1, ..., x_k) = Σ_{i=1}^m min_{l=1,...,k} ‖x_l − a_i‖². Multifacility location: f(x_1, ..., x_k) = Σ_{i=1}^m min_{l=1,...,k} ‖x_l − a_i‖.

5 Subgradients and Fenchel Conjugates of Convex Functions. Definition. Let f: R^n → (−∞, ∞] be a convex function and let x̄ ∈ dom(f) := {x ∈ R^n : f(x) < ∞}. A subgradient of f at x̄ is any v ∈ R^n such that ⟨v, x − x̄⟩ ≤ f(x) − f(x̄) for all x ∈ R^n. The subdifferential ∂f(x̄) of f at x̄ is the set of all subgradients of f at x̄. Definition. Let f: R^n → (−∞, ∞] be a function. The Fenchel conjugate of f is defined by f*(x) := sup{⟨x, u⟩ − f(u) : u ∈ R^n}, x ∈ R^n.
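
For instance (a standard one-dimensional example, not from the slides): for f(x) = |x| on R, the subdifferential is ∂f(x̄) = {sign(x̄)} when x̄ ≠ 0 and ∂f(0) = [−1, 1], and the Fenchel conjugate is f*(v) = 0 if |v| ≤ 1 and f*(v) = +∞ otherwise, i.e. the indicator function of [−1, 1].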

6 DC Algorithm - The DCA. Consider the problem: minimize f(x) := g(x) − h(x), x ∈ R^n, where g: R^n → R and h: R^n → R are convex. The DCA.¹ INPUT: x_1 ∈ R^n, N ∈ ℕ. for k = 1, ..., N do: Find y_k ∈ ∂h(x_k). Find x_{k+1} ∈ ∂g*(y_k). end for. OUTPUT: x_{N+1}. ¹ P.D. Tao, L.T.H. An, A d.c. optimization algorithm for solving the trust-region subproblem, SIAM J. Optim. 8 (1998).

7 DC Algorithm - The DCA. Theorem. Let g, h: R^n → (−∞, ∞] be proper lower semicontinuous convex functions. Then v ∈ ∂g*(y) if and only if v ∈ argmin{g(x) − ⟨y, x⟩ : x ∈ R^n}. Moreover, w ∈ ∂h(x) if and only if w ∈ argmin{h*(y) − ⟨y, x⟩ : y ∈ R^n}.

8 DC Algorithm - The DCA. The DCA. INPUT: x_1 ∈ R^n, N ∈ ℕ. for k = 1, ..., N do: Find y_k ∈ ∂h(x_k), or find y_k approximately by solving: minimize ψ_k(y) := h*(y) − ⟨x_k, y⟩, y ∈ R^n. Find x_{k+1} ∈ ∂g*(y_k), or find x_{k+1} approximately by solving: minimize φ_k(x) := g(x) − ⟨x, y_k⟩, x ∈ R^n. end for. OUTPUT: x_{N+1}.
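
A minimal Python sketch of this iteration, assuming the user supplies a routine subgrad_h(x) returning some element of ∂h(x) and a routine argmin_phi(y) that (exactly or approximately) minimizes g(x) − ⟨x, y⟩; both names are placeholders, not from the slides:

def dca(x, subgrad_h, argmin_phi, N):
    # Generic DCA loop: y_k in dh(x_k), then x_{k+1} minimizes g(x) - <x, y_k>.
    for _ in range(N):
        y = subgrad_h(x)      # y_k in the subdifferential of h at x_k
        x = argmin_phi(y)     # x_{k+1} in the subdifferential of g* at y_k
    return x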

9 An Example of the DCA. Minimize f(x) = x^4 − 3x^2 − x, x ∈ R. Then f(x) = g(x) − h(x), where g(x) = x^4 and h(x) = 3x^2 + x. We have ∂h(x) = {6x + 1} and ∂g*(y) = argmin{x^4 − yx : x ∈ R} = {(y/4)^{1/3}}, so the DCA iteration is x_{k+1} = ((6x_k + 1)/4)^{1/3}.
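
A quick numerical check of this recursion (a sketch; the limit can be compared with the stationary points of f, i.e. the roots of 4x^3 − 6x − 1 = 0):

def dca_example(x0=0.0, N=50):
    # DCA iteration x_{k+1} = ((6 x_k + 1)/4)^(1/3) for f(x) = x^4 - 3x^2 - x.
    x = x0
    for _ in range(N):
        t = (6 * x + 1) / 4
        x = t ** (1 / 3) if t >= 0 else -((-t) ** (1 / 3))  # real cube root
    return x

print(dca_example())  # converges to a stationary point of f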

10 DC Algorithm - The DCA. Definition. A function h: R^n → (−∞, ∞] is called γ-convex (γ ≥ 0) if the function defined by k(x) := h(x) − (γ/2)‖x‖², x ∈ R^n, is convex. If there exists γ > 0 such that h is γ-convex, then h is called strongly convex with parameter γ. Theorem. Consider the sequence {x_k} generated by the DCA. Suppose that g is γ_1-convex and h is γ_2-convex. Then f(x_k) − f(x_{k+1}) ≥ ((γ_1 + γ_2)/2) ‖x_{k+1} − x_k‖² for all k ∈ ℕ.

11 DC Algorithm - The DCA. Definition. We say that an element x̄ ∈ R^n is a stationary point of the function f = g − h if ∂g(x̄) ∩ ∂h(x̄) ≠ ∅. In the case where g and h are differentiable, x̄ is a stationary point of f if and only if ∇f(x̄) = ∇g(x̄) − ∇h(x̄) = 0. Theorem. Consider the sequence {x_k} generated by the DCA. Then {f(x_k)} is a decreasing sequence. Suppose further that f is bounded from below and that g is γ_1-convex and h is γ_2-convex with γ_1 + γ_2 > 0. If {x_k} is bounded, then every subsequential limit of {x_k} is a stationary point of f.

12 The DCA for Clustering. Problem formulation²: Let a_i, i = 1, ..., m, be target points in R^n. Minimize f(x_1, ..., x_k) := Σ_{i=1}^m min{‖x_l − a_i‖² : l = 1, ..., k} over x_l ∈ R^n, l = 1, ..., k. ² L.T.H. An, M.T. Belghiti, P.D. Tao, A new efficient algorithm based on DC programming and DCA for clustering, J. Glob. Optim., 27 (2007).

13 K-Means Clustering. Let x_1, x_2, ..., x_m be the data points and let c_1, ..., c_k denote the centers. Randomly select k cluster centers. Assign each data point to the nearest center. Replace each center by the average of the data points assigned to it. Repeat the assignment and averaging steps until the centroids no longer move. Although k-means clustering is effective in many situations, it also has some disadvantages: the k-means algorithm does not necessarily find the optimal solution, and it is sensitive to the initially selected cluster centers.
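
A minimal NumPy sketch of these steps (an illustration, not the code used for the experiments):

import numpy as np

def kmeans(X, k, iters=100, seed=0):
    # X: (m, n) array of data points; returns a (k, n) array of centers.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # replace each center by the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers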

14 DCA for Clustering and K-Means.³ Both DCA1 and DCA2 are better than K-means: the objective values given by DCA1 and DCA2 are much smaller than those computed by K-means. DCA2 is the best among the three algorithms: it provides the best solution in the shortest time. DCA2 is very fast and can therefore handle large-scale problems. f(x_1, ..., x_k) = Σ_{i=1}^m min_{l=1,...,k} ‖x_l − a_i‖. This is a nonsmooth nonconvex program for which efficient solution algorithms are rare, especially in the large-scale setting. ³ L.T.H. An, L.H. Minh, P.D. Tao, New and efficient DCA based algorithms for minimum sum-of-squares clustering, Pattern Recognition, 47 (2014).

15 Nesterov's Smoothing Technique via Convex Analysis. Consider the function f_0(x) := max{⟨Ax, u⟩ − φ(u) : u ∈ Q}, where A is an m × n matrix, Q is a nonempty closed bounded convex subset of R^m, and φ: R^m → R is a convex function. Define ‖A‖ := sup{‖Ax‖ : ‖x‖ ≤ 1}. For µ > 0, define f_µ(x) := max{⟨Ax, u⟩ − φ(u) − (µ/2)‖u − u_0‖² : u ∈ Q}, where u_0 ∈ Q. Then f_µ is a C¹ function whose gradient is Lipschitz continuous with constant ℓ = ‖A‖²/µ, and ∇f_µ(x) = Aᵀ u_µ(x), where u_µ(x) ∈ Q is the element at which the maximum defining f_µ(x) is attained.⁴ ⁴ Nesterov: Smooth minimization of non-smooth functions. Math. Program., Ser. A 103 (2005).
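
As a simple illustration (one-dimensional, with assumed data not taken from the slides): take n = m = 1, A = 1, φ = 0, Q = [−1, 1], and u_0 = 0. Then f_0(x) = max{xu : u ∈ [−1, 1]} = |x|, and f_µ(x) = max{xu − (µ/2)u² : u ∈ [−1, 1]} equals x²/(2µ) when |x| ≤ µ and |x| − µ/2 when |x| > µ, i.e. the Huber function, whose derivative is Lipschitz continuous with constant 1/µ = ‖A‖²/µ.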

16 Nesterov's Smoothing Technique via Convex Analysis. f*(x) = sup{⟨x, u⟩ − f(u) : u ∈ R^n}. Theorem.ᵃ If f is strongly convex with parameter µ, then f* has a Lipschitz continuous gradient with constant 1/µ. Moreover, ∇f*(x) = u(x), where u(x) is the unique element of R^n at which the supremum defining f*(x) is attained. ᵃ J. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms I & II, Springer, New York, 1993.

17 Nesterov's Smoothing Technique via Convex Analysis. Theorem. Let A be an m × n matrix. Suppose that ϕ: R^m → R is a strongly convex function with parameter µ > 0. Then the function f: R^n → R defined by f(x) := max{⟨Ax, u⟩ − ϕ(u) : u ∈ Q} is differentiable with ∇f(x) = Aᵀ v(x), where v(x) is the unique element at which the maximum defining f(x) is attained. The gradient ∇f is Lipschitz continuous with constant ℓ = ‖A‖²/µ.

18 Nesterov's Smoothing Technique via Convex Analysis. We have f(x) = max{⟨Ax, u⟩ − [ϕ(u) + δ(u; Q)] : u ∈ R^m} = g*(Ax), where g(u) := ϕ(u) + δ(u; Q). By the chain rule, ∇f(x) = Aᵀ ∇g*(Ax) = Aᵀ u(Ax) = Aᵀ v(x). We also have ‖∇f(x_1) − ∇f(x_2)‖ = ‖Aᵀ u(Ax_1) − Aᵀ u(Ax_2)‖ ≤ ‖Aᵀ‖ ‖u(Ax_1) − u(Ax_2)‖ ≤ ‖A‖ (1/µ) ‖Ax_1 − Ax_2‖ ≤ (‖A‖²/µ) ‖x_1 − x_2‖.

19 The Minkowski Gauge. Let F be a nonempty closed bounded convex set in R^n that contains the origin in its interior. Define the Minkowski gauge associated with F by ρ_F(x) := inf{t > 0 : x ∈ tF}. Note that if F is the closed unit ball in R^n, then ρ_F(x) = ‖x‖. Given a nonempty bounded set K, the support function associated with K is given by σ_K(x) := sup{⟨x, y⟩ : y ∈ K}. It follows from the definition of the Minkowski gauge that ρ_F(x) = σ_{F°}(x), where F° := {y ∈ R^n : ⟨x, y⟩ ≤ 1 for all x ∈ F} is the polar of F.
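
For example (a quick sanity check, not on the slides): if F = rIB is the Euclidean ball of radius r > 0, then ρ_F(x) = ‖x‖/r, the polar is F° = (1/r)IB, and indeed σ_{F°}(x) = sup{⟨x, y⟩ : ‖y‖ ≤ 1/r} = ‖x‖/r = ρ_F(x).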

20 Weighted Fermat-Torricelli Problem. Let a_i ∈ R^n for i = 1, ..., m and let c_i ≠ 0, i = 1, ..., m, be real numbers. In the remainder of this section, we study the following generalized version of the Fermat-Torricelli problem: minimize f(x) := Σ_{i=1}^m c_i ρ_F(x − a_i), x ∈ R^n. The function f has the following obvious DC decomposition: f(x) = Σ_{c_i > 0} c_i ρ_F(x − a_i) − Σ_{c_i < 0} (−c_i) ρ_F(x − a_i). Let I := {i : c_i > 0} and J := {i : c_i < 0}, with α_i = c_i for i ∈ I and β_j = −c_j for j ∈ J. Then f(x) = Σ_{i∈I} α_i ρ_F(x − a_i) − Σ_{j∈J} β_j ρ_F(x − a_j).


22 Weighted Fermat-Torricelli Problem. Theorem. Let γ_1 := sup{r > 0 : B(0; r) ⊆ F} and γ_2 := inf{r > 0 : F ⊆ B(0; r)}. Suppose that γ_1 Σ_{i∈I} α_i > γ_2 Σ_{j∈J} β_j. Then the function f and its approximation f_µ have absolute minima.

23 Smoothing the Minkowski Gauge. Given any a ∈ R^n and µ > 0, a Nesterov smoothing approximation of ϕ(x) := ρ_F(x − a) has the representation ϕ_µ(x) = (1/(2µ)) ‖x − a‖² − (µ/2) [d((x − a)/µ; F°)]². Moreover, ∇ϕ_µ(x) = P((x − a)/µ; F°) and ϕ_µ(x) ≤ ϕ(x) ≤ ϕ_µ(x) + (µ/2) ‖F°‖², where ‖F°‖ := sup{‖u‖ : u ∈ F°}. Here d(·; F°) and P(·; F°) denote the Euclidean distance to and projection onto F°.
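
A small NumPy sketch of this smoothing in the special case F = IB (so F° = IB as well), where the projection onto F° is simple normalization; the function names are placeholders:

import numpy as np

def proj_ball(z):
    # Euclidean projection of z onto the closed unit ball.
    nz = np.linalg.norm(z)
    return z if nz <= 1 else z / nz

def smoothed_norm(x, a, mu):
    # phi_mu(x) = (1/(2 mu)) ||x - a||^2 - (mu/2) d((x - a)/mu; B)^2  (Huber-like).
    z = (x - a) / mu
    d = np.linalg.norm(z - proj_ball(z))
    return np.dot(x - a, x - a) / (2 * mu) - (mu / 2) * d ** 2

def smoothed_norm_grad(x, a, mu):
    # grad phi_mu(x) = P((x - a)/mu; B).
    return proj_ball((x - a) / mu)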

24 Smoothing the Minkowski Gauge. Theorem. Given any µ > 0, an approximation of the function f is the following DC function: f_µ(x) := g_µ(x) − h_µ(x), x ∈ R^n, where g_µ(x) := Σ_{i∈I} (α_i/(2µ)) ‖x − a_i‖² and h_µ(x) := Σ_{i∈I} (µα_i/2) [d((x − a_i)/µ; F°)]² + Σ_{j∈J} β_j ρ_F(x − a_j). Moreover, f_µ(x) ≤ f(x) ≤ f_µ(x) + (µ ‖F°‖²/2) Σ_{i∈I} α_i for all x ∈ R^n.

25 The DCA for Weighted Fermat-Torricelli Problems. Theorem. Given any µ > 0, an approximation of the function f is the following DC function: f_µ(x) := g_µ(x) − h_µ(x), x ∈ R^n, where g_µ(x) := Σ_{i∈I} (α_i/(2µ)) ‖x − a_i‖² and h_µ(x) := Σ_{i∈I} (µα_i/2) [d((x − a_i)/µ; F°)]² + Σ_{j∈J} β_j ρ_F(x − a_j). Moreover, f_µ(x) ≤ f(x) ≤ f_µ(x) + (µ ‖F°‖²/2) Σ_{i∈I} α_i for all x ∈ R^n.

26 The DCA for Weighted Fermat-Torricelli Problems. Algorithm 3. INPUT: µ > 0, x_1 ∈ R^n, N ∈ ℕ, F, a_1, ..., a_m ∈ R^n, c_1, ..., c_m ∈ R. for k = 1, ..., N do: Find y_k = u_k + v_k, where u_k := Σ_{i∈I} α_i [ (x_k − a_i)/µ − P((x_k − a_i)/µ; F°) ] and v_k ∈ Σ_{j∈J} β_j ∂ρ_F(x_k − a_j). Find x_{k+1} = (y_k + Σ_{i∈I} α_i a_i/µ) / (Σ_{i∈I} α_i/µ). end for. OUTPUT: x_{N+1}.
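
A sketch of one way to implement this update when F is the Euclidean unit ball (so F° = IB, ρ_F = ‖·‖, and ∂ρ_F(z) contains z/‖z‖ for z ≠ 0 and 0 for z = 0); the data and function names below are placeholders, and at least one weight is assumed positive:

import numpy as np

def proj_ball(z):
    nz = np.linalg.norm(z)
    return z if nz <= 1 else z / nz

def subgrad_norm(z):
    nz = np.linalg.norm(z)
    return z / nz if nz > 0 else np.zeros_like(z)

def dca_fermat_torricelli(a, c, mu=0.1, N=200, x=None):
    # a: (m, n) array of points; c: (m,) weights (positive and/or negative).
    a, c = np.asarray(a, float), np.asarray(c, float)
    I, J = c > 0, c < 0
    alpha, beta = c[I], -c[J]
    x = a.mean(axis=0) if x is None else np.asarray(x, float)
    for _ in range(N):
        u = sum(al * ((x - ai) / mu - proj_ball((x - ai) / mu))
                for al, ai in zip(alpha, a[I]))
        v = sum(be * subgrad_norm(x - aj) for be, aj in zip(beta, a[J]))
        y = u + v
        x = (y + (alpha[:, None] * a[I]).sum(axis=0) / mu) / (alpha.sum() / mu)
    return x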

27 Multifacility Location. We now consider the multifacility location problem: given m points a_1, ..., a_m ∈ R^n, minimize f(x_1, ..., x_k) = Σ_{i=1}^m min_{l=1,...,k} ρ_F(x_l − a_i), where F is a nonempty closed bounded convex set containing the origin and ρ_F(x) = inf{t > 0 : x ∈ tF} is the Minkowski gauge. When F is the closed unit ball IB, the problem becomes: minimize f(x_1, ..., x_k) = Σ_{i=1}^m min_{l=1,...,k} ‖x_l − a_i‖. It can be shown that a globally optimal solution exists.

28 DC Decomposition. f(x_1, ..., x_k) = Σ_{i=1}^m min_{l=1,...,k} ‖x_l − a_i‖. We will utilize the fact that min_{l=1,...,k} ‖x_l − a_i‖ = Σ_{l=1}^k ‖x_l − a_i‖ − max_{r=1,...,k} Σ_{l=1, l≠r}^k ‖x_l − a_i‖, so that f(x_1, ..., x_k) = Σ_{i=1}^m Σ_{l=1}^k ‖x_l − a_i‖ − Σ_{i=1}^m max_{r=1,...,k} Σ_{l=1, l≠r}^k ‖x_l − a_i‖.
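
As a quick check of the identity with k = 3 nonnegative numbers, say (5, 2, 7): the minimum is 2, and (5 + 2 + 7) − max{5 + 7, 2 + 7, 5 + 2} = 14 − 12 = 2; the maximal excluded sum is obtained by leaving out the smallest term.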

29 DC Decomposition. We obtain the µ-smoothing approximation f_µ = g_µ − h_µ, where g_µ(x_1, ..., x_k) = (1/(2µ)) Σ_{i=1}^m Σ_{l=1}^k ‖x_l − a_i‖² and h_µ(x_1, ..., x_k) = (µ/2) Σ_{i=1}^m Σ_{l=1}^k [d((x_l − a_i)/µ; IB)]² + Σ_{i=1}^m max_{r=1,...,k} Σ_{l=1, l≠r}^k ( (1/(2µ)) ‖x_l − a_i‖² − (µ/2) [d((x_l − a_i)/µ; IB)]² ). To implement the DCA, we need ∂g_µ* and ∂h_µ...

30 ∂g_µ*. Using the Frobenius norm on the space of matrices, we express g_µ as G_µ(X) = (m/(2µ)) ‖X‖² − (1/µ) ⟨X, B⟩ + (k/(2µ)) ‖A‖², with the Frobenius inner product ⟨U, V⟩ := Σ_{l=1}^k Σ_{j=1}^n u_{lj} v_{lj}, where X is the k × n matrix with rows x_1, ..., x_k, A is the m × n matrix with rows a_1, ..., a_m, and B is the k × n matrix each of whose rows is the sum a_1 + ... + a_m. Then ∇G_µ(X) = (m/µ) X − (1/µ) B, ∇G_µ*(Y) = (1/m)(B + µY), and X ∈ ∂G*(Y) if and only if Y ∈ ∂G(X).
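
In NumPy form, assuming X is k × n, A is m × n, and B repeats the column sum of A in every row (a sketch with placeholder names):

import numpy as np

def make_B(A, k):
    # B: k x n matrix, every row equal to a_1 + ... + a_m.
    return np.tile(A.sum(axis=0), (k, 1))

def grad_G(X, B, m, mu):
    # grad G_mu(X) = (m/mu) X - (1/mu) B.
    return (m * X - B) / mu

def grad_G_conj(Y, B, m, mu):
    # grad G_mu*(Y) = (1/m)(B + mu Y): the solution X of grad G_mu(X) = Y.
    return (B + mu * Y) / m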

31 ∂h_µ. h_µ(x_1, ..., x_k) = (µ/2) Σ_{i=1}^m Σ_{l=1}^k [d((x_l − a_i)/µ; IB)]² + Σ_{i=1}^m max_{r=1,...,k} Σ_{l=1, l≠r}^k ( (1/(2µ)) ‖x_l − a_i‖² − (µ/2) [d((x_l − a_i)/µ; IB)]² ).

32 ∂h_µ. Write h_µ = H_1 + H_2, where H_1 is the first (smooth) double sum and H_2 is the sum of maxima. Then ∇H_1(X) = Σ_{i=1}^m M_i, where M_i is the k × n matrix whose l-th row is (x_l − a_i)/µ − P((x_l − a_i)/µ; IB). For each i = 1, ..., m, there is some index R_i such that the sum excluding R_i is maximal. If we call this sum F_{R_i}, then the corresponding matrix F_{R_i} is the k × n matrix whose l-th row is P((x_l − a_i)/µ; IB) for l ≠ R_i and whose R_i-th row is 0. Then V ∈ ∂H_2(X) if V = Σ_{i=1}^m F_{R_i}.

33 Multifacility Location Algorithm. INPUT: X_1 ∈ dom g, N ∈ ℕ. for k = 1, ..., N do: Compute Y_k = ∇H_1(X_k) + V_k. Compute X_{k+1} = (1/m)(B + µY_k). end for. OUTPUT: X_{N+1}.
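
A compact NumPy sketch of this loop, under the same setup as above with F the Euclidean unit ball; the helper names, initialization, and parameter values are placeholders, not from the slides:

import numpy as np

def proj_rows_to_ball(Z):
    # Project each row of Z onto the closed Euclidean unit ball.
    norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1.0)
    return Z / norms

def dca_multifacility(A, k, mu=0.1, N=200, seed=0):
    # A: (m, n) array of data points; returns X: (k, n) array of centers.
    m = len(A)
    rng = np.random.default_rng(seed)
    X = A[rng.choice(m, size=k, replace=False)].astype(float)
    B = np.tile(A.sum(axis=0), (k, 1))            # every row equals a_1 + ... + a_m
    for _ in range(N):
        Y = np.zeros_like(X)
        for a in A:
            Z = (X - a) / mu
            P = proj_rows_to_ball(Z)
            Y += Z - P                            # row l: (x_l - a)/mu - P((x_l - a)/mu; IB)
            # index R_i whose exclusion maximizes the inner sum = argmin of the terms
            terms = (np.sum((X - a) ** 2, axis=1) / (2 * mu)
                     - (mu / 2) * np.sum((Z - P) ** 2, axis=1))
            Fr = P.copy()
            Fr[int(np.argmin(terms))] = 0.0
            Y += Fr                               # contribution to a subgradient of H_2
        X = (B + mu * Y) / m                      # X_{k+1} = grad G_mu*(Y_k)
    return X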

34 Clustering

35 Clustering

36 Clustering

37 Results (figures): objective value f(x_k) versus iteration for the weighted Fermat-Torricelli problem in R^2, the Eight Rings example in R^2, and multifacility location in R^2, in R^10, and with target sets in R^2.

38 References
[1] An, Belghiti, Tao: A new efficient algorithm based on DC programming and DCA for clustering. J. Glob. Optim. 27 (2007).
[2] Nam, Rector, Giles: Minimizing differences of convex functions and applications to facility location and clustering. arXiv:1511.07595 (2015).
[3] Nesterov: Smooth minimization of non-smooth functions. Math. Program., Ser. A 103 (2005).
[4] Rockafellar: Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
