On the acceleration of the double smoothing technique for unconstrained convex optimization problems
Radu Ioan Boţ    Christopher Hendrich

October 10, 2012

Abstract. In this article we investigate the possibilities of accelerating the double smoothing technique when solving unconstrained nondifferentiable convex optimization problems. This approach relies on the regularization, in two steps, of the Fenchel dual problem associated to the problem to be solved into an optimization problem having a differentiable, strongly convex objective function with Lipschitz continuous gradient. The doubly regularized dual problem is then solved via a fast gradient method. The aim of this paper is to show how the properties of the functions in the objective of the primal problem influence the implementation of the double smoothing approach and its rate of convergence. The theoretical results are applied to linear inverse problems by making use of different regularization functionals.

Keywords. Fenchel duality, regularization, fast gradient method, image processing

AMS subject classification. 90C25, 90C46, 47A52

1 Introduction

In this paper we develop an efficient algorithm based on the double smoothing approach for solving unconstrained nondifferentiable optimization problems of the type

    (P)   $\inf_{x \in \mathcal{H}} \{f(x) + g(Ax)\},$   (1)

where $\mathcal{H}$ is a real Hilbert space, $f : \mathcal{H} \to \overline{\mathbb{R}}$ and $g : \mathbb{R}^m \to \overline{\mathbb{R}}$ are proper, convex and lower semicontinuous functions and $A : \mathcal{H} \to \mathbb{R}^m$ is a linear continuous operator fulfilling the feasibility condition $A(\operatorname{dom} f) \cap \operatorname{dom} g \neq \emptyset$. The double smoothing technique for solving this class of optimization problems (see [8] for a version of it in fully finite-dimensional spaces) amounts to efficiently solving the corresponding Fenchel dual problem and then recovering, via an approximately optimal solution of the latter, an approximately optimal solution of the primal.
This technique, which represents a generalization of the approach developed in [10] for a special class of convex constrained optimization problems, makes use of the structure of the Fenchel dual and relies on the regularization of the latter, in two steps, into an optimization problem having a differentiable strongly convex objective function with Lipschitz continuous gradient. The regularized dual is then solved by a fast gradient method which gives rise to a sequence of dual variables that solve the non-regularized dual problem after $O\!\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations, whenever $f$ and $g$ have bounded effective domains. In addition, the norm of the gradient of the regularized dual objective decreases with the same rate of convergence, a fact which is crucial in view of reconstructing an approximately optimal solution to (P) after $O\!\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations (see [8]).

The first aim of this paper is to show that, whenever $g$ is a strongly convex function, one can obtain the same convergence rate even without imposing boundedness of its effective domain. Further we show that if, additionally, $f$ is strongly convex or $g$ is everywhere differentiable with a Lipschitz continuous gradient, then the convergence rate becomes $O\!\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$, while, if these supplementary assumptions are simultaneously fulfilled, then a convergence rate of $O\!\left(\ln\frac{1}{\varepsilon}\right)$ can be guaranteed.

The structure of the paper is the following. The forthcoming section is dedicated to some preliminaries on convex analysis and Fenchel duality. In Section 3 we employ the smoothing technique introduced in [12-14] in order to make the objective of the Fenchel dual problem of (P) strongly convex and differentiable with Lipschitz continuous gradient. In Section 4 we solve the regularized dual problem via an efficient fast gradient method, show how an approximately optimal primal solution can be recovered from a dual iterate and investigate the convergence properties of the sequence of primal optimal solutions.

Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, radu.bot@mathematik.tu-chemnitz.de. Research partially supported by DFG (German Research Foundation), project BO 2516/4-1.
Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, christopher.hendrich@mathematik.tu-chemnitz.de. Research supported by a Graduate Fellowship of the Free State Saxony, Germany.
Section 5 addresses the question of how the properties of the functions in the objective of (P) influence the implementation of the double smoothing approach and improve its rate of convergence. Finally, in Section 6, we consider an application of the presented approach in image deblurring and solve to this end a linear inverse problem by using two different regularization functionals.

2 Preliminaries on convex analysis and Fenchel duality

Throughout this paper, $\langle \cdot, \cdot \rangle$ and $\|\cdot\| = \sqrt{\langle \cdot, \cdot \rangle}$ denote the inner product and, respectively, the norm of the real Hilbert space $\mathcal{H}$, which is allowed to be infinite dimensional. The closure of a set $C \subseteq \mathcal{H}$ is denoted by $\operatorname{cl}(C)$, while its indicator function is the function $\delta_C : \mathcal{H} \to \overline{\mathbb{R}} := \mathbb{R} \cup \{\pm\infty\}$ defined by $\delta_C(x) = 0$ for $x \in C$ and $\delta_C(x) = +\infty$, otherwise. For a function $f : \mathcal{H} \to \overline{\mathbb{R}}$ we denote by $\operatorname{dom} f := \{x \in \mathcal{H} : f(x) < +\infty\}$ its effective domain. We call $f$ proper if $\operatorname{dom} f \neq \emptyset$ and $f(x) > -\infty$ for all $x \in \mathcal{H}$. The conjugate function of $f$ is $f^* : \mathcal{H} \to \overline{\mathbb{R}}$, $f^*(p) = \sup\{\langle p, x \rangle - f(x) : x \in \mathcal{H}\}$ for all $p \in \mathcal{H}$. The biconjugate function of $f$ is $f^{**} : \mathcal{H} \to \overline{\mathbb{R}}$, $f^{**}(x) = \sup\{\langle x, p \rangle - f^*(p) : p \in \mathcal{H}\}$ and, when $f$ is proper, convex and lower semicontinuous, then, according to the Fenchel-Moreau Theorem, one has $f = f^{**}$. The (convex) subdifferential of the function $f$ at $x \in \mathcal{H}$ is the set $\partial f(x) = \{p \in \mathcal{H} : f(y) - f(x) \geq \langle p, y - x \rangle \ \forall y \in \mathcal{H}\}$, if $f(x) \in \mathbb{R}$, and is taken to be the empty set, otherwise. Further, we consider the space $\mathbb{R}^m$ endowed with the Euclidean inner product and norm, for which we use the same notations as for the real Hilbert space $\mathcal{H}$, since no confusion can arise. By $1_m$ we denote the vector in $\mathbb{R}^m$ with all entries equal to 1. For
a subset $C$ of $\mathbb{R}^m$ we denote by $\operatorname{ri}(C)$ its relative interior, i.e. the interior of the set $C$ relative to its affine hull. For a linear continuous operator $A : \mathcal{H} \to \mathbb{R}^m$ the operator $A^* : \mathbb{R}^m \to \mathcal{H}$, defined by $\langle A^* y, x \rangle = \langle y, Ax \rangle$ for all $x \in \mathcal{H}$ and all $y \in \mathbb{R}^m$, is its so-called adjoint operator. By $\operatorname{id} : \mathbb{R}^m \to \mathbb{R}^m$, $\operatorname{id}(x) = x$ for all $x \in \mathbb{R}^m$, we denote the identity mapping on $\mathbb{R}^m$. For a nonempty, convex and closed set $C \subseteq \mathcal{H}$ we consider the projection operator $P_C : \mathcal{H} \to C$ defined as $x \mapsto \operatorname{arg\,min}_{z \in C} \|x - z\|$. Having two functions $f, g : \mathcal{H} \to \overline{\mathbb{R}}$, their infimal convolution is defined by $f \square g : \mathcal{H} \to \overline{\mathbb{R}}$, $(f \square g)(x) = \inf_{y \in \mathcal{H}} \{f(y) + g(x - y)\}$ for all $x \in \mathcal{H}$. The Moreau envelope of parameter $\gamma > 0$ of the function $f : \mathcal{H} \to \overline{\mathbb{R}}$ is ${}^{\gamma} f : \mathcal{H} \to \overline{\mathbb{R}}$, defined as the infimal convolution

    ${}^{\gamma} f(x) := \left( f \square \tfrac{1}{2\gamma}\|\cdot\|^2 \right)(x) = \inf_{y \in \mathcal{H}} \left\{ f(y) + \tfrac{1}{2\gamma} \|x - y\|^2 \right\} \quad \forall x \in \mathcal{H}.$

The proximal point of $f$ at $x \in \mathcal{H}$ denotes the unique minimizer of the optimization problem

    $\inf_{y \in \mathcal{H}} \left\{ f(y) + \tfrac{1}{2} \|x - y\|^2 \right\}.$

For $\beta > 0$ we say that the function $f : \mathcal{H} \to \overline{\mathbb{R}}$ is $\beta$-strongly convex if for all $x, y \in \mathcal{H}$ and all $\lambda \in (0, 1)$ it holds

    $f(\lambda x + (1 - \lambda) y) \leq \lambda f(x) + (1 - \lambda) f(y) - \tfrac{\beta}{2} \lambda (1 - \lambda) \|x - y\|^2.$

Notice that this is equivalent to saying that $x \mapsto f(x) - \tfrac{\beta}{2}\|x\|^2$ is convex. For the optimization problem (P) we consider the following standing assumptions: $f : \mathcal{H} \to \overline{\mathbb{R}}$ is a proper, convex and lower semicontinuous function with a bounded effective domain, $g : \mathbb{R}^m \to \overline{\mathbb{R}}$ is a proper, $\mu$-strongly convex ($\mu > 0$) and lower semicontinuous function and $A : \mathcal{H} \to \mathbb{R}^m$ is a linear and continuous operator fulfilling $A(\operatorname{dom} f) \cap \operatorname{dom} g \neq \emptyset$.

Remark 1. Different to the investigations made in [8] in a fully finite-dimensional setting, we strengthen here the convexity assumptions on $g$ (there $g$ was asked to be only proper, convex and lower semicontinuous), but allow in counterpart $\operatorname{dom} g$ to be unbounded. The gain of weakening this assumption is emphasized by the applications considered in Section 6.

The Fenchel dual problem to (P) (see, for instance, [5, 6]) reads

    (D)   $\sup_{p \in \mathbb{R}^m} \{-f^*(A^* p) - g^*(-p)\}.$   (2)
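To make the Moreau envelope and the proximal point introduced above concrete, the following short Python sketch (an illustration added for this presentation, not part of the original text; the choice f = |·| and all numerical values are assumptions) evaluates both by brute force on a grid and compares them with the known closed forms: the proximal point of |·| is the soft-thresholding operator and its Moreau envelope is the Huber function.

```python
import numpy as np

def moreau(f, x, gamma, grid):
    """Brute-force Moreau envelope and proximal point of f at x over a grid."""
    vals = f(grid) + (grid - x) ** 2 / (2 * gamma)
    i = np.argmin(vals)
    return grid[i], vals[i]          # (proximal point, envelope value)

f = np.abs                            # f(y) = |y| (assumed test function)
grid = np.linspace(-5.0, 5.0, 100001)
gamma, x = 0.5, 2.0
prox_pt, env = moreau(f, x, gamma, grid)

# closed forms: soft-thresholding and the Huber function
soft = np.sign(x) * max(abs(x) - gamma, 0.0)
huber = abs(x) - gamma / 2 if abs(x) > gamma else x ** 2 / (2 * gamma)
```

The grid search is only for verification; in the algorithmic parts of the paper the proximal points are available in closed form.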
We denote the optimal objective values of the optimization problems (P) and (D) by $v(P)$ and $v(D)$, respectively. The conjugate functions of $f$ and $g$ can be written as

    $f^*(q) = \sup_{x \in \operatorname{dom} f} \{\langle q, x \rangle - f(x)\} = -\inf_{x \in \operatorname{dom} f} \{-\langle q, x \rangle + f(x)\} \quad \forall q \in \mathcal{H}$
and

    $g^*(p) = \sup_{x \in \operatorname{dom} g} \{\langle p, x \rangle - g(x)\} = -\inf_{x \in \operatorname{dom} g} \{-\langle p, x \rangle + g(x)\} \quad \forall p \in \mathbb{R}^m,$

respectively. According to [1, Theorem 11.9] and [4, Lemma 2.33], the optimization problems arising in the formulation of both $f^*(q)$ for all $q \in \mathcal{H}$ and $g^*(p)$ for all $p \in \mathbb{R}^m$ are solvable, a fact which implies that $\operatorname{dom} f^* = \mathcal{H}$ and $\operatorname{dom} g^* = \mathbb{R}^m$, respectively. By writing the dual problem (D) equivalently as the infimum optimization problem

    $-\inf_{p \in \mathbb{R}^m} \{f^*(A^* p) + g^*(-p)\},$

one can easily see that the Fenchel dual problem of the latter is

    $\sup_{x \in \mathcal{H}} \{-f^{**}(x) - g^{**}(Ax)\},$

which, by the Fenchel-Moreau Theorem, is nothing else than

    $\sup_{x \in \mathcal{H}} \{-f(x) - g(Ax)\}.$

In order to guarantee strong duality for this primal-dual pair it is sufficient to ensure that (see, for instance, [5, Theorem 2.1]) $0 \in \operatorname{ri}(A^*(\operatorname{dom} g^*) + \operatorname{dom} f^*)$. As $f^*$ has full domain, this regularity condition is automatically fulfilled, which means that $v(D) = v(P)$ and the primal optimization problem (P) has an optimal solution. Due to the fact that $f$ and $g$ are proper and $A(\operatorname{dom} f) \cap \operatorname{dom} g \neq \emptyset$, this further implies $v(D) = v(P) \in \mathbb{R}$. Later we will assume that the dual problem (D) has an optimal solution, too, and that an upper bound of its norm is known.

Denote by $\theta : \mathbb{R}^m \to \overline{\mathbb{R}}$, $\theta(p) = f^*(A^* p) + g^*(-p)$, the objective function of (D) in minimization form. Hence, the dual can be equivalently written as

    (D)   $\inf_{p \in \mathbb{R}^m} \theta(p).$   (3)

The assumptions made on $g$ yield that $p \mapsto g^*(-p)$ is differentiable and has a Lipschitz continuous gradient (see Subsection 3.1 for details). However, since in general one cannot guarantee the smoothness of $p \mapsto f^*(A^* p)$, the dual problem (D) is a nondifferentiable convex optimization problem. Our goal is to solve this problem efficiently and to obtain from here an optimal solution to (P). As in [8], we are overcoming the unsatisfactory complexity of subgradient schemes, i.e. $O\!\left(\frac{1}{\varepsilon^2}\right)$, by making use of smoothing techniques introduced in [12-14]. More precisely, we first regularize $f^*(A^* p)$ by a quadratic term in order to obtain a smooth approximation of $p \mapsto f^*(A^* p)$.
Then we apply a second regularization to the new dual objective and minimize the regularized problem via an appropriate fast gradient scheme (see [8]). This will allow us to solve both optimization problems (D) and (P) approximately in $O\!\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations. More than that, we will show that this rate of convergence can be improved when strengthening the assumptions imposed on $f$ and $g$.
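The duality relations above can be checked numerically on a toy one-dimensional instance. In the Python sketch below (an added illustration; the smooth choices f(x) = x²/2, g(y) = (y − b)²/2 and the scalar operator A = a are assumptions made here so that both conjugates have simple closed forms) the values of (P) and (D) are computed by brute force and agree, in accordance with the strong duality discussed above.

```python
import numpy as np

a, b = 2.0, 3.0                      # assumed scalar operator A = a and datum b
f = lambda x: 0.5 * x ** 2
g = lambda y: 0.5 * (y - b) ** 2
fstar = lambda q: 0.5 * q ** 2                # conjugate of x^2/2
gstar = lambda p: 0.5 * p ** 2 + b * p        # conjugate of (y - b)^2/2

xs = np.linspace(-10.0, 10.0, 200001)
v_primal = np.min(f(xs) + g(a * xs))          # value of (P)
v_dual = np.max(-fstar(a * xs) - gstar(-xs))  # value of (D), cf. (2)
```

For this instance both values equal b²/(2(a² + 1)), which the grid search recovers up to discretization error.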
3 The double smoothing approach

3.1 First smoothing

For a real number $\rho > 0$ the function $p \mapsto f^*(A^* p) = \sup_{x \in \mathcal{H}} \{\langle A^* p, x \rangle - f(x)\}$ can be approximated by

    $f^*_{\rho}(A^* p) = \sup_{x \in \mathcal{H}} \left\{ \langle A^* p, x \rangle - f(x) - \tfrac{\rho}{2} \|x\|^2 \right\}.$   (4)

For each $p \in \mathbb{R}^m$ the maximization problem which occurs in the formulation of $f^*_{\rho}(A^* p)$ has a unique solution (see, for instance, [4, Lemma 2.33]), a fact which implies that $f^*_{\rho}(A^* p) \in \mathbb{R}$. For all $p \in \mathbb{R}^m$ one can express the above regularization of the conjugate by means of the Moreau envelope of $f$ as follows:

    $f^*_{\rho}(A^* p) = \sup_{x \in \mathcal{H}} \left\{ \langle A^* p, x \rangle - f(x) - \tfrac{\rho}{2} \|x\|^2 \right\}
    = -\inf_{x \in \mathcal{H}} \left\{ f(x) + \tfrac{\rho}{2} \left\| \tfrac{A^* p}{\rho} - x \right\|^2 \right\} + \tfrac{\|A^* p\|^2}{2\rho}
    = -\left({}^{\frac{1}{\rho}} f\right)\!\left(\tfrac{A^* p}{\rho}\right) + \tfrac{\|A^* p\|^2}{2\rho}.$

Consequently, one can transfer the differentiability properties of the Moreau envelope (see [1, Proposition 12.29]) to $p \mapsto (f^*_{\rho} \circ A^*)(p)$. For all $p \in \mathbb{R}^m$ we have

    $\nabla (f^*_{\rho} \circ A^*)(p) = -A\left( \tfrac{A^* p}{\rho} - x_{f,p} \right) + \tfrac{A A^* p}{\rho} = A x_{f,p},$

where $x_{f,p} \in \mathcal{H}$ is the proximal point of $\frac{1}{\rho} f$ at $\frac{A^* p}{\rho}$, namely the unique element in $\mathcal{H}$ fulfilling (see [1, Proposition 12.29])

    $\left({}^{\frac{1}{\rho}} f\right)\!\left(\tfrac{A^* p}{\rho}\right) = f(x_{f,p}) + \tfrac{\rho}{2} \left\| \tfrac{A^* p}{\rho} - x_{f,p} \right\|^2.$

By taking into account the nonexpansiveness of the proximal point mapping (see [1, Proposition 12.27]), for $p, q \in \mathbb{R}^m$ it holds

    $\|\nabla (f^*_{\rho} \circ A^*)(p) - \nabla (f^*_{\rho} \circ A^*)(q)\| = \|A x_{f,p} - A x_{f,q}\| \leq \|A\| \, \|x_{f,p} - x_{f,q}\| \leq \tfrac{\|A\|}{\rho} \|A^* p - A^* q\| \leq \tfrac{\|A\|^2}{\rho} \|p - q\|,$

thus $\frac{\|A\|^2}{\rho}$ is the Lipschitz constant of $\nabla (f^*_{\rho} \circ A^*)$.

Coming now to the function $p \mapsto g^*(-p) = (g^* \circ (-\operatorname{id}))(p)$, let us notice first that, since $g$ is proper, $\mu$-strongly convex and lower semicontinuous, $g^*$ is differentiable and $\nabla g^*$ is Lipschitz continuous with Lipschitz constant $\frac{1}{\mu}$ (cf. [1, Theorem 18.15]). Thus
$(g^* \circ (-\operatorname{id}))$ is Fréchet differentiable, too, and its gradient is Lipschitz continuous with Lipschitz constant $\frac{1}{\mu}$. By denoting $x_{g,p} := \nabla g^*(-p) = -\nabla (g^* \circ (-\operatorname{id}))(p)$, one has that $-p \in \partial g(x_{g,p})$ or, equivalently, $0 \in \partial (\langle p, \cdot \rangle + g)(x_{g,p})$, which means that $x_{g,p}$ is the unique optimal solution (see [4, Lemma 2.33]) of the optimization problem $\inf_{x \in \mathbb{R}^m} \{\langle p, x \rangle + g(x)\}$.

Remark 2. If $f$ is $\beta$-strongly convex ($\beta > 0$), then there is no need to apply the first regularization to $p \mapsto f^*(A^* p)$, as this function is already Fréchet differentiable with a Lipschitz continuous gradient having Lipschitz constant $\frac{\|A\|^2}{\beta}$. Indeed, the $\beta$-strong convexity of $f$ implies that $f^*$ is Fréchet differentiable with Lipschitz continuous gradient having Lipschitz constant $\frac{1}{\beta}$ (see [1, Theorem 18.15]). Hence, for all $p, q \in \mathbb{R}^m$, we have

    $\|\nabla (f^* \circ A^*)(p) - \nabla (f^* \circ A^*)(q)\| = \|A \nabla f^*(A^* p) - A \nabla f^*(A^* q)\| \leq \tfrac{\|A\|}{\beta} \|A^* p - A^* q\| \leq \tfrac{\|A\|^2}{\beta} \|p - q\|.$

Taking $x_{f,p} := \nabla f^*(A^* p)$, one has that $0 \in \partial (f - \langle A^* p, \cdot \rangle)(x_{f,p})$, which means that $x_{f,p}$ is the unique optimal solution (see [4, Lemma 2.33]) of the optimization problem $\inf_{x \in \mathcal{H}} \{f(x) - \langle A^* p, x \rangle\}$.

By denoting $D_f := \sup\left\{ \tfrac{\|x\|^2}{2} : x \in \operatorname{dom} f \right\} \in \mathbb{R}$, we can relate $f^* \circ A^*$ and its smooth approximation $f^*_{\rho} \circ A^*$ as follows.

Proposition 3. For all $p \in \mathbb{R}^m$ it holds

    $f^*_{\rho}(A^* p) \leq f^*(A^* p) \leq f^*_{\rho}(A^* p) + \rho D_f.$

Proof. For $p \in \mathbb{R}^m$ one has

    $f^*_{\rho}(A^* p) = \langle A^* p, x_{f,p} \rangle - f(x_{f,p}) - \tfrac{\rho}{2} \|x_{f,p}\|^2 \leq \langle A^* p, x_{f,p} \rangle - f(x_{f,p}) \leq f^*(A^* p)$

and

    $f^*(A^* p) \leq \sup_{x \in \operatorname{dom} f} \left\{ \langle A^* p, x \rangle - f(x) - \tfrac{\rho}{2} \|x\|^2 \right\} + \sup_{x \in \operatorname{dom} f} \left\{ \tfrac{\rho}{2} \|x\|^2 \right\} = f^*_{\rho}(A^* p) + \rho D_f.$
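The sandwich inequality of Proposition 3 can be verified numerically. The following Python sketch (an added illustration; the choice f = δ_{[−1,1]}, for which dom f is bounded and D_f = 1/2, and A = id are assumptions) compares f*(q) with its smooth approximation f*_ρ(q) on a grid of dual points.

```python
import numpy as np

rho = 0.4                                   # assumed smoothing parameter
xs = np.linspace(-1.0, 1.0, 20001)          # dom f = [-1, 1], f = indicator of it

def fstar(q):                               # f*(q) = sup_{x in dom f} <q, x>
    return np.max(q * xs)

def fstar_rho(q):                           # smoothed conjugate, cf. (4)
    return np.max(q * xs - 0.5 * rho * xs ** 2)

D_f = 0.5 * np.max(xs ** 2)                 # D_f = sup { ||x||^2 / 2 : x in dom f }

qs = np.linspace(-3.0, 3.0, 61)
lower = all(fstar_rho(q) <= fstar(q) + 1e-9 for q in qs)
upper = all(fstar(q) <= fstar_rho(q) + rho * D_f + 1e-9 for q in qs)
```

Both bounds of Proposition 3 hold at every tested dual point, and the gap shrinks linearly with ρ, which is exactly what the choice of ρ as a function of ε exploits later.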
For $\rho > 0$ let $\theta_{\rho} : \mathbb{R}^m \to \mathbb{R}$ be defined by $\theta_{\rho}(p) = f^*_{\rho}(A^* p) + g^*(-p)$. The function $\theta_{\rho}$ is differentiable with Lipschitz continuous gradient

    $\nabla \theta_{\rho}(p) = \nabla (f^*_{\rho} \circ A^*)(p) + \nabla (g^* \circ (-\operatorname{id}))(p) = A x_{f,p} - x_{g,p} \quad \forall p \in \mathbb{R}^m,$

having as Lipschitz constant $L(\rho) := \frac{\|A\|^2}{\rho} + \frac{1}{\mu}$. In consideration of Proposition 3 we get

    $\theta_{\rho}(p) \leq \theta(p) \leq \theta_{\rho}(p) + \rho D_f \quad \forall p \in \mathbb{R}^m.$   (5)

In order to reconstruct an approximately optimal solution to the primal optimization problem (P) it is not sufficient to ensure the convergence of $\theta(\cdot)$ to $v(D)$, but we also need good convergence properties for the decrease of $\|\nabla \theta_{\rho}(\cdot)\|$ (cf. [8, 10]).

3.2 Second smoothing

In the following, a second regularization is applied to $\theta_{\rho}$, as done in [8, 10], in order to make it strongly convex, a fact which will allow us to use a fast gradient scheme with a good convergence rate for the decrease of $\|\nabla \theta_{\rho}(\cdot)\|$. Therefore, adding the strongly convex function $\frac{\gamma}{2}\|\cdot\|^2$ to $\theta_{\rho}$, for some positive real number $\gamma$, gives rise to the following regularization of the objective function:

    $\theta_{\rho,\gamma} : \mathbb{R}^m \to \mathbb{R}, \quad \theta_{\rho,\gamma}(p) := \theta_{\rho}(p) + \tfrac{\gamma}{2}\|p\|^2 = f^*_{\rho}(A^* p) + g^*(-p) + \tfrac{\gamma}{2}\|p\|^2,$

which is obviously $\gamma$-strongly convex. We further deal with the optimization problem

    $\inf_{p \in \mathbb{R}^m} \theta_{\rho,\gamma}(p).$   (6)

By taking into account [4, Lemma 2.33], the optimization problem (6) has a unique optimal solution, while the function $\theta_{\rho,\gamma}$ is differentiable and for all $p \in \mathbb{R}^m$ it holds

    $\nabla \theta_{\rho,\gamma}(p) = \nabla \left( \theta_{\rho}(\cdot) + \tfrac{\gamma}{2}\|\cdot\|^2 \right)(p) = A x_{f,p} - x_{g,p} + \gamma p.$

This gradient is Lipschitz continuous with constant $L(\rho,\gamma) := \frac{\|A\|^2}{\rho} + \frac{1}{\mu} + \gamma$.

Remark 4. If $\theta_{\rho}$ is $\gamma$-strongly convex, then there is no need to apply the second regularization, as this function is already endowed with the properties of $\theta_{\rho,\gamma}$.

4 Solving the doubly regularized dual problem

4.1 A fast gradient method

In the forthcoming sections we denote by $p^*_{DS}$ the unique optimal solution of the optimization problem (6) and by $\theta^*_{\rho,\gamma} := \theta_{\rho,\gamma}(p^*_{DS})$ its optimal objective value. Further, we denote by $p^* \in \mathbb{R}^m$ an optimal solution to the dual optimization problem (D) and we assume that the upper bound

    $\|p^*\| \leq R$   (7)
is available for some nonzero $R \in \mathbb{R}_+$. Furthermore, as in [8, 10], we make use of the following fast gradient method (see [11, Algorithm 2.2.11])

    (FGM)
    Initialization: set $w_0 = p_0 := 0 \in \mathbb{R}^m$.
    For $k \geq 0$:
        $p_{k+1} := w_k - \tfrac{1}{L(\rho,\gamma)} \nabla \theta_{\rho,\gamma}(w_k),$
        $w_{k+1} := p_{k+1} + \tfrac{\sqrt{L(\rho,\gamma)} - \sqrt{\gamma}}{\sqrt{L(\rho,\gamma)} + \sqrt{\gamma}} (p_{k+1} - p_k)$

for minimizing the optimization problem (6), which has a strongly convex and differentiable objective function with a Lipschitz continuous gradient. By taking into account [11, Theorem 2.2.3] we obtain a sequence $(p_k)_{k \geq 0} \subseteq \mathbb{R}^m$ satisfying

    $\theta_{\rho,\gamma}(p_k) - \theta_{\rho,\gamma}(p^*_{DS}) \leq \left( \theta_{\rho,\gamma}(0) - \theta_{\rho,\gamma}(p^*_{DS}) + \tfrac{\gamma}{2} \|p^*_{DS}\|^2 \right) e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}}$   (8)
    $= \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0.$   (9)

Since $p^*_{DS}$ solves (6), we have $\nabla \theta_{\rho,\gamma}(p^*_{DS}) = 0$ and, therefore (see [11, Theorem 2.1.5]),

    $\|\nabla \theta_{\rho,\gamma}(p_k)\|^2 \overset{(9)}{\leq} 2 L(\rho,\gamma) \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0.$   (10)

Due to the $\gamma$-strong convexity of $\theta_{\rho,\gamma}$, [11, Theorem 2.1.8] states

    $\|p_k - p^*_{DS}\|^2 \leq \tfrac{2}{\gamma} \left( \theta_{\rho,\gamma}(p_k) - \theta_{\rho,\gamma}(p^*_{DS}) \right) \overset{(9)}{\leq} \tfrac{2}{\gamma} \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0.$   (11)

We first prove that the rates of convergence for the decrease of $\theta(p_k) - \theta(p^*)$ and $\|\nabla \theta_{\rho}(p_k)\|$ coincide, being equal to $O\!\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$, and that they can be improved when $f$ and/or $g$ fulfill additional assumptions. We also show how $\varepsilon$-optimal solutions to the primal problem (P) can be recovered from the sequence of dual variables $(p_k)_{k \geq 0}$. To this aim we will act in the lines of the considerations from [8, 10] and this is why we refer the reader to these papers for detailed argumentation in this sense.

4.2 Convergence of $\theta(p_k)$ to $\theta(p^*)$

Using again [11, Theorem 2.1.8] we obtain

    $\tfrac{\gamma}{2} \|p^*_{DS}\|^2 \leq \theta_{\rho,\gamma}(0) - \theta_{\rho,\gamma}(p^*_{DS}) = \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) - \tfrac{\gamma}{2} \|p^*_{DS}\|^2,$

which implies that

    $\|p^*_{DS}\|^2 \leq \tfrac{1}{\gamma} \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right).$   (12)
In order to estimate the function values, we notice that formula (9) states

    $\theta_{\rho}(p_k) - \theta_{\rho}(p^*_{DS}) \leq \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \tfrac{\gamma}{2} \left( \|p^*_{DS}\|^2 - \|p_k\|^2 \right) \quad \forall k \geq 0.$

The last term in the inequality above can be estimated via

    $\tfrac{\gamma}{2} \left( \|p^*_{DS}\|^2 - \|p_k\|^2 \right) \leq \tfrac{\gamma}{2} \|p^*_{DS} - p_k\| \left( 2\|p^*_{DS}\| + \|p_k - p^*_{DS}\| \right)
    \overset{(11),(12)}{\leq} \sqrt{2} \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}}.$

Thus we obtain for all $k \geq 0$

    $\theta_{\rho}(p_k) - \theta_{\rho}(p^*_{DS}) \leq \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) \left( 2 e^{-k \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \sqrt{2}\, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \right) \leq \left( 2 + \sqrt{2} \right) \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}}.$   (13)

Further, we have $\theta_{\rho}(0) \overset{(5)}{\leq} \theta(0)$, $\theta_{\rho}(p^*_{DS}) \overset{(5)}{\geq} \theta(p^*_{DS}) - \rho D_f \geq \theta(p^*) - \rho D_f$ and, from here,

    $\theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \leq \theta(0) - \theta(p^*) + \rho D_f.$   (14)

Hence, using (5),

    $\theta_{\rho}(p^*_{DS}) \leq \theta_{\rho,\gamma}(p^*_{DS}) \leq \theta_{\rho,\gamma}(p^*) = \theta_{\rho}(p^*) + \tfrac{\gamma}{2}\|p^*\|^2 \leq \theta(p^*) + \tfrac{\gamma}{2}\|p^*\|^2,$

and from here it follows for all $k \geq 0$

    $\theta(p_k) - \theta(p^*) \leq \rho D_f + \tfrac{\gamma}{2}\|p^*\|^2 + \theta_{\rho}(p_k) - \theta_{\rho}(p^*_{DS})
    \overset{(13)}{\leq} \rho D_f + \tfrac{\gamma}{2} R^2 + \left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*) + \rho D_f \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}}.$   (15)

Next we fix $\varepsilon > 0$. In order to get $\theta(p_k) - \theta(p^*) \leq \varepsilon$ after a certain number of iterations $k$, we force all three terms in (15) to be less than or equal to $\frac{\varepsilon}{3}$. To this end we choose first

    $\rho := \rho(\varepsilon) = \frac{\varepsilon}{3 D_f} \quad \text{and} \quad \gamma := \gamma(\varepsilon) = \frac{2\varepsilon}{3 R^2}.$   (16)

With these new parameters we can simplify (15) to

    $\theta(p_k) - \theta(p^*) \leq \frac{2\varepsilon}{3} + \left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*) + \frac{\varepsilon}{3} \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0,$
thus the second term in the expression on the right-hand side of the above estimate determines the number of iterations needed to obtain $\varepsilon$-accuracy for the dual objective function $\theta$. Indeed, we have

    $\left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*) + \frac{\varepsilon}{3} \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \leq \frac{\varepsilon}{3}
    \iff k \geq 2 \sqrt{\frac{L(\rho,\gamma)}{\gamma}} \ln \left( \frac{3 \left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*) + \frac{\varepsilon}{3} \right)}{\varepsilon} \right).$   (17)

Noticing that

    $L(\rho,\gamma) = \frac{\|A\|^2}{\rho} + \frac{1}{\mu} + \gamma = \frac{3 \|A\|^2 D_f}{\varepsilon} + \frac{1}{\mu} + \frac{2\varepsilon}{3 R^2},$

in order to obtain an approximately optimal solution to (D) we need $k = O\!\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations.

4.3 Convergence of $\|\nabla \theta_{\rho}(p_k)\|$ to 0

Guaranteeing $\varepsilon$-optimality for the objective value of the dual is not sufficient for solving the primal optimization problem with a good convergence rate, as we need at least the same convergence rate for the decrease of $\|\nabla \theta_{\rho}(p_k)\| = \|A x_{f,p_k} - x_{g,p_k}\|$ to 0 in order to ensure primal feasibility. Within this section we show that this is actually the case (see also [10]). It holds

    $\|\nabla \theta_{\rho}(p_k)\| = \|\nabla \theta_{\rho,\gamma}(p_k) - \gamma p_k\| \leq \|\nabla \theta_{\rho,\gamma}(p_k)\| + \gamma \|p_k\| \quad \forall k \geq 0.$

The first term on the right-hand side above can be estimated using (10), namely

    $\|\nabla \theta_{\rho,\gamma}(p_k)\| \leq \sqrt{2 L(\rho,\gamma) \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0,$

while for the second term we use

    $\|p_k\| = \|p_k - p^*_{DS} + p^*_{DS}\| \leq \|p_k - p^*_{DS}\| + \|p^*_{DS}\| \overset{(11)}{\leq} \sqrt{\tfrac{2}{\gamma} \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \|p^*_{DS}\| \quad \forall k \geq 0.$   (18)

Moreover, we notice that

    $\theta(p^*) + \tfrac{\gamma}{2}\|p^*\|^2 \geq \theta_{\rho,\gamma}(p^*) \geq \theta_{\rho,\gamma}(p^*_{DS}) \overset{(5)}{\geq} \theta(p^*_{DS}) - \rho D_f + \tfrac{\gamma}{2}\|p^*_{DS}\|^2 \geq \theta(p^*) - \rho D_f + \tfrac{\gamma}{2}\|p^*_{DS}\|^2,$

which implies that $\|p^*_{DS}\|^2 \leq \|p^*\|^2 + \frac{2 \rho D_f}{\gamma}$. Hence,

    $\|p^*_{DS}\| \leq \sqrt{\|p^*\|^2 + \tfrac{2 \rho D_f}{\gamma}} \leq \|p^*\| + \sqrt{\tfrac{2 \rho D_f}{\gamma}} \overset{(16)}{=} \|p^*\| + R \overset{(7)}{\leq} 2R,$   (19)
which, combined with the previous estimates, (14) and (16), provides for all $k \geq 0$

    $\|\nabla \theta_{\rho}(p_k)\| \leq \left( \sqrt{2 L(\rho,\gamma)} + \sqrt{2\gamma} \right) \sqrt{\theta_{\rho}(0) - \theta_{\rho}(p^*_{DS})} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + 2 \gamma R
    \leq \left( \sqrt{2 L(\rho,\gamma)} + \sqrt{2\gamma} \right) \sqrt{\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \frac{4\varepsilon}{3R}.$   (20)

For $\varepsilon > 0$ fixed, the first term in (20) decreases with the iteration counter $k$ and, in order to ensure $\|\nabla \theta_{\rho}(p_k)\| \leq \frac{2\varepsilon}{R}$, we need

    $k \geq 2 \sqrt{\frac{L(\rho,\gamma)}{\gamma}} \ln \left( \frac{3 R \left( \sqrt{2 L(\rho,\gamma)} + \sqrt{2\gamma} \right) \sqrt{\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}}}{2\varepsilon} \right)$   (21)

iteration steps. Summarizing, by taking into account (16), we can ensure

    $\theta(p_k) - \theta(p^*) \leq \varepsilon \quad \text{and} \quad \|\nabla \theta_{\rho}(p_k)\| \leq \frac{2\varepsilon}{R} \quad \text{in} \quad k = O\!\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right) \text{ iterations.}$   (22)

4.4 Constructing an approximate primal solution

Since our main focus is to solve the primal optimization problem (P), we prove in the following that the sequences $(x_{f,p_k})_{k \geq 0} \subseteq \operatorname{dom} f$ and $(x_{g,p_k})_{k \geq 0} \subseteq \operatorname{dom} g$ constructed in Subsection 3.1 contain all the information one needs to recover approximately optimal solutions to (P) (see [8, 10] for a similar approach). Let $k := k(\varepsilon)$ be the smallest index satisfying (17) and (21), thus guaranteeing (22). Since

    $\theta_{\rho}(p_k) - \theta(p^*) \overset{(5)}{\leq} \theta(p_k) - \theta(p^*) \overset{(22)}{\leq} \varepsilon$

and

    $\theta_{\rho}(p_k) - \theta(p^*) \overset{(5)}{\geq} \theta(p_k) - \rho D_f - \theta(p^*) \overset{(16)}{=} \theta(p_k) - \theta(p^*) - \frac{\varepsilon}{3} \geq -\frac{\varepsilon}{3},$

it holds $|\theta_{\rho}(p_k) - \theta(p^*)| \leq \varepsilon$ for all $k \geq k(\varepsilon)$. Further, we have

    $\theta_{\rho}(p_k) = f^*_{\rho}(A^* p_k) + g^*(-p_k) = \langle p_k, A x_{f,p_k} \rangle - f(x_{f,p_k}) - \tfrac{\rho}{2}\|x_{f,p_k}\|^2 - \langle p_k, x_{g,p_k} \rangle - g(x_{g,p_k})$

and from here (notice that $v(D) = -\theta(p^*)$)

    $f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) = \langle p_k, \nabla \theta_{\rho}(p_k) \rangle + \left( \theta(p^*) - \theta_{\rho}(p_k) \right) - \tfrac{\rho}{2}\|x_{f,p_k}\|^2 \quad \forall k \geq 0.$

It follows

    $f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) \leq \|p_k\| \, \|\nabla \theta_{\rho}(p_k)\| + \left( \theta(p^*) - \theta_{\rho}(p_k) \right) + \tfrac{\rho}{2}\|x_{f,p_k}\|^2
    \leq \|p_k\| \, \|\nabla \theta_{\rho}(p_k)\| + 2 \rho D_f \overset{(16),(22)}{\leq} \frac{2\varepsilon}{R} \|p_k\| + \frac{2\varepsilon}{3} \quad \forall k \geq k(\varepsilon).$
In the light of (18) and (19), it holds

    $\|p_k\| \overset{(14),(16)}{\leq} R \sqrt{\frac{3}{\varepsilon}} \sqrt{\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + 2R.$

Finally, we obtain

    $f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) \leq 2 \sqrt{3\varepsilon} \sqrt{\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \left( 4 + \frac{2}{3} \right) \varepsilon,$

which, due to the choice of $k = k(\varepsilon)$, fulfills

    $f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) \leq 5\varepsilon.$   (23)

By taking into account weak duality, i.e. $v(D) \leq v(P)$, we conclude that $x_{f,p_k} \in \operatorname{dom} f$ and $x_{g,p_k} \in \operatorname{dom} g$ can be seen as approximately optimal solutions to (P).

4.5 Existence of an optimal solution

We close this section with a convergence analysis of the two sequences of primal approximately optimal solutions when $\varepsilon$ converges to zero. To this end let $(\varepsilon_n)_{n \geq 0} \subseteq \mathbb{R}_+$ be a decreasing sequence of positive scalars with $\lim_{n \to \infty} \varepsilon_n = 0$. For each $n \geq 0$, the double smoothing algorithm (FGM) with smoothing parameters $\rho_n$ and $\gamma_n$ given by (16) requires at least $k = k(\varepsilon_n)$ iterations to fulfill (17) and (21). For $n \geq 0$ we denote $x_n := x_{f,p_{k(\varepsilon_n)}} \in \operatorname{dom} f$ and $y_n := x_{g,p_{k(\varepsilon_n)}} \in \operatorname{dom} g$. Due to the boundedness of $\operatorname{dom} f$, its closure $\operatorname{cl}(\operatorname{dom} f)$ is weakly compact (see [1, Theorem 3.3]) and there exists a subsequence $(x_{n_l})_{l \geq 0}$ and $\bar{x} \in \mathcal{H}$ such that $x_{n_l}$ converges weakly to $\bar{x} \in \operatorname{cl}(\operatorname{dom} f)$ as $l \to +\infty$. Since $A : \mathcal{H} \to \mathbb{R}^m$ is linear and continuous, the sequence $A x_{n_l}$ converges to $A\bar{x}$ as $l \to +\infty$. In view of relation (22) we get

    $\|A x_{n_l} - y_{n_l}\| \leq \frac{2 \varepsilon_{n_l}}{R} \to 0 \quad \text{as } l \to +\infty.$   (24)

This means that the sequence $(y_{n_l})_{l \geq 0} \subseteq \operatorname{dom} g$ is bounded, hence there exists a subsequence of it (still denoted by $(y_{n_l})_{l \geq 0}$) and an element $\bar{y} \in \operatorname{cl}(\operatorname{dom} g)$ such that $y_{n_l} \to \bar{y}$ as $l \to +\infty$. Taking $l \to +\infty$ in (24), it follows $A\bar{x} = \bar{y}$. Furthermore, due to (23), we have

    $f(x_{n_l}) + g(y_{n_l}) \leq v(D) + 5 \varepsilon_{n_l} \quad \forall l \geq 0$

and, by using the lower semicontinuity of $f$ and $g$ and [1, Theorem 9.1], we obtain

    $f(\bar{x}) + g(A\bar{x}) \leq \liminf_{l \to \infty} \left\{ f(x_{n_l}) + g(y_{n_l}) \right\} \leq \lim_{l \to \infty} \left\{ v(D) + 5 \varepsilon_{n_l} \right\} = v(D) \leq v(P).$

Since $v(P) \in \mathbb{R}$, we have $\bar{x} \in \operatorname{dom} f$ and $A\bar{x} \in \operatorname{dom} g$, which yields that $\bar{x}$ is an optimal solution to (P).
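To close this section, the fast gradient scheme (FGM) from Subsection 4.1 can be sketched in a few lines of code. In the Python illustration below (added here; the strongly convex quadratic and its constants are assumptions standing in for θ_{ρ,γ}, γ and L(ρ,γ)) the two updates of (FGM) are applied verbatim and the iterates converge linearly to the minimizer, in line with estimate (8).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
Q = M.T @ M + np.eye(20)          # Hessian of the model objective: positive definite
c = rng.standard_normal(20)
# theta(p) = 0.5 p^T Q p - c^T p plays the role of theta_{rho,gamma}
mu = np.linalg.eigvalsh(Q).min()  # strong convexity parameter (role of gamma)
L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient

grad = lambda p: Q @ p - c
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))

p = w = np.zeros(20)              # initialization w_0 = p_0 = 0
for _ in range(500):
    p_next = w - grad(w) / L                  # gradient step
    w = p_next + beta * (p_next - p)          # momentum step of (FGM)
    p = p_next

p_star = np.linalg.solve(Q, c)    # exact minimizer for comparison
err = np.linalg.norm(p - p_star)
```

In the setting of (6) the gradient evaluation would instead be ∇θ_{ρ,γ}(w_k) = A x_{f,w_k} − x_{g,w_k} + γ w_k, with the two proximal points computed as in Subsection 3.1.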
5 Improving the convergence rates

In this section we investigate how additional assumptions on the functions $f$ and/or $g$ influence the implementation of the double smoothing approach and its rate of convergence, and eventually allow a weakening of the standing assumptions made in the paper. In all three situations addressed here the construction of the approximate primal solutions and the proof of the existence of an optimal solution to the primal problem can be made in analogy to Subsections 4.4 and 4.5, respectively. It is worth noticing that the additional assumptions furnish an improvement of the complexity, which is motivated by the fact that constants of strong convexity and/or Lipschitz constants of the gradient are already available; thus they do not need to be constructed in the smoothing process as functions of the level of accuracy $\varepsilon$.

5.1 The case $f$ is strongly convex

In addition to the standing assumptions we assume first that the function $f : \mathcal{H} \to \overline{\mathbb{R}}$ is $\beta$-strongly convex ($\beta > 0$), but remove the boundedness assumption on its domain. In this situation the first smoothing, as done in Subsection 3.1, can be omitted and the fast gradient method (FGM) can be applied to the minimization problem

    $\inf_{p \in \mathbb{R}^m} \theta_{\gamma}(p),$   (25)

where $\theta_{\gamma} : \mathbb{R}^m \to \mathbb{R}$, $\theta_{\gamma}(p) := f^*(A^* p) + g^*(-p) + \tfrac{\gamma}{2}\|p\|^2$, with $\gamma > 0$, is a $\gamma$-strongly convex and differentiable function with Lipschitz continuous gradient. The Lipschitz constant of $\nabla \theta_{\gamma}$ is $L(\gamma) := \frac{\|A\|^2}{\beta} + \frac{1}{\mu} + \gamma$. This gives rise to a sequence $(p_k)_{k \geq 0}$ satisfying

    $\theta_{\gamma}(p_k) - \theta_{\gamma}(p^*_{DS}) \overset{(8)}{\leq} \left( \theta_{\gamma}(0) - \theta_{\gamma}(p^*_{DS}) + \tfrac{\gamma}{2}\|p^*_{DS}\|^2 \right) e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}}$   (26)
    $= \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0,$   (27)

where $p^*_{DS}$ denotes the unique optimal solution of the problem (25). Thus, from (27) it follows

    $\|\nabla \theta_{\gamma}(p_k)\|^2 \leq 2 L(\gamma) \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}}$   (28)

and

    $\|p_k - p^*_{DS}\|^2 \leq \tfrac{2}{\gamma} \left( \theta_{\gamma}(p_k) - \theta_{\gamma}(p^*_{DS}) \right) \leq \tfrac{2}{\gamma} \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0.$   (29)

Additionally, in all iterations $k \geq 0$ we have

    $\|p^*_{DS}\|^2 \leq \tfrac{1}{\gamma} \left( \theta(0) - \theta(p^*_{DS}) \right)$   (30)

and

    $\tfrac{\gamma}{2} \left( \|p^*_{DS}\|^2 - \|p_k\|^2 \right) \leq \tfrac{\gamma}{2} \|p_k - p^*_{DS}\| \left( 2\|p^*_{DS}\| + \|p_k - p^*_{DS}\| \right)
    \overset{(29),(30)}{\leq} \sqrt{2} \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}} + \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}},$
thus

    $\theta(p_k) - \theta(p^*_{DS}) \overset{(27)}{\leq} \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}} + \tfrac{\gamma}{2} \left( \|p^*_{DS}\|^2 - \|p_k\|^2 \right)
    \leq \left( \theta(0) - \theta(p^*_{DS}) \right) \left( 2 e^{-k \sqrt{\frac{\gamma}{L(\gamma)}}} + \sqrt{2}\, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}} \right)
    \leq \left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*_{DS}) \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0.$

We denote by $p^* \in \mathbb{R}^m$ an optimal solution to the dual optimization problem (D) and assume that the upper bound $\|p^*\| \leq R$ is available for some nonzero $R \in \mathbb{R}_+$. Thus, since $\theta(p^*_{DS}) \leq \theta_{\gamma}(p^*_{DS}) \leq \theta_{\gamma}(p^*) = \theta(p^*) + \tfrac{\gamma}{2}\|p^*\|^2$, we obtain for all $k \geq 0$

    $\theta(p_k) - \theta(p^*) \leq \tfrac{\gamma}{2}\|p^*\|^2 + \theta(p_k) - \theta(p^*_{DS}) \leq \tfrac{\gamma}{2} R^2 + \left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*) \right) e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}}.$

Hence, for $\varepsilon > 0$, in order to guarantee $\varepsilon$-accuracy for the dual objective function we can force both terms in the above estimate to be less than or equal to $\frac{\varepsilon}{2}$. Thus, by taking $\gamma := \gamma(\varepsilon) = \frac{\varepsilon}{R^2}$, this time we will need, in contrast to (17),

    $k \geq 2 \sqrt{\frac{L(\gamma)}{\gamma}} \ln \left( \frac{2 \left( 2 + \sqrt{2} \right) \left( \theta(0) - \theta(p^*) \right)}{\varepsilon} \right),$

i.e. $k = O\!\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ iterations. Further, using (28) we have

    $\|\nabla \theta_{\gamma}(p_k)\| \leq \sqrt{2 L(\gamma) \left( \theta(0) - \theta(p^*) \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0.$

On the other hand, using

    $\|p_k\| \leq \|p_k - p^*_{DS}\| + \|p^*_{DS}\| \overset{(29)}{\leq} \sqrt{\tfrac{2}{\gamma} \left( \theta(0) - \theta(p^*) \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}} + \|p^*_{DS}\|$

and the relation $\theta(p^*) + \tfrac{\gamma}{2}\|p^*_{DS}\|^2 \leq \theta_{\gamma}(p^*_{DS}) \leq \theta_{\gamma}(p^*) = \theta(p^*) + \tfrac{\gamma}{2}\|p^*\|^2$, which yields $\|p^*_{DS}\| \leq \|p^*\| \leq R$, we obtain

    $\|\nabla \theta(p_k)\| \leq \|\nabla \theta_{\gamma}(p_k)\| + \gamma \|p_k\|
    \leq \left( \sqrt{2 L(\gamma)} + \sqrt{2\gamma} \right) \sqrt{\theta(0) - \theta(p^*)} \, e^{-\frac{k}{2} \sqrt{\frac{\gamma}{L(\gamma)}}} + \frac{\varepsilon}{R} \quad \forall k \geq 0.$

Therefore, in order to guarantee $\|A x_{f,p_k} - x_{g,p_k}\| = \|\nabla \theta(p_k)\| \leq \frac{2\varepsilon}{R}$, we need $k = O\!\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ iterations, which coincides with the convergence rate for the dual objective values.
5.2 The case $g$ is everywhere differentiable with Lipschitz continuous gradient

Throughout this subsection, in addition to the standing assumptions, we assume that $g : \mathbb{R}^m \to \mathbb{R}$ has full domain and is differentiable with $\frac{1}{\nu}$-Lipschitz continuous gradient, for $\nu > 0$. In this situation the second smoothing, as done in Subsection 3.2, can be omitted and the fast gradient method (FGM) can be applied to the minimization problem

    $\inf_{p \in \mathbb{R}^m} \theta_{\rho}(p),$   (31)

where $\theta_{\rho} : \mathbb{R}^m \to \mathbb{R}$, $\theta_{\rho}(p) := f^*_{\rho}(A^* p) + g^*(-p)$, is $\nu$-strongly convex due to [1, Theorem 18.15] and differentiable with Lipschitz continuous gradient. The Lipschitz constant of $\nabla \theta_{\rho}$ is $L(\rho) := \frac{\|A\|^2}{\rho} + \frac{1}{\mu}$. This gives rise to a sequence $(p_k)_{k \geq 0}$ satisfying

    $\theta_{\rho}(p_k) - \theta_{\rho}(p^*_{DS}) \leq \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) + \tfrac{\nu}{2}\|p^*_{DS}\|^2 \right) e^{-k \sqrt{\frac{\nu}{L(\rho)}}}$   (32)
    $\leq 2 \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\nu}{L(\rho)}}}$   (33)

and

    $\|\nabla \theta_{\rho}(p_k)\|^2 \leq 4 L(\rho) \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right) e^{-k \sqrt{\frac{\nu}{L(\rho)}}} \quad \forall k \geq 0,$   (34)

where $p^*_{DS}$ denotes the unique optimal solution of the problem (31). We denote by $p^* \in \mathbb{R}^m$ the unique optimal solution of the dual optimization problem (D) and would like to notice that in this context it is not necessary to know an upper bound of the norm of the dual optimal solution. Since $\theta_{\rho}(0) \overset{(5)}{\leq} \theta(0)$ and $\theta_{\rho}(p^*_{DS}) \overset{(5)}{\geq} \theta(p^*_{DS}) - \rho D_f \geq \theta(p^*) - \rho D_f$, we obtain

    $\theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \leq \theta(0) - \theta(p^*) + \rho D_f.$   (35)

On the other hand, since $\theta_{\rho}(p_k) - \theta_{\rho}(p^*_{DS}) \overset{(5)}{\geq} \theta(p_k) - \rho D_f - \theta(p^*)$, it follows

    $\theta(p_k) - \theta(p^*) \leq \rho D_f + \theta_{\rho}(p_k) - \theta_{\rho}(p^*_{DS}) \leq \rho D_f + 2 \left( \theta(0) - \theta(p^*) + \rho D_f \right) e^{-k \sqrt{\frac{\nu}{L(\rho)}}} \quad \forall k \geq 0.$

Hence, for $\varepsilon > 0$, in order to guarantee $\varepsilon$-optimality for the dual objective, we force both terms in the above estimate to be less than or equal to $\frac{\varepsilon}{2}$. By taking

    $\rho := \rho(\varepsilon) = \frac{\varepsilon}{2 D_f},$   (36)

we need, in contrast to (17),

    $k \geq \sqrt{\frac{L(\rho)}{\nu}} \ln \left( \frac{4 \left( \theta(0) - \theta(p^*) + \frac{\varepsilon}{2} \right)}{\varepsilon} \right),$
i.e. $k = O\!\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ iterations to obtain $\varepsilon$-accuracy for the dual objective values. From (34) we obtain as well

    $\|\nabla \theta_{\rho}(p_k)\| \leq 2 \sqrt{L(\rho) \left( \theta_{\rho}(0) - \theta_{\rho}(p^*_{DS}) \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\nu}{L(\rho)}}}
    \overset{(35)}{\leq} 2 \sqrt{L(\rho) \left( \theta(0) - \theta(p^*) + \rho D_f \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\nu}{L(\rho)}}}
    \overset{(36)}{=} 2 \sqrt{L(\rho) \left( \theta(0) - \theta(p^*) + \frac{\varepsilon}{2} \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\nu}{L(\rho)}}} \quad \forall k \geq 0.$

Therefore, in order to guarantee $\|A x_{f,p_k} - x_{g,p_k}\| = \|\nabla \theta_{\rho}(p_k)\| \leq \varepsilon$, we need $k = O\!\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ iterations, which is the same convergence rate as for the dual objective values.

5.3 The case $f$ is strongly convex and $g$ is everywhere differentiable with Lipschitz continuous gradient

The third favorable situation which we address is when, in addition to the standing assumptions, the function $f : \mathcal{H} \to \overline{\mathbb{R}}$ is $\beta$-strongly convex ($\beta > 0$), however without assuming anymore that $\operatorname{dom} f$ is bounded, and the function $g : \mathbb{R}^m \to \mathbb{R}$ has full domain and is differentiable with $\frac{1}{\nu}$-Lipschitz continuous gradient ($\nu > 0$). In this case both the first and the second smoothing can be omitted and the fast gradient method (FGM) can be applied to the minimization problem

    $\inf_{p \in \mathbb{R}^m} \theta(p),$   (37)

where $\theta : \mathbb{R}^m \to \mathbb{R}$, $\theta(p) := f^*(A^* p) + g^*(-p)$, is a $\nu$-strongly convex and differentiable function with Lipschitz continuous gradient. The Lipschitz constant of $\nabla \theta$ is $L := \frac{\|A\|^2}{\beta} + \frac{1}{\mu}$. We denote by $p^* \in \mathbb{R}^m$ the unique optimal solution of (D), for which it is not necessary to know an upper bound of its norm. This gives rise to a sequence $(p_k)_{k \geq 0}$ satisfying

    $\theta(p_k) - \theta(p^*) \overset{(8)}{\leq} \left( \theta(0) - \theta(p^*) + \tfrac{\nu}{2}\|p^*\|^2 \right) e^{-k \sqrt{\frac{\nu}{L}}} \leq 2 \left( \theta(0) - \theta(p^*) \right) e^{-k \sqrt{\frac{\nu}{L}}}$

and

    $\|\nabla \theta(p_k)\|^2 \leq 4 L \left( \theta(0) - \theta(p^*) \right) e^{-k \sqrt{\frac{\nu}{L}}} \quad \forall k \geq 0.$

From here, for $\varepsilon > 0$, we have

    $2 \left( \theta(0) - \theta(p^*) \right) e^{-k \sqrt{\frac{\nu}{L}}} \leq \varepsilon \iff k \geq \sqrt{\frac{L}{\nu}} \ln \left( \frac{2 \left( \theta(0) - \theta(p^*) \right)}{\varepsilon} \right),$

while

    $2 \sqrt{L \left( \theta(0) - \theta(p^*) \right)} \, e^{-\frac{k}{2} \sqrt{\frac{\nu}{L}}} \leq \varepsilon \iff k \geq 2 \sqrt{\frac{L}{\nu}} \ln \left( \frac{2 \sqrt{L \left( \theta(0) - \theta(p^*) \right)}}{\varepsilon} \right).$

In conclusion, in order to guarantee $\varepsilon$-accuracy for the dual objective values and for the decrease of $\|\nabla \theta(\cdot)\|$ to 0, we need $O\!\left(\ln\frac{1}{\varepsilon}\right)$ iterations.
6 Two examples in image processing

In this section we solve a linear inverse problem which arises in the field of signal and image processing via the double smoothing algorithm developed in this paper. For a given matrix $A \in \mathbb{R}^{n \times n}$ describing a blur operator and a given vector $b \in \mathbb{R}^n$ representing the blurred and noisy image, the task is to estimate the unknown original image $x \in \mathbb{R}^n$ fulfilling $Ax = b$. To this end we make use of two regularization functionals with different properties.

6.1 An $\ell_1$ regularization problem

We start by solving the $\ell_1$ regularized convex optimization problem

    (P)   $\inf_{x \in S} \left\{ \|Ax - b\|^2 + \lambda \|x\|_1 \right\},$

where $S \subseteq \mathbb{R}^n$ is an $n$-dimensional cube representing the range of the pixels and $\lambda > 0$ the regularization parameter. The problem to be solved can be equivalently written as

    (P)   $\inf_{x \in \mathbb{R}^n} \{f(x) + g(Ax)\},$

for $f : \mathbb{R}^n \to \overline{\mathbb{R}}$, $f(x) = \lambda \|x\|_1 + \delta_S(x)$, and $g : \mathbb{R}^n \to \mathbb{R}$, $g(y) = \|y - b\|^2$. Thus $f$ is proper, convex and lower semicontinuous with bounded domain and $g$ is a 2-strongly convex function with full domain, differentiable everywhere and with Lipschitz continuous gradient having Lipschitz constant 2. This means that we are in the setting of Subsection 5.2. By making use of gradient methods, both the iterative shrinkage-thresholding algorithm (ISTA) (see [9]) and its accelerated variant FISTA (see [2, 3]) solve the optimization problem (P) in $O\!\left(\frac{1}{\varepsilon}\right)$ and $O\!\left(\frac{1}{\sqrt{\varepsilon}}\right)$ iterations, respectively, whereas the convergence rate of our method is $O\!\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$.

Since each pixel furnishes a greyscale value between 0 and 255, a natural choice for the convex set $S$ would be the $n$-dimensional cube $[0, 255]^n \subseteq \mathbb{R}^n$. In order to reduce the Lipschitz constant which appears in the developed approach, we scale the pictures to which we refer within this subsection such that each of their pixels ranges in the interval $\left[0, \frac{1}{10}\right]$. We concretely look at the cameraman test image, which is part of the image processing toolbox in Matlab.
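Before turning to the concrete data, note that all the pieces entering the double smoothing method for this problem are cheap to compute. The Python sketch below (an added illustration; Python stands in for the paper's Matlab, and ρ, λ, the vector q standing for A*p_k and the tiny test "images" are assumed values) verifies by brute force the projected soft-thresholding formula for x_{f,p_k} that is derived later in this subsection, together with the closed form x_{g,p_k} = b − p_k/2 and the ISNR quality measure used to compare the restored images.

```python
import numpy as np

# --- recovery of x_{f,p}: componentwise projected soft-thresholding ---
lam, rho = 1e-2, 0.5                      # assumed values for illustration
q = np.array([0.2, -0.3, 0.011, 0.06])    # stands in for A* p_k

# closed form on S = [0, 1/10]^n: since x >= 0 there, ||x||_1 is linear
closed = np.clip((q - lam) / rho, 0.0, 0.1)

# brute-force check of the separable one-dimensional subproblems
grid = np.linspace(0.0, 0.1, 100001)
brute = np.array([grid[np.argmin(lam * grid - qi * grid + 0.5 * rho * grid ** 2)]
                  for qi in q])

# --- recovery of x_{g,p}: argmin { <p, x> + ||x - b||^2 } = b - p/2 ---
p, b = np.array([0.4, -1.0]), np.array([1.0, 2.0])
x_g = b - p / 2.0

# --- ISNR quality measure used to compare the restored images ---
def isnr(x, b_obs, x_k):
    return 10.0 * np.log10(np.sum((x - b_obs) ** 2) / np.sum((x - x_k) ** 2))

x_true = np.array([1.0, 2.0, 3.0])
val = isnr(x_true, x_true + 0.5, x_true + 0.05)  # estimate 10x closer than b
```

An estimate ten times closer to the original than the observation yields an ISNR of 20 dB, which matches the definition given at the end of this subsection.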
The dimension of the vectorized and scaled cameraman test image is $n = 256 \cdot 256 = 65536$. By making use of the Matlab functions imfilter and fspecial, this image is blurred as follows:

1 H = fspecial('gaussian', 9, 4);           % gaussian blur of size 9 times 9
2                                           % and standard deviation 4
3 B = imfilter(X, H, 'conv', 'symmetric');  % B = observed blurred image
4                                           % X = original image

In row 1 the function fspecial returns a rotationally symmetric Gaussian lowpass filter of size 9 x 9 with standard deviation 4. The entries of H are nonnegative and their sum
adds up to 1. In row 3 the function imfilter convolves the filter H with the image $X \in \mathbb{R}^{256 \times 256}$ and outputs the blurred image $B \in \mathbb{R}^{256 \times 256}$. The boundary option "symmetric" corresponds to reflexive boundary conditions. Thanks to the rotationally symmetric filter H, the linear operator $A \in \mathbb{R}^{n \times n}$ given by the Matlab function imfilter is symmetric, too. By making use of the real spectral decomposition of $A$, it turns out that $\|A\| = 1$. After adding zero-mean white Gaussian noise with standard deviation $10^{-4}$, we obtain the blurred and noisy image $b \in \mathbb{R}^n$ which is shown in Figure 6.1.

Figure 6.1: The cameraman test image (left: original; right: blurred and noisy)

The dual optimization problem in minimization form is

    (D)   $\inf_{p \in \mathbb{R}^n} \{f^*(A^* p) + g^*(-p)\}$

and, due to the fact that $g$ has full domain, strong duality for (P) and (D) holds, i.e. $v(P) = v(D)$, and (D) has an optimal solution (see, for instance, [5, 6]). By taking into consideration (36), the smoothing parameter is taken as

    $\rho := \frac{\varepsilon}{2 D_f}$   (38)

for $D_f = \sup\left\{ \frac{\|x\|^2}{2} : x \in \left[0, \frac{1}{10}\right]^n \right\} = 327.68$, while the accuracy is chosen to be $\varepsilon = 0.3$ and the regularization parameter is set to $\lambda = 2 \cdot 10^{-6}$. We show next that the sequences of approximate primal solutions $(x_{f,p_k})_{k \geq 0}$ and $(x_{g,p_k})_{k \geq 0}$ can be easily calculated. Indeed, for $k \geq 0$ we have

    $x_{f,p_k} = \operatorname{arg\,min}_{x \in [0, \frac{1}{10}]^n} \left\{ \lambda \|x\|_1 - \langle A^* p_k, x \rangle + \tfrac{\rho}{2} \|x\|^2 \right\}
    = \operatorname{arg\,min}_{x \in [0, \frac{1}{10}]^n} \sum_{i=1}^{n} \left[ \lambda x_i - (A^* p_k)_i x_i + \tfrac{\rho}{2} x_i^2 \right]$

and, in order to determine it, we need to solve the one-dimensional convex optimization
problem

inf_{x_i ∈ [0,1/10]} { λx_i - (A*p_k)_i x_i + (ρ/2)x_i² },  for i = 1, ..., n,

which has as unique optimal solution (x_{f,p_k})_i = P_{[0,1/10]}( (1/ρ)((A*p_k)_i - λ) ), hence

x_{f,p_k} = P_{[0,1/10]^n}( (1/ρ)(A*p_k - λ1_n) ).

On the other hand, for all k ≥ 0 we have

x_{g,p_k} = argmin_{x ∈ R^n} { ⟨p_k, x⟩ + g(x) } = argmin_{x ∈ R^n} { ⟨p_k, x⟩ + ‖x - b‖² } = b - (1/2) p_k.

Figure 6.2: Iterations 50 and 100 of ISTA, FISTA and double smoothing (DS) for solving (P)

Figure 6.2 shows the iterations 50 and 100 of ISTA, FISTA and the double smoothing (DS) approach. The objective function values at iteration k are denoted by ISTA_k, FISTA_k and, respectively, DS_k (e.g. DS_k := f(x_{f,p_k}) + g(A x_{f,p_k})). All in all, the visual quality of the restored cameraman image after 100 iterations, when using FISTA or DS, is quite comparable, whereas the image recovered by ISTA is still blurry. However, a valuable tool for measuring the quality of these images is the so-called improvement in
signal-to-noise ratio (ISNR), which is defined as

ISNR(k) = 10 log_{10} ( ‖x - b‖² / ‖x - x_k‖² ),

where x, b and x_k denote the original image, the observed image and the estimated image at iteration k, respectively. Figure 6.3 shows the evolution of the ISNR values when using DS, FISTA and ISTA to solve (P).

Figure 6.3: Improvement in signal-to-noise ratio (ISNR)

6.2 An ℓ2 − ℓ1 regularization problem

The second convex optimization problem we solve is

(P)  inf_{x ∈ S} { ‖Ax - b‖² + λ(‖x‖² + ‖x‖₁) },

where S ⊆ R^n is the n-dimensional cube [0,1]^n representing the pixel range, λ > 0 the regularization parameter and ‖·‖² + ‖·‖₁ the regularization functional, already used in [7]. The problem to be solved can be equivalently written as

(P)  inf_{x ∈ R^n} { f(x) + g(Ax) },

for f : R^n → R ∪ {+∞}, f(x) = λ(‖x‖² + ‖x‖₁) + δ_S(x), and g : R^n → R, g(y) = ‖y - b‖². Thus f is proper, 2λ-strongly convex and lower semicontinuous with bounded domain, and g is a 2-strongly convex function with full domain, differentiable everywhere and with Lipschitz continuous gradient having Lipschitz constant 2. This time we are in the setting of Subsection 5.3, the Lipschitz constant of the gradient of θ : R^n → R, θ(p) = f*(A*p) + g*(-p), being L = 1/(2λ) + 1/2. By applying the double smoothing approach one obtains a rate of convergence of O(ln(1/ε)) for solving (P).

In this example we take a look at the blobs test image shown in Figure 6.4, which is also part of the image processing toolbox in Matlab. The picture undergoes the
Figure 6.4: The 272 × 329 blobs test image (left: original, right: blurred and noisy)

same blur as described in the previous section. Since our pixel range has changed, we now use additive zero-mean white Gaussian noise with standard deviation 10^{-3} and the regularization parameter is changed to λ = 2e-5. We calculate next the sequences of approximate primal solutions (x_{f,p_k})_{k ≥ 0} and (x_{g,p_k})_{k ≥ 0}. Indeed, for k ≥ 0 we have

x_{f,p_k} = argmin_{x ∈ [0,1]^n} { λ‖x‖² + λ‖x‖₁ - ⟨A*p_k, x⟩ }
          = argmin_{x ∈ [0,1]^n} Σ_{i=1}^n [ λx_i² + λx_i - (A*p_k)_i x_i ]
          = P_{[0,1]^n}( (1/(2λ))(A*p_k - λ1_n) )

and

x_{g,p_k} = argmin_{x ∈ R^n} { ⟨p_k, x⟩ + g(x) } = argmin_{x ∈ R^n} { ⟨p_k, x⟩ + ‖x - b‖² } = b - (1/2) p_k.

Figure 6.5 shows the iterations 50 and 100 of ISTA, FISTA and the double smoothing (DS) technique together with the corresponding function values denoted by ISTA_k, FISTA_k and DS_k. As before, the function values of FISTA are slightly lower than those of DS, while ISTA falls far behind both methods, not only from a theoretical point of view, but also as can be detected visually. Figure 6.6 displays the improvement in signal-to-noise ratio for ISTA, FISTA and DS and shows that DS outperforms the other two methods from the point of view of the quality of the reconstruction.

7 Conclusions

In this article we investigate the possibilities of accelerating the double smoothing technique when solving unconstrained nondifferentiable convex optimization problems. This method, which assumes the minimization of the doubly regularized Fenchel dual objective, allows in the most general case to reconstruct an approximately optimal primal solution in O((1/ε) ln(1/ε)) iterations. We show that under appropriate assumptions on the functions involved in the formulation of the problem to be solved this convergence rate can be improved to O((1/√ε) ln(1/ε)), or even to O(ln(1/ε)).
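The closed-form primal updates derived in the experiments above can be checked numerically. The sketch below (a Python/numpy stand-in for the paper's Matlab setting, with hypothetical small data and λ = 0.5 chosen so that some coordinates land strictly inside [0,1]) compares the projection formula for the ℓ2 − ℓ1 problem against a brute-force grid search on each separable one-dimensional subproblem, and verifies the optimality condition for x_{g,p}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
lam = 0.5  # hypothetical regularization parameter (the paper uses 2e-5)
A = rng.standard_normal((n, n))
p = rng.standard_normal(n)
b = rng.standard_normal(n)

q = A.T @ p  # A* p_k

# x_{f,p} = argmin_{x in [0,1]^n} lam*||x||^2 + lam*||x||_1 - <A* p, x>,
# solved coordinatewise: x_i = P_[0,1]( (q_i - lam) / (2*lam) )
x_f = np.clip((q - lam) / (2 * lam), 0.0, 1.0)

# brute-force check of each one-dimensional convex subproblem on a fine grid
grid = np.linspace(0.0, 1.0, 100001)
for i in range(n):
    vals = lam * grid**2 + lam * grid - q[i] * grid
    assert abs(grid[np.argmin(vals)] - x_f[i]) < 1e-4

# x_{g,p} = argmin_x <p, x> + ||x - b||^2; setting the gradient p + 2(x - b)
# to zero gives x = b - p/2
x_g = b - p / 2.0
print(np.allclose(2.0 * (x_g - b) + p, 0.0))  # prints True
```

The same pattern applies to the ℓ1 problem of Section 6.1, with the quadratic coefficient 2λ replaced by the smoothing parameter ρ and the box [0,1]^n by [0,1/10]^n.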
Figure 6.5: Iterations 50 and 100 of ISTA, FISTA and double smoothing (DS) for solving (P)

References

[1] H.H. Bauschke and P.L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics, Springer, 2011.

[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.

[3] A. Beck and M. Teboulle. Gradient-based algorithms with applications to signal recovery problems. In: Y. Eldar and D. Palomar (eds.), Convex Optimization in Signal Processing and Communications, Cambridge University Press, 2010.

[4] J.F. Bonnans and A. Shapiro. Perturbation Analysis of Optimization Problems. Springer Series in Operations Research and Financial Engineering, 2000.

[5] R.I. Boţ. Conjugate Duality in Convex Optimization. Lecture Notes in Economics and Mathematical Systems, Vol. 637, Springer-Verlag Berlin Heidelberg, 2010.

[6] R.I. Boţ, S.-M. Grad and G. Wanka. Duality in Vector Optimization. Springer-Verlag Berlin Heidelberg, 2009.

[7] R.I. Boţ and T. Hein. Iterative regularization with general penalty term - theory and application to L1- and TV-regularization. Inverse Problems, 28(10), 2012.
Figure 6.6: Improvement in signal-to-noise ratio (ISNR)

[8] R.I. Boţ and C. Hendrich. A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems. arXiv preprint, v1 [math.OC], 2012.

[9] I. Daubechies, M. Defrise and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.

[10] O. Devolder, F. Glineur and Y. Nesterov. Double smoothing technique for large-scale linearly constrained convex optimization. SIAM Journal on Optimization, 22(2):702-727, 2012.

[11] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.

[12] Y. Nesterov. Excessive gap technique in nonsmooth convex optimization. SIAM Journal on Optimization, 16(1):235-249, 2005.

[13] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.

[14] Y. Nesterov. Smoothing technique and its applications in semidefinite optimization. Mathematical Programming, 110(2):245-259, 2007.
More informationGEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect
GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, 2018 BORIS S. MORDUKHOVICH 1 and NGUYEN MAU NAM 2 Dedicated to Franco Giannessi and Diethard Pallaschke with great respect Abstract. In
More informationConvex analysis and profit/cost/support functions
Division of the Humanities and Social Sciences Convex analysis and profit/cost/support functions KC Border October 2004 Revised January 2009 Let A be a subset of R m Convex analysts may give one of two
More informationAn adaptive accelerated first-order method for convex optimization
An adaptive accelerated first-order method for convex optimization Renato D.C Monteiro Camilo Ortiz Benar F. Svaiter July 3, 22 (Revised: May 4, 24) Abstract This paper presents a new accelerated variant
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationOptimization methods
Optimization methods Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda /8/016 Introduction Aim: Overview of optimization methods that Tend to
More information