SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION


International Journal of Pure and Applied Mathematics, Volume 91, 2014
url: ijpam.eu

Wei Wang (1), Miao Chen (2), Lingling Zhang (3)
(1),(2) School of Mathematics, Liaoning Normal University, Dalian, P.R. China

Received: August 28, 2013. © 2014 Academic Publications, Ltd.

Abstract: Nonsmooth convex optimization problems form an important class of problems in operations research. Bundle methods are considered among the most efficient methods for solving nonsmooth optimization problems and have already been applied to many practical problems. In this paper, the problem of minimizing the sum of the maximum eigenvalue function and a general nonsmooth convex function is solved by a bundle method. By approximating the objective function, a proximal bundle method based on an approximate model is given, and we prove that the sequences generated by the algorithm converge to an optimal solution of the original problem. Finally, the algorithm is applied to a class of constrained maximum eigenvalue problems.

AMS Subject Classification: 15A18, 49J52, 52A41

Key Words: nonsmooth optimization, bundle method, maximum eigenvalue function

1. Introduction

Nonsmooth optimization problems arise in many fields of application, for example in economics (see [1]), mechanics (see [2]), engineering (see [3]) and optimal control (see [4]). They are generally difficult to solve.

The methods for nonsmooth optimization can be divided into two main classes: subgradient methods and bundle methods. In this paper we focus on bundle methods, specifically on their approximate model. Consider the following class of problems:

$$(P) \qquad \min_{y \in \mathbb{R}^m} \ \lambda_{\max}(A(y)) + g(y),$$

where $\lambda_{\max}(A(y))$ is the maximum eigenvalue function, $A$ is a linear operator from $\mathbb{R}^m$ to $S^n$, and $g(y)$ is a nonsmooth convex function. C. Helmberg and F. Oustry (see [5]) used a bundle method to solve a class of unconstrained maximum eigenvalue problems. C. Sagastizábal and M. Solodov (see [6]) adopted a bundle-filter method to deal with nonsmooth convex constrained optimization. Here, the problem of minimizing the sum of the maximum eigenvalue function and a general nonsmooth convex function is solved by a proximal bundle method, and the algorithm is then extended to a class of constrained maximum eigenvalue problems.

In order to obtain a minimizer of (P), we solve a sequence of subproblems. We consider a proximal bundle method for (P): the sequence of stability centers $\{x^k\}$ is a subsequence of $\{y^k\}$, where $\{y^k\}$ is the sequence of sample points used to define an approximate model of the objective function $F(y) := \lambda_{\max}(A(y)) + g(y)$. To generate the candidate point $y^k$, we construct an approximate model $\hat{F}_k(y)$ of $F(y)$.

The structure of this paper is as follows. Section 2 presents the approximate model of the objective function, built under the condition $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot))) \cap \mathrm{ri}(\mathrm{dom}\,g) \neq \emptyset$. The proximal bundle method is derived in Section 3 and its convergence is studied in Section 4. In Section 5, a class of constrained maximum eigenvalue problems is solved by the proximal bundle method. Throughout the paper, $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ denote the standard norm and inner product in Hilbert space.
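To fix ideas, here is a small Python sketch of one concrete instance of (P); the data $A_0, A_1, \dots, A_m$ are random symmetric matrices and the choice $g(y) = \|y\|_1$ is purely illustrative (any nonsmooth convex $g$ fits the framework).

```python
import numpy as np

# Hypothetical instance of (P): F(y) = lambda_max(A(y)) + g(y) with
# A(y) = A0 + y_1 A_1 + ... + y_m A_m and, for illustration, g(y) = ||y||_1.
rng = np.random.default_rng(0)
n, m = 5, 3

def sym(M):
    """Symmetrize a square matrix so that it lies in S^n."""
    return 0.5 * (M + M.T)

A0 = sym(rng.standard_normal((n, n)))
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]  # A_1, ..., A_m

def A(y):
    """Affine operator A(y) = A0 + sum_i y_i A_i from R^m to S^n."""
    return A0 + sum(yi * Ai for yi, Ai in zip(y, As))

def g(y):
    """A simple nonsmooth convex term; any convex g fits the framework."""
    return np.linalg.norm(y, 1)

def F(y):
    """Objective of (P): maximum eigenvalue plus g."""
    return np.linalg.eigvalsh(A(y))[-1] + g(y)
```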

2. The Approximate Model of the Objective Function

In this section we study the approximate model of the objective function. We first recall the subdifferential of the maximum eigenvalue function $\lambda_{\max}(X)$, $X \in S^n$.

Convexity is the key property enjoyed by the maximum eigenvalue function: it is the support function of the compact convex set $C_n := \{V \in S^n : V \succeq 0,\ \mathrm{tr}\,V = 1\}$ (see [7]), so that

$$\lambda_{\max}(X) = \max_{v \in \mathbb{R}^n,\ \|v\| = 1} v^T X v = \max_{V \in C_n} \langle V, X \rangle,$$

where $\langle\cdot,\cdot\rangle$ is the standard scalar product in $S^n$. The subdifferential $\partial\lambda_{\max}(X)$ is the face of $C_n$ exposed by $X$: let $r$ be the multiplicity of $\lambda_{\max}(X)$ and let $Q$ be an $n \times r$ matrix whose columns form an orthonormal basis of the corresponding eigenspace. Then

$$\partial\lambda_{\max}(X) = \{QZQ^T : Z \in C_r\}.$$

We consider the composite maximum eigenvalue function $\lambda_{\max}(A(x))$, where $A$ is a linear operator from $\mathbb{R}^m$ to $S^n$, $A(x) = A_0 + \mathcal{A}x$, with $\mathcal{A}$ the linear part and $\mathcal{A}^*$ its adjoint. It follows that

$$\partial[\lambda_{\max}(A(x))] = \mathcal{A}^* \partial\lambda_{\max}(A(x)) = \{\mathcal{A}^*(Q(A(x))ZQ(A(x))^T) : Z \in C_r\}.$$

Suppose $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot))) \cap \mathrm{ri}(\mathrm{dom}\,g) \neq \emptyset$. We construct the following approximate model of $F$:

$$\hat{F}_k(y) := \max_{i=1,\dots,k} \{F(x^i) + \langle m_i, y - x^i \rangle\}, \qquad m_i \in \partial F(x^i) = \partial[\lambda_{\max}(A(x^i)) + g(x^i)].$$

Choose $m_i = \mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + s_i$, where $\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) \in \partial\lambda_{\max}(A(x^i))$ and $s_i \in \partial g(x^i)$. Under the relative interior condition above, convex analysis (see [8]) gives

$$\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + s_i \in \partial\lambda_{\max}(A(x^i)) + \partial g(x^i) = \partial[\lambda_{\max}(A(x^i)) + g(x^i)].$$

Let the terms $e_i$ be the linearization errors at $x^k$,

$$e_i := F(x^k) - F(x^i) - \langle \mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + s_i,\ x^k - x^i \rangle.$$

With this notation, the approximate model takes the form

$$\hat{F}_k(y) = F(x^k) + \max_{i=1,\dots,k} \{-e_i + \langle \mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + s_i,\ y - x^k \rangle\}.$$

A computational sketch of this subgradient choice follows; the next section derives the proximal bundle algorithm based on this approximate model.
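Reusing the illustrative instance above, the following sketch computes one element of $\partial F(y)$: taking $Z = e_1 e_1^T \in C_r$ gives $V = qq^T$ for a unit leading eigenvector $q$, the adjoint acts by $(\mathcal{A}^* V)_i = \langle A_i, V \rangle$, and $\mathrm{sign}(y)$ is a subgradient of the illustrative $g = \|\cdot\|_1$.

```python
def subgrad_F(y):
    """Return one element of dF(y) = A* dlambda_max(A(y)) + dg(y).

    Taking Z = e1 e1^T in C_r yields V = q q^T for a unit leading
    eigenvector q, and the adjoint acts componentwise by
    (A* V)_i = <A_i, V> = q^T A_i q.  For g = ||.||_1, sign(y) is a
    subgradient.
    """
    _, Q = np.linalg.eigh(A(y))
    q = Q[:, -1]                                     # leading eigenvector
    eig_part = np.array([q @ Ai @ q for Ai in As])   # A*(q q^T)
    return eig_part + np.sign(y)
```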

3. The Proximal Bundle Method Algorithm

Algorithm (proximal bundle method).

Step 0. Let $\varepsilon \geq 0$ and $m \in (0,1)$ be given parameters. Choose $x^1$, call the black box with $y = x^1$ to obtain $F(x^1)$ and $\mathcal{A}^*(Q(A(x^1))ZQ(A(x^1))^T) + s_1 \in \partial(\lambda_{\max}(A(x^1)) + g(x^1))$, construct the model $\hat{F}_1$, and set $k = 1$, $\delta_1 = +\infty$.

Step 1. If $\delta_k \leq \varepsilon$, stop.

Step 2. Solve the quadratic program

$$(P_1) \qquad \min_{y \in \mathbb{R}^m} \ \hat{F}_k(y) + \tfrac{1}{2}\eta_k \|y - x^k\|^2,$$

whose solution is the candidate point $y^{k+1}$, and compute the nominal decrease

$$\delta_{k+1} := F(x^k) - \hat{F}_k(y^{k+1}) - \tfrac{1}{2}\eta_k \|y^{k+1} - x^k\|^2.$$

Step 3. Call the black box with $y = y^{k+1}$. If $F(x^k) - F(y^{k+1}) \geq m\,\delta_{k+1}$, set $x^{k+1} = y^{k+1}$; we call this a serious step. Otherwise set $x^{k+1} = x^k$; we call this a null step.

Step 4. Append $y^{k+1}$ to the bundle and construct $\hat{F}_{k+1}$. Set $k = k + 1$ and go to Step 1.

At Step 2 the candidate point $y^{k+1}$ can be obtained from the dual problem of $(P_1)$. This is guaranteed by the following theorem, in which, for brevity, we write $G_i := \mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + s_i$.

Theorem 1. Let $y^{k+1}$ be the unique solution to $(P_1)$ and assume $\eta_k > 0$. Then

$$y^{k+1} = x^k - \frac{1}{\eta_k} \sum_{i=1}^{np_k} \bar{\alpha}_i G_i,$$

where $\bar{\alpha} = (\bar{\alpha}_1, \dots, \bar{\alpha}_{np_k})$ is a solution to

$$(D_1) \qquad \min_{\alpha \in \Delta_k} \ \frac{1}{2\eta_k} \Big\| \sum_{i=1}^{np_k} \alpha_i G_i \Big\|^2 + \sum_{i=1}^{np_k} \alpha_i e_i, \qquad \Delta_k = \Big\{ \alpha : \alpha_i \in [0,1],\ \sum_{i=1}^{np_k} \alpha_i = 1 \Big\}.$$

In addition, the following relations hold:

(1) $\sum_{i=1}^{np_k} \bar{\alpha}_i G_i \in \partial\hat{F}_k(y^{k+1})$;

(2) $\delta_{k+1} = \varepsilon_k + \frac{1}{2\eta_k} \big\| \sum_{i=1}^{np_k} \bar{\alpha}_i G_i \big\|^2$, where $\varepsilon_k = \sum_{i=1}^{np_k} \bar{\alpha}_i e_i$;

(3) $\sum_{i=1}^{np_k} \bar{\alpha}_i G_i \in \partial_{\varepsilon_k} F(x^k)$.

Proof. Write $(P_1)$ as a QP with an extra scalar variable $r$: $(P_1)$ is equivalent to

$$(P_2) \qquad \min_{(y,r) \in \mathbb{R}^m \times \mathbb{R}} \ r + \tfrac{1}{2}\eta_k \|y - x^k\|^2 \quad \text{s.t.} \quad F(x^k) - e_i + \langle G_i, y - x^k \rangle \leq r, \quad i = 1, 2, \dots, np_k.$$

In view of strong convexity, the dual problem of $(P_1)$ reduces to $(D_1)$ above, and $y^{k+1} = x^k - \frac{1}{\eta_k} \sum_{i=1}^{np_k} \bar{\alpha}_i G_i$ is the solution of $(P_1)$; hence (1) holds. Because there is no duality gap, the optimal value of $(P_1)$ equals the optimal value of $(D_1)$, hence (2) holds. The relation $F(y) \geq \hat{F}_k(y) \geq \hat{F}_k(y^{k+1}) + \langle \sum_{i} \bar{\alpha}_i G_i,\ y - y^{k+1} \rangle$, combined with (2), gives the desired result (3).
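The loop below is a minimal sketch of the algorithm combined with the dual formula of Theorem 1, on the hypothetical instance defined earlier. It keeps $\eta_k$ fixed, performs no bundle compression, and uses a general-purpose SLSQP call in place of a dedicated QP solver for $(D_1)$; the parameter names are illustrative.

```python
from scipy.optimize import minimize

def solve_dual(G, e, eta):
    """Solve (D1): min (1/(2*eta))||sum_i a_i G_i||^2 + sum_i a_i e_i over the
    unit simplex.  G stacks the bundle subgradients G_i row-wise and e holds
    the linearization errors e_i; SLSQP stands in for a dedicated QP solver."""
    np_k = len(e)

    def obj(a):
        s = G.T @ a
        return (s @ s) / (2.0 * eta) + e @ a

    res = minimize(obj, np.full(np_k, 1.0 / np_k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * np_k,
                   constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}])
    return res.x

def proximal_bundle(y0, eta=1.0, m_par=0.1, eps=1e-6, max_iter=200):
    """Minimal sketch of the proximal bundle method of Section 3
    (fixed prox-parameter eta, no bundle compression)."""
    x = np.asarray(y0, dtype=float)
    pts, fvals = [x.copy()], [F(x)]           # sample points x^i and F(x^i)
    G = np.atleast_2d(subgrad_F(x))           # bundle subgradients G_i
    e = np.zeros(1)                           # linearization errors e_i
    for _ in range(max_iter):
        a = solve_dual(G, e, eta)
        agg = G.T @ a                         # aggregate subgradient
        delta = e @ a + (agg @ agg) / (2.0 * eta)   # nominal decrease, Thm 1(2)
        if delta <= eps:                      # Step 1: stopping test
            break
        y_new = x - agg / eta                 # candidate point (Theorem 1)
        f_new, g_new = F(y_new), subgrad_F(y_new)
        if F(x) - f_new >= m_par * delta:     # Step 3: descent test
            x = y_new                         # serious step: move the center
            # recompute e_i = F(x^k) - F(x^i) - <G_i, x^k - x^i> at new center
            e = np.array([F(x) - fi - gi @ (x - xi)
                          for fi, gi, xi in zip(fvals, G, pts)])
        pts.append(y_new.copy()); fvals.append(f_new)   # Step 4: enrich bundle
        G = np.vstack([G, g_new])
        e = np.append(e, F(x) - f_new - g_new @ (x - y_new))
    return x
```

A call such as `proximal_bundle(np.zeros(m))` then alternates serious and null steps until the nominal decrease $\delta_{k+1}$ falls below the tolerance, in line with the stopping test of Step 1.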

As the iterations proceed, the number of elements in the bundle increases. When the bundle becomes too big, it is necessary to compress it and clean the model. Let $np_{\max}$ be the maximal size of the bundle and $np_k$ its current size. The compression sub-algorithm to be appended at Step 4 is the following.

Step 4'. Let $n_a = |\{i \leq np_k : \bar{\alpha}_i > 0\}|$ be the number of active indices. If $n_a \leq np_{\max} - 1$, delete all inactive couples from the bundle, set $n_{\mathrm{left}} = n_a$, and define $np_{k+1} = n_{\mathrm{left}} + 1$. Otherwise, delete all inactive couples from the bundle, discard two or more couples $(G_i, e_i)$ and compress the discarded couples into a single aggregate couple, so that $n_{\mathrm{left}} \leq np_{\max} - 2$; define $np_{k+1} = n_{\mathrm{left}} + 2$. In both cases, append $(G_{np_{k+1}}, e_{np_{k+1}})$ to the bundle, with

$$e_{np_{k+1}} = \begin{cases} 0, & \text{if serious step}, \\ F(x^k) - F(y^{k+1}) - \langle G_{np_{k+1}},\ x^k - y^{k+1} \rangle, & \text{if null step}. \end{cases}$$

Construct $\hat{F}_{k+1}$, set $k = k + 1$, and go to Step 1.

Remark 1. When the algorithm reaches an iteration where the number $np_k$ becomes too big, all inactive couples are deleted from the bundle. If the remaining couples are still too many, the indispensable information of the active bundle elements is synthesized: using the couple $\big(\sum_{i=1}^{np_k} \bar{\alpha}_i G_i,\ \varepsilon_k\big)$ defined in Theorem 1, construct the aggregate linearization

$$\bar{F}_a(y) := F(x^k) - \varepsilon_k + \Big\langle \sum_{i=1}^{np_k} \bar{\alpha}_i G_i,\ y - x^k \Big\rangle.$$

For $\bar{F}_a(y)$ it holds that:

(1) $\bar{F}_a(y) = \hat{F}_k(y^{k+1}) + \big\langle \sum_{i=1}^{np_k} \bar{\alpha}_i G_i,\ y - y^{k+1} \big\rangle$;

(2) $\hat{F}_k(y) \geq \bar{F}_a(y)$ for all $y \in \mathbb{R}^m$.

Remark 2. When the maximum capacity is reached, for instance when $k = np_{\max}$, suppose we decide to discard the elements $x^1, x^2, \dots, x^t$ ($t < k$) from the bundle and to append the aggregate couple. The resulting model is

$$\hat{F}_{k+1}(y) = \max\Big\{ \max_{t+1 \leq i \leq k+1} \{F(x^i) + \langle G_i, y - x^i \rangle\},\ \bar{F}_a(y) \Big\}.$$

Then

$$\bar{F}_a(y) \leq \hat{F}_{k+1}(y) \leq F(y), \qquad \hat{F}_{k+1}(y) \geq F(x^{k+1}) + \langle G_{k+1},\ y - y^{k+1} \rangle.$$
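As a rough sketch of Step 4' and Remark 1 (simplified: when the bundle is still too large after deleting inactive couples, everything remaining is collapsed into the single aggregate couple, whereas the sub-algorithm above may retain some active elements), bundle compression can be grafted onto the loop of the previous section as follows; the helper name and threshold are illustrative.

```python
def compress_bundle(G, e, pts, fvals, a, x, np_max):
    """Step 4' sketch: keep active couples (alpha_i > 0); if still too many,
    replace the whole bundle by the aggregate couple (sum_i a_i G_i, eps_k)
    of Remark 1.  Helper and threshold choices are illustrative."""
    active = a > 1e-12
    if active.sum() <= np_max - 1:
        keep = active
        G, e = G[keep], e[keep]
        pts = [p for p, k in zip(pts, keep) if k]
        fvals = [f for f, k in zip(fvals, keep) if k]
    else:
        agg, eps_k = G.T @ a, e @ a       # aggregate subgradient and error
        # The aggregate linearization F_a(y) = F(x^k) - eps_k + <agg, y - x^k>
        # is stored as one synthetic couple: with x^i := x^k and
        # F(x^i) := F(x^k) - eps_k, the error-update formula of the main loop
        # recomputes exactly F(x') - F_a(x') at any later center x'.
        G, e = np.atleast_2d(agg), np.array([eps_k])
        pts, fvals = [x.copy()], [F(x) - eps_k]
    return G, e, pts, fvals
```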

4. Convergence Analysis

Next, we discuss the convergence result in two cases: (1) $\varepsilon > 0$; (2) $\varepsilon = 0$.

Case (1). When $\varepsilon > 0$, Theorem 2 below shows that if (P) has minimizers there is an index $k_{\mathrm{last}}$ for which $\delta_{k_{\mathrm{last}}} \leq \varepsilon$; therefore $x^{k_{\mathrm{last}}}$ is an approximate minimizer.

Theorem 2. Consider the algorithm and suppose it loops forever. Use the notation $\bar{F} := \lim_{k \in K_s} F(x^k)$ and suppose $\bar{F} > -\infty$ ($K_s$ is the set of indices $k$ at which a serious step is made). Then

$$0 \leq \sum_{k \in K_s} \delta_{k+1} \leq \frac{F(x^1) - \bar{F} + \varepsilon}{m}.$$

Proof. Note first that, since $\varepsilon \geq 0$ and the algorithm loops forever, the nominal decrease satisfies $\delta_{k+1} > 0$ for all $k \in K_s$. For $k \in K_s$ the descent test is satisfied, so $x^{k+1} = y^{k+1}$ and $F(x^k) - F(x^{k+1}) \geq m\,\delta_{k+1} \geq 0$, while between two consecutive serious indices the algorithm makes null steps only and the center does not move. Hence, for any $\bar{k} \in K_s$,

$$m \sum_{k \in K_s,\ k \leq \bar{k}} \delta_{k+1} \leq \sum_{k \in K_s,\ k \leq \bar{k}} \big(F(x^k) - F(x^{k+1})\big) = F(x^1) - F(x^{\bar{k}+1}) \leq F(x^1) - \bar{F} + \varepsilon.$$

Letting $\bar{k} \to \infty$ gives the desired result.

Case (2). When $\varepsilon = 0$: if $\delta_{k+1} = 0$ then, by Theorem 1(2) and (3), $0 \in \partial F(x^k)$ and the algorithm has found a solution $x^k$ of (P); if $\delta_{k+1} > 0$ for all $k$, the algorithm loops indefinitely. In this case there are two possibilities for the sequence of descent steps $\{x^k\}_{k \in K_s}$: either it has infinitely many elements, or there is an iteration $k_{\mathrm{last}}$ where the last serious step is made, i.e. $x^k = x^{k_{\mathrm{last}}}$ for all $k \geq k_{\mathrm{last}}$. We consider these two situations separately.

Theorem 3. Suppose the algorithm generates infinitely many descent steps $x^k$. Then either (P) has an empty solution set and $\{F(x^k)\} \to -\infty$, or (P) has minimizers, in which case the following holds:

(1) both $\{\delta_k\} \to 0$ and $\{\varepsilon_k\} \to 0$ as $k \in K_s$, $k \to \infty$;

(2) if $0 < \eta_{k+1} \leq \eta_k$ for all $k \in K_s$, then the sequence $\{x^k\}$ is bounded and converges to a minimizer of (P).

Proof. Note first that, since $\varepsilon = 0$ and the algorithm does not stop, $\delta_{k+1} > 0$ for all $k \in K_s$. If (P) has no solution, $\{F(x^k)\}$ goes to $-\infty$.

To see item (2), we first show that the sequence $\{x^k\}$ is minimizing for (P). Since $0 < \eta_{k+1} \leq \eta_k$ and $y^{k+1} = x^{k+1}$ at serious steps,

$$\langle x^{k+1} - x,\ \eta_{k+1}(x^{k+1} - x) \rangle \leq \langle x^{k+1} - x,\ \eta_k(x^{k+1} - x) \rangle,$$

and we notice from

$$x^{k+1} = x^k - \frac{1}{\eta_k} \sum_{i=1}^{np_k} \bar{\alpha}_i G_i \tag{1}$$

that

$$\eta_k \langle x^{k+1} - x,\ x^{k+1} - x \rangle = \eta_k \|x^k - x\|^2 + \frac{1}{\eta_k} \Big\| \sum_{i=1}^{np_k} \bar{\alpha}_i G_i \Big\|^2 - 2 \Big\langle x^k - x,\ \sum_{i=1}^{np_k} \bar{\alpha}_i G_i \Big\rangle.$$

Bounding the right-hand-side terms by Theorem 1(2) and 1(3), we obtain the relation

$$\eta_{k+1} \|x^{k+1} - x\|^2 \leq \eta_k \|x^k - x\|^2 + 2\big(F(x) - F(x^k) + \delta_{k+1}\big). \tag{2}$$

It follows that the sequence $\{x^k\}$ is minimizing for (P). To see that $\{x^k\}$ is bounded, let $x^*$ be a solution of (P), take $x = x^*$ in (2), and sum over $k \in K_s$; this yields the desired result.

Theorem 4. Suppose the algorithm generates a last serious step $x^{k_{\mathrm{last}}}$, followed by infinitely many null steps, and $0 < \eta_{k+1} \leq \eta_k$. Then the sequence $\{y^k\}$ converges to $x^{k_{\mathrm{last}}}$, and $x^{k_{\mathrm{last}}}$ is a minimizer of $F$.

Proof. For any $y \in \mathbb{R}^m$, consider the function

$$M_k(y) = \hat{F}_k(y) + \tfrac{1}{2}\eta_k \|y - x^{k_{\mathrm{last}}}\|^2.$$

Since $y^{k+1}$ is the solution of $(P_1)$, strong convexity gives $M_k(y) \geq M_k(y^{k+1}) + \tfrac{1}{2}\eta_k \|y - y^{k+1}\|^2$, and

$$M_k(y^{k+1}) \leq F(x^{k_{\mathrm{last}}}), \qquad k \geq k_{\mathrm{last}}. \tag{3}$$

Furthermore, the model recursion

$$\hat{F}_{k+1}(y) = \max\Big\{ \max_{t+1 \leq i \leq k+1} \{F(x^i) + \langle G_i, y - x^i \rangle\},\ \bar{F}_a(y) \Big\}$$

and the identity for $\bar{F}_a(y)$ in Remark 1 give the relation

$$\hat{F}_{k+1}(y) \geq \hat{F}_k(y^{k+1}) + \Big\langle \sum_{i=1}^{np_k} \bar{\alpha}_i G_i,\ y - y^{k+1} \Big\rangle. \tag{4}$$

Using inequality (4) written for $y = y^{k+2}$, we obtain

$$M_{k+1}(y^{k+2}) - M_k(y^{k+1}) \geq -\tfrac{1}{2}\eta_k \|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 + \tfrac{1}{2}\eta_k \|y^{k+2} - x^{k_{\mathrm{last}}}\|^2 + \Big\langle \sum_{i=1}^{np_k} \bar{\alpha}_i G_i,\ y^{k+2} - y^{k+1} \Big\rangle.$$

By expanding the difference of squares,

$$\|y^{k+2} - x^{k_{\mathrm{last}}}\|^2 - \|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 = \|y^{k+2} - y^{k+1}\|^2 + 2\langle y^{k+2} - y^{k+1},\ y^{k+1} - x^{k_{\mathrm{last}}} \rangle,$$

it follows that

$$M_{k+1}(y^{k+2}) \geq M_k(y^{k+1}) + \tfrac{1}{2}\eta_k \|y^{k+2} - y^{k+1}\|^2. \tag{5}$$

Since the increasing sequence $\{M_k(y^{k+1})\}$ is bounded from above by (3), it must converge. We now show that the sequence $\{y^{k+1}\}$ is bounded, with $\|y^{k+1} - y^k\| \to 0$. Using the identity

$$\eta_k (x^{k_{\mathrm{last}}} - y^{k+1}) = \sum_{i=1}^{np_k} \bar{\alpha}_i G_i$$

and the relation $M_k(y^{k+1}) = \hat{F}_k(y^{k+1}) + \tfrac{1}{2}\eta_k \|y^{k+1} - x^{k_{\mathrm{last}}}\|^2$, we see that

$$M_k(y^{k+1}) + \tfrac{1}{2}\eta_k \|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 = \hat{F}_k(y^{k+1}) + \eta_k \|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 = \bar{F}_a(x^{k_{\mathrm{last}}}) \leq F(x^{k_{\mathrm{last}}}).$$

It follows that the sequence $\{y^{k+1}\}$ must be bounded. In addition, by (5) and passing to the limit, we conclude that $\|y^{k+1} - y^k\| \to 0$. Moreover, since the linearization of $F$ at $y^k$ is one of the planes defining $\hat{F}_k$, convexity gives $\hat{F}_k(y^{k+1}) \geq F(y^k) + \langle s^k, y^{k+1} - y^k \rangle$ with $s^k \in \partial F(y^k)$, whence, by boundedness of $\{y^k\}$ and of the subgradients on bounded sets,

$$\limsup_{k \to \infty}\ \big(F(y^k) - \hat{F}_k(y^{k+1})\big) \leq 0. \tag{6}$$

From the bounded sequence $\{y^k\}$ extract a convergent subsequence $\{y^{k_i}\}$, with $y^{k_i} \to \bar{y}$ as $i \to \infty$. Since $\|y^{k+1} - y^k\| \to 0$, also $y^{k_i+1} \to \bar{y}$. Therefore

$$0 \leq F(y^{k_i+1}) - \hat{F}_{k_i}(y^{k_i+1}) = \big(F(y^{k_i+1}) - F(y^{k_i})\big) + \big(F(y^{k_i}) - \hat{F}_{k_i}(y^{k_i+1})\big) \to 0 \quad \text{as } i \to \infty,$$

which implies that $\hat{F}_{k_i}(y^{k_i+1}) \to F(\bar{y})$ as $i \to \infty$.

To show that $x^{k_{\mathrm{last}}}$ minimizes (P), recall that for all $k > k_{\mathrm{last}}$ the descent test is never satisfied. This means that $F(x^{k_{\mathrm{last}}}) - F(y^{k_i+1}) < m\,\delta_{k_i+1}$, so we obtain

$$0 \leq (1 - m)\,\delta_{k_i+1} \leq F(y^{k_i+1}) - \hat{F}_{k_i}(y^{k_i+1}).$$

Passing to the limit as $i \to \infty$ and using (6), we conclude that $\delta_{k_i+1} \to 0$. Hence it follows from Theorem 1(2),(3) that $x^{k_{\mathrm{last}}}$ minimizes (P).

Finally, we show that $\bar{y} = x^{k_{\mathrm{last}}}$. Using the facts that $F(\bar{y}) \geq F(x^{k_{\mathrm{last}}})$ and $\hat{F}_{k_i}(y) \leq F(y)$, together with (3), we have

$$F(\bar{y}) \geq \hat{F}_{k_i}(y^{k_i+1}) + \tfrac{1}{2}\eta_{k_{\mathrm{last}}} \|y^{k_i+1} - x^{k_{\mathrm{last}}}\|^2.$$

By the convergence $\hat{F}_{k_i}(y^{k_i+1}) \to F(\bar{y})$, we obtain in the limit

$$F(\bar{y}) \geq \lim_{i \to \infty} \Big( \hat{F}_{k_i}(y^{k_i+1}) + \tfrac{1}{2}\eta_{k_{\mathrm{last}}} \|y^{k_i+1} - x^{k_{\mathrm{last}}}\|^2 \Big) = F(\bar{y}) + \tfrac{1}{2}\eta_{k_{\mathrm{last}}} \|\bar{y} - x^{k_{\mathrm{last}}}\|^2,$$

an inequality that is possible only if $\bar{y} = x^{k_{\mathrm{last}}}$, and the proof is complete.

5. Bundle Method for a Constrained Maximum Eigenvalue Function

Consider the problem

$$(\tilde{P}) \qquad \min \ \lambda_{\max}(A(y)) \quad \text{s.t.} \quad By = c, \quad y \geq 0,$$

where $\lambda_{\max}(A(y))$ is the maximum eigenvalue function, $A$ is a linear operator from $\mathbb{R}^m$ to $S^n$, and $B \in \mathbb{R}^{m \times m}$. Then $(\tilde{P})$ is equivalent to

$$(\tilde{P}_1) \qquad \min_{y \in \mathbb{R}^m} \ \lambda_{\max}(A(y)) + \delta_\Omega(y),$$

where $\delta_\Omega(y)$ denotes the indicator function of the set $\Omega = \{y \in \mathbb{R}^m_+ : By = c\}$. Since $\delta_\Omega(y)$ is a nonsmooth convex function, it satisfies the assumptions of the proximal bundle method, which can therefore be applied to $(\tilde{P})$. Let $G(y) = \lambda_{\max}(A(y)) + \delta_\Omega(y)$ and suppose $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot))) \cap \mathrm{ri}(\mathrm{dom}\,\delta_\Omega) \neq \emptyset$. The approximate model of $G(y)$ for the subproblem is

$$\hat{G}_k(y) = G(x^k) + \max_{i=1,\dots,k} \{-e_i + \langle \mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i,\ y - x^k \rangle\},$$

where $\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) \in \partial\lambda_{\max}(A(x^i))$, $c_i \in \partial\delta_\Omega(x^i) = N_\Omega(x^i)$, and the terms $e_i$ are the linearization errors at $x^k$,

$$e_i := G(x^k) - G(x^i) - \langle \mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i,\ x^k - x^i \rangle.$$

Note that $N_\Omega(x^i) = \{v = v_1 + v_2 : v_1 \in N_{\Omega_1}(x^i),\ v_2 \in N_{\Omega_2}(x^i)\}$. Indeed, write

$$\Omega = \{y \in \mathbb{R}^m_+ : By = c\} = \Omega_1 \cap \Omega_2, \qquad \Omega_1 = \{y \in \mathbb{R}^m : y \geq 0\}, \quad \Omega_2 = \{y \in \mathbb{R}^m : By - c = 0\};$$

then

$$N_\Omega(x^i) = N_{\Omega_1 \cap \Omega_2}(x^i) = N_{\Omega_1}(x^i) + N_{\Omega_2}(x^i) = \{v \in \mathbb{R}^m : v \leq 0,\ \langle v, x^i \rangle = 0\} + \{B^T u : u \in \mathbb{R}^m\}.$$

Just as in the previous section, we can then solve the problem $(\tilde{P}_1)$.
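Assuming problem data B and c for $(\tilde{P})$, the sketch below (reusing the earlier instance) evaluates $G$ and one subgradient at a feasible point using the normal-cone decomposition just derived; the multiplier u and the value $-1$ placed on the active components are arbitrary choices among the many elements of $N_\Omega(y)$.

```python
def normal_cone_element(x, B, u=None, tol=1e-10):
    """One element of N_Omega(x) for Omega = {y >= 0 : By = c}, via
    N_Omega(x) = {v <= 0 : <v, x> = 0} + range(B^T).  The multiplier u and
    the value -1 on the active components are arbitrary choices."""
    if u is None:
        u = np.zeros(B.shape[0])
    v1 = np.where(np.abs(x) <= tol, -1.0, 0.0)  # normal cone of R^m_+ at x
    return v1 + B.T @ u

def G_oracle(y, B, c, tol=1e-8):
    """Value and one subgradient of G(y) = lambda_max(A(y)) + delta_Omega(y)."""
    if np.any(y < -tol) or np.linalg.norm(B @ y - c) > tol:
        return np.inf, None                     # indicator: +inf outside Omega
    w, Q = np.linalg.eigh(A(y))
    q = Q[:, -1]                                # unit leading eigenvector
    sg = np.array([q @ Ai @ q for Ai in As]) + normal_cone_element(y, B)
    return w[-1], sg
```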

Theorem 5. Let $y^{k+1}$ be the unique solution of the subproblem $(P_1)$ built from $\hat{G}_k$, and assume $\eta_k > 0$. Then

$$y^{k+1} = x^k - \frac{1}{\eta_k} \sum_{i=1}^{np_k} \bar{\alpha}_i \big(\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i\big),$$

where $\bar{\alpha} = (\bar{\alpha}_1, \dots, \bar{\alpha}_{np_k})$ is a solution to

$$(\tilde{D}_1) \qquad \min_{\alpha} \ \frac{1}{2\eta_k} \Big\| \sum_{i=1}^{np_k} \alpha_i \big(\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i\big) \Big\|^2 + \sum_{i=1}^{np_k} \alpha_i e_i \quad \text{s.t.} \quad \alpha_i \in [0,1],\ \sum_{i=1}^{np_k} \alpha_i = 1.$$

In addition, the following relations hold:

(1) $\sum_{i=1}^{np_k} \bar{\alpha}_i \big(\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i\big) \in \partial\hat{G}_k(y^{k+1})$;

(2) $\delta_{k+1} = \varepsilon_k + \frac{1}{2\eta_k} \big\| \sum_{i=1}^{np_k} \bar{\alpha}_i \big(\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i\big) \big\|^2$, where $\varepsilon_k = \sum_{i=1}^{np_k} \bar{\alpha}_i e_i$;

(3) $\sum_{i=1}^{np_k} \bar{\alpha}_i \big(\mathcal{A}^*(Q(A(x^i))ZQ(A(x^i))^T) + c_i\big) \in \partial_{\varepsilon_k} G(x^k)$.

The desired result is then obtained from the algorithm exactly as before.

References

[1] J. Outrata, M. Kočvara, J. Zowe, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results, Kluwer Academic Publishers, Dordrecht (1998).

[2] J.J. Moreau, P.D. Panagiotopoulos, G. Strang (Eds.), Topics in Nonsmooth Mechanics, Birkhäuser Verlag, Basel (1988).

[3] E.S. Mistakidis, G.E. Stavroulakis, Nonconvex Optimization in Mechanics: Smooth and Nonsmooth Algorithms, Heuristics and Engineering Applications by the F.E.M., Kluwer Academic Publishers, Dordrecht (1998).

[4] F.H. Clarke, Yu.S. Ledyaev, R.J. Stern, P.R. Wolenski, Nonsmooth Analysis and Control Theory, Springer, New York (1998).

[5] C. Helmberg, F. Oustry, Bundle methods to minimize the maximum eigenvalue function, In: Handbook of Semidefinite Programming, 27 (2000).

[6] C. Sagastizábal, M. Solodov, An infeasible bundle method for nonsmooth convex constrained optimization without a penalty function or filter, SIAM Journal on Optimization, 16 (2005).

[7] C. Lemaréchal, F. Oustry, Nonsmooth algorithms to solve semidefinite programs, SIAM (Society for Industrial and Applied Mathematics), Philadelphia (2000).

[8] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, New Jersey (1970).
