On the acceleration of augmented Lagrangian method for linearly constrained optimization
Bingsheng He and Xiaoming Yuan

October, 2010

Abstract. The classical augmented Lagrangian method (ALM) plays a fundamental role in the algorithmic development of constrained optimization. In this paper, we mainly show that Nesterov's influential acceleration techniques can be applied to accelerate ALM, thus yielding an accelerated ALM whose iteration-complexity is O(1/k^2) for linearly constrained convex programming. As a by-product, we also show easily that the convergence rate of the original ALM is O(1/k).

Keywords. Convex programming, augmented Lagrangian method, acceleration.

1 Introduction

The classical augmented Lagrangian method (ALM), also well known as the method of multipliers, has been playing a fundamental role in the algorithmic development of constrained optimization ever since its presence in [2] and [9]. The existing literature on ALM is too extensive to be listed here, and we only refer to [1, 8] for its comprehensive study. In this paper, we restrict our discussion to convex minimization with linear equality constraints:

(P)  min { f(x) | Ax = b, x ∈ X },    (1.1)

where f(x): R^n → R is a differentiable convex function, A ∈ R^{m×n}, b ∈ R^m, and X is a closed convex set in R^n. Throughout we assume that the solution set of (1.1), denoted by X*, is not empty. Note that the Lagrange function of the problem (1.1) is

L(x, λ) = f(x) − λ^T (Ax − b),    (1.2)

where λ ∈ R^m is the Lagrange multiplier. Then, the dual problem of (1.1) is

(D)  max_{x ∈ X, λ ∈ R^m} L(x, λ)  s.t.  (x' − x)^T ∇_x L(x, λ) ≥ 0, ∀ x' ∈ X.    (1.3)

We denote the solution set of (1.3) by X* × Λ*. As analyzed in [1], the ALM merges the penalty idea with the primal-dual and Lagrangian philosophy, and each of its iterations consists of the task of minimizing the augmented Lagrangian function of (1.1) and the task of updating the Lagrange multiplier. More specifically, starting with λ^0 ∈ R^m, the k-th iteration of ALM for (1.1)
is

    x^{k+1} = Argmin { f(x) − (λ^k)^T (Ax − b) + (β/2) ‖Ax − b‖^2 | x ∈ X },
    λ^{k+1} = λ^k − β (Ax^{k+1} − b),    (1.4)

¹ Department of Mathematics and National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210093, China. This author was supported by the NSFC Grant 10971095 and the NSF of Jiangsu Province Grant BK. Email: hebma@nju.edu.cn

² Department of Mathematics, Hong Kong Baptist University, Hong Kong, China. This author was supported in part by the HKRGC. Email: xmyuan@hkbu.edu.hk
where β > 0 is the penalty parameter for the violation of the linear constraints. We refer to [10] for the relevance of ALM to the classical proximal point algorithm, which was originally proposed in [4] and concretely developed in [11]. Note that one significant difference of ALM from penalty methods is that the penalty parameter β can be fixed; it is not necessary to force it to infinity, see e.g. [8]. In this paper, we use a symmetric positive definite matrix H_k to denote the penalty parameter, indicating the eligibility of adjusting the values of this parameter dynamically, even though the specific strategy of this adjustment will not be addressed. More specifically, let {H_k} be a given series of m×m symmetric positive definite matrices satisfying 0 ≺ H_k ⪯ H_{k+1} for all k ≥ 0. Then, the k-th iteration of ALM with matrix penalty parameter for (1.1) can be written as

    x^{k+1} = Argmin { f(x) − (λ^k)^T (Ax − b) + (1/2) ‖Ax − b‖_{H_k}^2 | x ∈ X },
    λ^{k+1} = λ^k − H_k (Ax^{k+1} − b).    (1.5)

Inspired by the attractive analysis of iteration-complexity for some gradient methods initiated mainly by Nesterov (see e.g. [5, 6, 7]), in this paper we are interested in analyzing the iteration-complexity of the ALM and discussing the possibility of accelerating ALM with Nesterov's acceleration schemes. More specifically, in Section 2 we first show that the iteration-complexity of the ALM is O(1/k) in terms of the objective residual of the associated Lagrange function of (1.1). Then, in Section 3, with the acceleration scheme in [6], we propose an accelerated ALM whose iteration-complexity is O(1/k^2). Finally, some conclusions are given in Section 4.

2 The complexity of ALM

In this section, we mainly show that the iteration-complexity of the classical ALM is O(1/k) in terms of the objective residual of the associated Lagrange function L(x, λ) defined in (1.2).
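To make the scheme concrete before the analysis, the ALM iteration can be run numerically. The following is a minimal sketch, not the authors' code: it assumes a hypothetical quadratic objective f(x) = (1/2) x^T Q x − c^T x with X = R^n and a fixed scalar penalty H_k ≡ βI, so that the x-subproblem of (1.5) reduces to a linear system; the data Q, c, A, b are randomly generated for illustration.

```python
import numpy as np

# Hypothetical test problem (not from the paper): minimize 0.5*x'Qx - c'x
# subject to Ax = b, with X = R^n, so the x-subproblem is a linear solve.
rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)              # symmetric positive definite Hessian
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

beta = 10.0                          # fixed penalty, playing the role H_k = beta*I
lam = np.zeros(m)                    # initial multiplier lambda^0
for k in range(1000):
    # x^{k+1} minimizes f(x) - lam'(Ax - b) + (beta/2)||Ax - b||^2;
    # its optimality condition is (Q + beta*A'A) x = c + A'lam + beta*A'b.
    x = np.linalg.solve(Q + beta * (A.T @ A), c + A.T @ lam + beta * (A.T @ b))
    lam = lam - beta * (A @ x - b)   # multiplier update of (1.5)

print(np.linalg.norm(A @ x - b))     # primal feasibility residual
```

On such a strongly convex quadratic the feasibility residual decays geometrically; the O(1/k) bound established below is the worst-case guarantee for general differentiable convex f.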
Before that, we need to justify the rationale of estimating the convergence rate of ALM in terms of the objective residual of L(x, λ), and prove some properties of the sequence generated by ALM which are critical for the complexity analysis to be addressed later. According to (1.3), a pair (x, λ) ∈ X × R^m is dual feasible if and only if

(x' − x)^T ( ∇f(x) − A^T λ ) ≥ 0, ∀ x' ∈ X.    (2.1)

Note that the minimization task regarding x^{k+1} in the ALM scheme (1.5) is characterized by the following variational inequality:

(x − x^{k+1})^T { ∇f(x^{k+1}) − A^T λ^k + A^T H_k (Ax^{k+1} − b) } ≥ 0, ∀ x ∈ X.

Therefore, substituting the λ^{k+1}-related equation of (1.5) into the above variational inequality, we have

(x − x^{k+1})^T { ∇f(x^{k+1}) − A^T λ^{k+1} } ≥ 0, ∀ x ∈ X.    (2.2)

In other words, the pair (x^{k+1}, λ^{k+1}) generated by the k-th iteration of ALM is feasible for the dual problem (1.3). On the other hand, a solution (x*, λ*) ∈ X* × Λ* of (1.3) is also feasible. We thus have that the sequence {L(x*, λ*) − L(x^{k+1}, λ^{k+1})} is non-negative. This explains the rationale of estimating the convergence rate of ALM in terms of the objective residual of L(x, λ). Now, we present some properties of the sequence generated by the ALM in the following lemmas. Although their proofs are elementary, these lemmas are critical for deriving the main results on iteration-complexity later.

Lemma 2.1. For given λ^k, let (x^{k+1}, λ^{k+1}) be generated by the ALM (1.5). Then, for any feasible solution (x, λ) of the dual problem (1.3), we have

L(x^{k+1}, λ^{k+1}) − L(x, λ) ≥ ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2 + (λ − λ^k)^T H_k^{-1} (λ^k − λ^{k+1}).    (2.3)
Proof. First, using the convexity of f we obtain

L(x^{k+1}, λ^{k+1}) − L(x, λ) = f(x^{k+1}) − f(x) + λ^T (Ax − b) − (λ^{k+1})^T (Ax^{k+1} − b)
  ≥ (x^{k+1} − x)^T ∇f(x) + λ^T (Ax − b) − (λ^{k+1})^T (Ax^{k+1} − b).    (2.4)

Since (x, λ) is a feasible solution of the dual problem and x^{k+1} ∈ X, setting x' = x^{k+1} in (2.1) we obtain

(x^{k+1} − x)^T ∇f(x) ≥ (x^{k+1} − x)^T A^T λ = λ^T A (x^{k+1} − x).

Substituting the last inequality into the right-hand side of (2.4), we obtain

L(x^{k+1}, λ^{k+1}) − L(x, λ) ≥ λ^T A (x^{k+1} − x) + λ^T (Ax − b) − (λ^{k+1})^T (Ax^{k+1} − b)
  = (λ − λ^{k+1})^T (Ax^{k+1} − b)
  = (λ − λ^{k+1})^T H_k^{-1} (λ^k − λ^{k+1})    (using (1.5))
  = ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2 + (λ − λ^k)^T H_k^{-1} (λ^k − λ^{k+1}).

The assertion of this lemma is proved.

Lemma 2.2. For given λ^k, let (x^{k+1}, λ^{k+1}) be generated by the ALM (1.5). Then we have

‖λ^{k+1} − λ*‖_{H_k^{-1}}^2 ≤ ‖λ^k − λ*‖_{H_k^{-1}}^2 − ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2 − 2 ( L(x*, λ*) − L(x^{k+1}, λ^{k+1}) ), ∀ (x*, λ*) ∈ X* × Λ*.    (2.5)

Proof. Since (x*, λ*) is dual feasible, by setting (x, λ) = (x*, λ*) in (2.3) we obtain

(λ* − λ^k)^T H_k^{-1} (λ^k − λ^{k+1}) ≤ −‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2 − ( L(x*, λ*) − L(x^{k+1}, λ^{k+1}) ).

Using the above inequality and by a simple manipulation, we obtain

‖λ^{k+1} − λ*‖_{H_k^{-1}}^2 = ‖(λ^k − λ*) − (λ^k − λ^{k+1})‖_{H_k^{-1}}^2
  = ‖λ^k − λ*‖_{H_k^{-1}}^2 − 2 (λ^k − λ*)^T H_k^{-1} (λ^k − λ^{k+1}) + ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2
  ≤ ‖λ^k − λ*‖_{H_k^{-1}}^2 − ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2 − 2 ( L(x*, λ*) − L(x^{k+1}, λ^{k+1}) ).

The assertion of this lemma is proved.

The following theorem implies the global convergence of ALM (1.5).

Theorem 2.3. Let (x^{k+1}, λ^{k+1}) be generated by the ALM (1.5). Then for any k ≥ 1, we have

L(x^{k+1}, λ^{k+1}) ≥ L(x^k, λ^k) + ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2    (2.6)

and

‖λ^{k+1} − λ*‖_{H_{k+1}^{-1}}^2 ≤ ‖λ^k − λ*‖_{H_k^{-1}}^2 − ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2.    (2.7)

Moreover, if H_k ≡ H, we have

‖Ax^{k+1} − b‖_H^2 ≤ ‖Ax^k − b‖_H^2 − ‖A(x^k − x^{k+1})‖_H^2.    (2.8)

Proof. The first assertion (2.6) of this theorem is derived immediately from (2.3) by setting (x, λ) = (x^k, λ^k). Since L(x^{k+1}, λ^{k+1}) ≤ L(x*, λ*), it follows from (2.5) that

‖λ^{k+1} − λ*‖_{H_k^{-1}}^2 ≤ ‖λ^k − λ*‖_{H_k^{-1}}^2 − ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2.

Because H_{k+1}^{-1} ⪯ H_k^{-1}, the second assertion (2.7) follows from the last inequality directly.
Now, we start to prove the third assertion (2.8). Setting x = x^k in (2.2), we obtain

(x^k − x^{k+1})^T ( ∇f(x^{k+1}) − A^T λ^{k+1} ) ≥ 0.

Similarly, we have

(x^{k+1} − x^k)^T ( ∇f(x^k) − A^T λ^k ) ≥ 0.

Adding the above two inequalities and using the monotonicity of ∇f, we obtain

(x^k − x^{k+1})^T A^T (λ^k − λ^{k+1}) ≥ 0.

By using λ^{k+1} = λ^k − H (Ax^{k+1} − b) (recall the assumption H_k ≡ H), the last inequality implies that

(x^k − x^{k+1})^T A^T H (Ax^{k+1} − b) ≥ 0.

Using the above inequality in the identity

‖Ax^k − b‖_H^2 = ‖Ax^{k+1} − b‖_H^2 + ‖A(x^k − x^{k+1})‖_H^2 + 2 (Ax^{k+1} − b)^T H A (x^k − x^{k+1}),

we obtain

‖Ax^k − b‖_H^2 ≥ ‖Ax^{k+1} − b‖_H^2 + ‖A(x^k − x^{k+1})‖_H^2,

and thus the third assertion (2.8) is proved.

Remark 2.4. The inequality (2.7) essentially implies the global convergence of the ALM (1.5) with dynamically-adjusted matrix penalty parameter. In fact, it follows from (2.7) that

Σ_{l=0}^{∞} ‖λ^l − λ^{l+1}‖_{H_l^{-1}}^2 ≤ ‖λ^0 − λ*‖_{H_0^{-1}}^2,

which instantly implies that

lim_{k→∞} ‖λ^k − λ^{k+1}‖_{H_k^{-1}}^2 = 0.

In the following we show that the sequence of function values {L(x^k, λ^k)} converges to the optimal value L(x*, λ*) at a rate of convergence that is no worse than O(1/k). Hence, the iteration-complexity of the ALM (1.5) is shown to be O(1/k) in terms of the objective residual of the Lagrange function L(x, λ).

Theorem 2.5. Let (x^k, λ^k) be generated by the ALM (1.5). Then, for any k ≥ 1, we have

L(x*, λ*) − L(x^k, λ^k) ≤ ‖λ^0 − λ*‖_{H_0^{-1}}^2 / (2k), ∀ (x*, λ*) ∈ X* × Λ*.    (2.9)

Proof. Due to H_{j+1} ⪰ H_j, it follows from Lemma 2.2 that, for all j ≥ 0, we have

2 ( L(x^{j+1}, λ^{j+1}) − L(x*, λ*) ) ≥ ‖λ^{j+1} − λ*‖_{H_{j+1}^{-1}}^2 − ‖λ^j − λ*‖_{H_j^{-1}}^2 + ‖λ^j − λ^{j+1}‖_{H_j^{-1}}^2, ∀ (x*, λ*) ∈ X* × Λ*.

Using the fact that L(x^{j+1}, λ^{j+1}) ≤ L(x*, λ*) and summing the above inequality over j = 0, …, k−1, we obtain

2 Σ_{j=0}^{k−1} ( L(x^{j+1}, λ^{j+1}) − L(x*, λ*) ) ≥ −‖λ^0 − λ*‖_{H_0^{-1}}^2 + Σ_{j=0}^{k−1} ‖λ^j − λ^{j+1}‖_{H_j^{-1}}^2.    (2.10)

By using Lemma 2.1 for j (1 ≤ j ≤ k−1) and (x, λ) = (x^j, λ^j), we get

L(x^{j+1}, λ^{j+1}) − L(x^j, λ^j) ≥ ‖λ^j − λ^{j+1}‖_{H_j^{-1}}^2.
Multiplying the last inequality by 2j and summing over j = 1, …, k−1, it follows that

2 Σ_{j=1}^{k−1} j ( L(x^{j+1}, λ^{j+1}) − L(x^j, λ^j) ) ≥ 2 Σ_{j=1}^{k−1} j ‖λ^j − λ^{j+1}‖_{H_j^{-1}}^2,

which can be simplified into

2 ( (k−1) L(x^k, λ^k) − Σ_{j=1}^{k−1} L(x^j, λ^j) ) ≥ 2 Σ_{j=1}^{k−1} j ‖λ^j − λ^{j+1}‖_{H_j^{-1}}^2.    (2.11)

Adding (2.10) and (2.11), we get

2k ( L(x^k, λ^k) − L(x*, λ*) ) ≥ −‖λ^0 − λ*‖_{H_0^{-1}}^2 + Σ_{j=0}^{k−1} (2j+1) ‖λ^j − λ^{j+1}‖_{H_j^{-1}}^2 ≥ −‖λ^0 − λ*‖_{H_0^{-1}}^2,

and hence it follows that

L(x*, λ*) − L(x^k, λ^k) ≤ ‖λ^0 − λ*‖_{H_0^{-1}}^2 / (2k).

The proof is complete.

3 An accelerated ALM

In this section, we show that the classical ALM (1.5) can be accelerated by the influential acceleration techniques initiated by Nesterov in [6]. As a result, an accelerated ALM with the convergence rate O(1/k^2) for solving (1.3) is proposed. For the convenience of presenting the accelerated ALM, from now on we use (x̃^k, λ̃^k), rather than (x^{k+1}, λ^{k+1}), to denote the iterate generated by the ALM scheme (1.5). Namely, with the given λ^k, the new iterate generated by ALM for (1.1) is (x̃^k, λ̃^k):

    x̃^k = Argmin { f(x) − (λ^k)^T (Ax − b) + (1/2) ‖Ax − b‖_{H_k}^2 | x ∈ X },
    λ̃^k = λ^k − H_k (Ax̃^k − b).    (3.1)

Accordingly, Lemmas 2.1 and 2.2 can be rewritten into the following lemmas.

Lemma 3.1. For given λ^k, let (x̃^k, λ̃^k) be generated by the ALM (3.1). Then, for any feasible solution (x, λ) of the dual problem (1.3), we have

L(x̃^k, λ̃^k) − L(x, λ) ≥ ‖λ^k − λ̃^k‖_{H_k^{-1}}^2 + (λ − λ^k)^T H_k^{-1} (λ^k − λ̃^k).    (3.2)

Lemma 3.2. For given λ^k, let (x̃^k, λ̃^k) be generated by the ALM (3.1). Then we have

‖λ̃^k − λ*‖_{H_k^{-1}}^2 ≤ ‖λ^k − λ*‖_{H_k^{-1}}^2 − ‖λ^k − λ̃^k‖_{H_k^{-1}}^2 − 2 ( L(x*, λ*) − L(x̃^k, λ̃^k) ), ∀ (x*, λ*) ∈ X* × Λ*.    (3.3)

Then, the accelerated ALM for (1.1) is as follows.

An accelerated augmented Lagrangian method (AALM)

Step 0. Take λ^0 ∈ R^m. Set λ^1 = λ̃^0 = λ^0 and t_1 = 1.

Step k (k ≥ 1). Let (x̃^k, λ̃^k) be generated by the original ALM (3.1). Set

t_{k+1} = ( 1 + √(1 + 4 t_k^2) ) / 2,    (3.4a)

and

λ^{k+1} = λ̃^k + ( (t_k − 1)/t_{k+1} ) (λ̃^k − λ̃^{k−1}) + ( t_k/t_{k+1} ) (λ̃^k − λ^k).    (3.4b)
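The two steps of the scheme above can be sketched numerically. This is a minimal sketch, not the authors' code: it reuses the hypothetical quadratic test problem f(x) = (1/2) x^T Q x − c^T x with X = R^n and H_k ≡ βI so that the inner ALM step (3.1) is a linear solve; the data Q, c, A, b and the helper alm_step are illustrative.

```python
import numpy as np

# Hypothetical quadratic test problem (not from the paper):
# minimize 0.5*x'Qx - c'x subject to Ax = b, with H_k = beta * I.
rng = np.random.default_rng(1)
n, m = 6, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
beta = 10.0

def alm_step(lam):
    """One pass of the inner ALM (3.1): returns (x_tilde, lam_tilde)."""
    x = np.linalg.solve(Q + beta * (A.T @ A), c + A.T @ lam + beta * (A.T @ b))
    return x, lam - beta * (A @ x - b)

# Step 0: lambda^1 = tilde-lambda^0 = lambda^0 and t_1 = 1.
lam = np.zeros(m)
lam_tilde_prev = lam.copy()
t = 1.0
for k in range(1000):
    x, lam_tilde = alm_step(lam)                        # inner ALM step (3.1)
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # update (3.4a)
    lam = (lam_tilde
           + ((t - 1.0) / t_next) * (lam_tilde - lam_tilde_prev)
           + (t / t_next) * (lam_tilde - lam))          # extrapolation (3.4b)
    lam_tilde_prev, t = lam_tilde, t_next

print(np.linalg.norm(A @ x - b))                        # feasibility residual
```

The extra cost per iteration over the plain ALM is only the vector extrapolation (3.4b), while the guaranteed rate improves from O(1/k) to O(1/k^2).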
We first present some lemmas before proving the main result.

Lemma 3.3. The sequence {t_k} generated by (3.4a) with t_1 = 1 satisfies

t_k ≥ (k + 1)/2, ∀ k ≥ 1.    (3.5)

Proof. Elementary by induction.

For the coming analysis, we use the notations

v_k := L(x*, λ*) − L(x̃^k, λ̃^k)  and  u_k := t_k (2λ̃^k − λ^k − λ̃^{k−1}) + λ̃^{k−1} − λ*.    (3.6)

Lemma 3.4. The sequences {λ^k} and {λ̃^k} generated by the proposed AALM satisfy

4 t_k^2 v_k ≥ 4 t_{k+1}^2 v_{k+1} + ‖u_{k+1}‖_{H_{k+1}^{-1}}^2 − ‖u_k‖_{H_{k+1}^{-1}}^2, ∀ k ≥ 1,    (3.7)

where v_k and u_k are defined in (3.6).

Proof. By using Lemma 3.1 for k+1, setting (x, λ) = (x̃^k, λ̃^k) and (x, λ) = (x*, λ*), we get

L(x̃^{k+1}, λ̃^{k+1}) − L(x̃^k, λ̃^k) ≥ ‖λ^{k+1} − λ̃^{k+1}‖_{H_{k+1}^{-1}}^2 + (λ̃^k − λ^{k+1})^T H_{k+1}^{-1} (λ^{k+1} − λ̃^{k+1})

and

L(x̃^{k+1}, λ̃^{k+1}) − L(x*, λ*) ≥ ‖λ^{k+1} − λ̃^{k+1}‖_{H_{k+1}^{-1}}^2 + (λ* − λ^{k+1})^T H_{k+1}^{-1} (λ^{k+1} − λ̃^{k+1}),

respectively. Using the definition of v_k, the last two inequalities can be written as

v_k − v_{k+1} ≥ ‖λ^{k+1} − λ̃^{k+1}‖_{H_{k+1}^{-1}}^2 + (λ̃^k − λ^{k+1})^T H_{k+1}^{-1} (λ^{k+1} − λ̃^{k+1})    (3.8)

and

−v_{k+1} ≥ ‖λ^{k+1} − λ̃^{k+1}‖_{H_{k+1}^{-1}}^2 + (λ* − λ^{k+1})^T H_{k+1}^{-1} (λ^{k+1} − λ̃^{k+1}).    (3.9)

To get a relation between v_k and v_{k+1}, we multiply (3.8) by (t_{k+1} − 1) and add it to (3.9):

(t_{k+1} − 1) v_k − t_{k+1} v_{k+1} ≥ t_{k+1} ‖λ^{k+1} − λ̃^{k+1}‖_{H_{k+1}^{-1}}^2 + ( (t_{k+1} − 1) λ̃^k + λ* − t_{k+1} λ^{k+1} )^T H_{k+1}^{-1} (λ^{k+1} − λ̃^{k+1}).

Multiplying the last inequality by t_{k+1} and using

t_k^2 = t_{k+1}^2 − t_{k+1}  (and thus t_{k+1} = (1 + √(1 + 4 t_k^2))/2 as in (3.4a)),

it yields

t_k^2 v_k − t_{k+1}^2 v_{k+1} ≥ ‖t_{k+1} (λ^{k+1} − λ̃^{k+1})‖_{H_{k+1}^{-1}}^2 + ( t_{k+1} (λ^{k+1} − λ̃^{k+1}) )^T H_{k+1}^{-1} ( (t_{k+1} − 1) λ̃^k + λ* − t_{k+1} λ^{k+1} )
  = ( t_{k+1} (λ̃^{k+1} − λ^{k+1}) )^T H_{k+1}^{-1} ( t_{k+1} λ̃^{k+1} − (t_{k+1} − 1) λ̃^k − λ* ).    (3.10)

Applying the identity

(b − a)^T H_{k+1}^{-1} (b − c) = (1/4) ‖2b − a − c‖_{H_{k+1}^{-1}}^2 − (1/4) ‖a − c‖_{H_{k+1}^{-1}}^2

(since x^T y = (1/4) ‖x + y‖^2 − (1/4) ‖x − y‖^2) to the right-hand side of (3.10) with

a := t_{k+1} λ^{k+1},  b := t_{k+1} λ̃^{k+1},  c := (t_{k+1} − 1) λ̃^k + λ*,

we get

t_k^2 v_k − t_{k+1}^2 v_{k+1} ≥ (1/4) ‖t_{k+1} (2λ̃^{k+1} − λ^{k+1}) − (t_{k+1} − 1) λ̃^k − λ*‖_{H_{k+1}^{-1}}^2 − (1/4) ‖t_{k+1} λ^{k+1} − (t_{k+1} − 1) λ̃^k − λ*‖_{H_{k+1}^{-1}}^2.
Using the notation u_k := t_k (2λ̃^k − λ^k − λ̃^{k−1}) + λ̃^{k−1} − λ* (see (3.6)), the last inequality can be written as

4 t_k^2 v_k ≥ 4 t_{k+1}^2 v_{k+1} + ‖u_{k+1}‖_{H_{k+1}^{-1}}^2 − ‖t_{k+1} (λ^{k+1} − λ̃^k) + λ̃^k − λ*‖_{H_{k+1}^{-1}}^2.    (3.11)

In order to write the inequality (3.11) in the form (3.7), we need only to set

t_{k+1} (λ^{k+1} − λ̃^k) + λ̃^k − λ* = t_k (2λ̃^k − λ^k − λ̃^{k−1}) + λ̃^{k−1} − λ*.

From the last equality we obtain

λ^{k+1} = λ̃^k + ( (t_k − 1)/t_{k+1} ) (λ̃^k − λ̃^{k−1}) + ( t_k/t_{k+1} ) (λ̃^k − λ^k).

This is just the form (3.4b) in the accelerated multi-step version of the ALM, and the lemma is proved.

Corollary 3.5. Let v_k and u_k be defined in (3.6). Then, we have

4 t_k^2 v_k ≤ 4 t_1^2 v_1 + ‖u_1‖_{H_1^{-1}}^2, ∀ k ≥ 1.    (3.12)

Proof. Because H_{k+1}^{-1} ⪯ H_k^{-1}, from (3.7) we obtain

4 t_k^2 v_k ≥ 4 t_{k+1}^2 v_{k+1} + ‖u_{k+1}‖_{H_{k+1}^{-1}}^2 − ‖u_k‖_{H_k^{-1}}^2.

Since {v_k} is a non-negative sequence, the last inequality implies (3.12) immediately.

Now, we are ready to show that the iteration-complexity of the proposed AALM is O(1/k^2).

Theorem 3.6. Let {λ̃^k} and {λ^k} be generated by the proposed AALM. Then, for any k ≥ 1, we have

L(x*, λ*) − L(x̃^k, λ̃^k) ≤ ‖λ^0 − λ*‖_{H_1^{-1}}^2 / (k + 1)^2, ∀ (x*, λ*) ∈ X* × Λ*.    (3.13)

Proof. Using the definition of v_k in (3.6), it follows from (3.12) that

L(x*, λ*) − L(x̃^k, λ̃^k) = v_k ≤ ( 4 t_1^2 v_1 + ‖u_1‖_{H_1^{-1}}^2 ) / (4 t_k^2).

Combining with the fact t_k ≥ (k + 1)/2 (see (3.5)), it yields

L(x*, λ*) − L(x̃^k, λ̃^k) ≤ ( 4 t_1^2 v_1 + ‖u_1‖_{H_1^{-1}}^2 ) / (k + 1)^2.    (3.14)

Since t_1 = 1 and λ̃^0 = λ^1, using the definition of u_k given in (3.6), we have

4 t_1^2 v_1 = 4 v_1 = 4 ( L(x*, λ*) − L(x̃^1, λ̃^1) )  and  ‖u_1‖_{H_1^{-1}}^2 = ‖2λ̃^1 − λ^1 − λ*‖_{H_1^{-1}}^2.    (3.15)

By using (3.3), we have

4 ( L(x*, λ*) − L(x̃^1, λ̃^1) ) ≤ 2 ‖λ^1 − λ*‖_{H_1^{-1}}^2 − 2 ‖λ̃^1 − λ*‖_{H_1^{-1}}^2 − 2 ‖λ^1 − λ̃^1‖_{H_1^{-1}}^2.    (3.16)

Applying the identity

2 ‖a − c‖^2 − 2 ‖b − c‖^2 − 2 ‖b − a‖^2 = ‖a − c‖^2 − ‖(b − a) + (b − c)‖^2

to the right-hand side of (3.16) with

a := λ^1,  b := λ̃^1,  c := λ*,
we get

4 ( L(x*, λ*) − L(x̃^1, λ̃^1) ) ≤ ‖λ^1 − λ*‖_{H_1^{-1}}^2 − ‖2λ̃^1 − λ^1 − λ*‖_{H_1^{-1}}^2.    (3.17)

Consequently, it follows from (3.15) and (3.17) that

4 t_1^2 v_1 + ‖u_1‖_{H_1^{-1}}^2 ≤ ‖λ^1 − λ*‖_{H_1^{-1}}^2 = ‖λ^0 − λ*‖_{H_1^{-1}}^2.

Substituting it in (3.14), the assertion is proved.

According to Theorem 3.6, for obtaining an ε-optimal solution of (1.3) (denoted by (x̃, λ̃)) in the sense that L(x*, λ*) − L(x̃, λ̃) ≤ ε, the number of iterations required by the proposed accelerated ALM is at most √(C/ε), where C = ‖λ^0 − λ*‖_{H_1^{-1}}^2.

4 Conclusions

In this paper, we first show that the iteration-complexity of the classical augmented Lagrangian method (ALM) is O(1/k) for solving linearly constrained convex programming. Then, we show that the ALM can be accelerated by applying Nesterov's acceleration techniques, and the iteration-complexity of the yielded accelerated ALM is O(1/k^2). In the future, we will investigate (a) the complexity of the inexact ALM where the subproblems are solved approximately subject to certain criteria, as in [3]; and (b) the complexity of some ALM-based methods, e.g., the well-known alternating direction method for solving separable convex programming with linear constraints.

References

[1] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.

[2] M. R. Hestenes, Multiplier and gradient methods, J. Optim. Theory Appl., 4 (1969), pp. 303-320.

[3] G. H. Lan and R. D. C. Monteiro, Iteration-complexity of first-order augmented Lagrangian methods for convex programming, manuscript, 2009.

[4] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Française d'Inform. Recherche Opér., 4 (1970), pp. 154-158.

[5] A. S. Nemirovsky and D. B. Yudin, Problem Complexity and Method Efficiency in Optimization, Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons, New York, 1983.

[6] Y. E. Nesterov, A method for solving the convex programming problem with convergence rate O(1/k^2), Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543-547.

[7] Y. E. Nesterov, Gradient methods for minimizing composite objective function, CORE report, 2007.

[8] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.

[9] M. J. D. Powell, A method for nonlinear constraints in minimization problems, in Optimization, R. Fletcher, ed., Academic Press, New York, 1969, pp. 283-298.

[10] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Math. Oper. Res., 1 (1976), pp. 97-116.

[11] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), pp. 877-898.
More informationRecent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables
Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong 2014 Workshop
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationarxiv: v1 [math.oc] 23 May 2017
A DERANDOMIZED ALGORITHM FOR RP-ADMM WITH SYMMETRIC GAUSS-SEIDEL METHOD JINCHAO XU, KAILAI XU, AND YINYU YE arxiv:1705.08389v1 [math.oc] 23 May 2017 Abstract. For multi-block alternating direction method
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationA Solution Method for Semidefinite Variational Inequality with Coupled Constraints
Communications in Mathematics and Applications Volume 4 (2013), Number 1, pp. 39 48 RGN Publications http://www.rgnpublications.com A Solution Method for Semidefinite Variational Inequality with Coupled
More informationAdditional Homework Problems
Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.
More informationKey words. alternating direction method of multipliers, convex composite optimization, indefinite proximal terms, majorization, iteration-complexity
A MAJORIZED ADMM WITH INDEFINITE PROXIMAL TERMS FOR LINEARLY CONSTRAINED CONVEX COMPOSITE OPTIMIZATION MIN LI, DEFENG SUN, AND KIM-CHUAN TOH Abstract. This paper presents a majorized alternating direction
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationarxiv: v1 [math.oc] 10 Apr 2017
A Method to Guarantee Local Convergence for Sequential Quadratic Programming with Poor Hessian Approximation Tuan T. Nguyen, Mircea Lazar and Hans Butler arxiv:1704.03064v1 math.oc] 10 Apr 2017 Abstract
More informationPrediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate
www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation
More informationDual Proximal Gradient Method
Dual Proximal Gradient Method http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes Outline 2/19 1 proximal gradient method
More informationConvergence rate of inexact proximal point methods with relative error criteria for convex optimization
Convergence rate of inexact proximal point methods with relative error criteria for convex optimization Renato D. C. Monteiro B. F. Svaiter August, 010 Revised: December 1, 011) Abstract In this paper,
More informationDecision Science Letters
Decision Science Letters 8 (2019) *** *** Contents lists available at GrowingScience Decision Science Letters homepage: www.growingscience.com/dsl A new logarithmic penalty function approach for nonlinear
More information3.10 Lagrangian relaxation
3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the
More informationComposite nonlinear models at scale
Composite nonlinear models at scale Dmitriy Drusvyatskiy Mathematics, University of Washington Joint work with D. Davis (Cornell), M. Fazel (UW), A.S. Lewis (Cornell) C. Paquette (Lehigh), and S. Roy (UW)
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationSupport Vector Machine via Nonlinear Rescaling Method
Manuscript Click here to download Manuscript: svm-nrm_3.tex Support Vector Machine via Nonlinear Rescaling Method Roman Polyak Department of SEOR and Department of Mathematical Sciences George Mason University
More informationLagrange Relaxation and Duality
Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler
More informationConvergence of a Class of Stationary Iterative Methods for Saddle Point Problems
Convergence of a Class of Stationary Iterative Methods for Saddle Point Problems Yin Zhang 张寅 August, 2010 Abstract A unified convergence result is derived for an entire class of stationary iterative methods
More informationA STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More information4y Springer NONLINEAR INTEGER PROGRAMMING
NONLINEAR INTEGER PROGRAMMING DUAN LI Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong Shatin, N. T. Hong Kong XIAOLING SUN Department of Mathematics Shanghai
More informationSplitting methods for decomposing separable convex programs
Splitting methods for decomposing separable convex programs Philippe Mahey LIMOS - ISIMA - Université Blaise Pascal PGMO, ENSTA 2013 October 4, 2013 1 / 30 Plan 1 Max Monotone Operators Proximal techniques
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More informationConvergence rate estimates for the gradient differential inclusion
Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient
More informationThe Direct Extension of ADMM for Multi-block Convex Minimization Problems is Not Necessarily Convergent
The Direct Extension of ADMM for Multi-block Convex Minimization Problems is Not Necessarily Convergent Yinyu Ye K. T. Li Professor of Engineering Department of Management Science and Engineering Stanford
More informationImproved Damped Quasi-Newton Methods for Unconstrained Optimization
Improved Damped Quasi-Newton Methods for Unconstrained Optimization Mehiddin Al-Baali and Lucio Grandinetti August 2015 Abstract Recently, Al-Baali (2014) has extended the damped-technique in the modified
More informationNumerical Optimization
Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,
More informationarxiv: v2 [math.oc] 25 Mar 2018
arxiv:1711.0581v [math.oc] 5 Mar 018 Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming Yangyang Xu Abstract Augmented Lagrangian method ALM has been popularly
More informationAn Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization
An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns
More informationGLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH
GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH Jie Sun 1 Department of Decision Sciences National University of Singapore, Republic of Singapore Jiapu Zhang 2 Department of Mathematics
More informationON THE CONNECTION BETWEEN THE CONJUGATE GRADIENT METHOD AND QUASI-NEWTON METHODS ON QUADRATIC PROBLEMS
ON THE CONNECTION BETWEEN THE CONJUGATE GRADIENT METHOD AND QUASI-NEWTON METHODS ON QUADRATIC PROBLEMS Anders FORSGREN Tove ODLAND Technical Report TRITA-MAT-203-OS-03 Department of Mathematics KTH Royal
More informationWorkshop on Nonlinear Optimization
Workshop on Nonlinear Optimization 5-6 June 2015 The aim of the Workshop is to bring optimization experts in vector optimization and nonlinear optimization together to exchange their recent research findings
More informationSF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren
SF2822 Applied Nonlinear Optimization Lecture 9: Sequential quadratic programming Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH / 24 Lecture 9, 207/208 Preparatory question. Try to solve theory
More information1 Computing with constraints
Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)
More informationA FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS
Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More information