On well definedness of the Central Path
L. M. Graña Drummond, B. F. Svaiter

IMPA - Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ, Brasil

(The first author was supported by FAPERJ under Grant E-26/ /97, Bolsa; the second author was partially supported by CNPq under Grant /93-9(RN).)

Abstract. We study the well definedness of the central path for a linearly constrained convex programming problem with a smooth objective function. We prove that, under standard assumptions, existence of the central path is equivalent to nonemptiness and boundedness of the optimal set. Other equivalent conditions are given. Moreover, we show that, under an additional assumption on the objective function, the central path converges to the analytic center of the optimal set.

Key Words. Convex programming, linear constraints, central path, logarithmic barrier function, analytic center.

1 Introduction

In 1955 Frisch (Ref. 1) used the logarithmic barrier function in optimization for the first time. Later on, in the 1960s, Fiacco and McCormick (Ref. 2) studied
the barrier function method (Sequential Unconstrained Minimization Technique) for constrained nonlinear programming. The method consists of finding the exact (unconstrained) minima of auxiliary functions for decreasing values of the barrier parameter. These auxiliary functions are defined in terms of the problem functions in such a way that they have a singularity at the boundary of the feasible set, forcing their minima to remain strictly feasible. The central path of the problem is the set of those minimizers. It is a curve along which the objective value decreases, and its accumulation points are optimal solutions of the original problem. With the fast development of computers in the seventies, researchers became interested in efficient implementations of interior point methods, i.e., methods which operate in the interior of the feasible region. An adequate notion of complexity for linear programming algorithms was given by Khachiyan in 1979, when he proposed the first polynomial algorithm for linear programming problems (see Ref. 3). In 1984 Karmarkar, in his seminal paper (Ref. 4), proposed the first competitive polynomial algorithm for linear programming, with lower complexity than Khachiyan's. His method operates in the relative interior of the feasible set, far away from the boundary. Karmarkar's work produced a renewed interest in logarithmic barrier methods. Ever since, the field of interior point methods has been extremely active, and new polynomial algorithms were published not just for linear programming but also for convex quadratic and linear complementarity problems (see Refs. 5-10). In 1986 Renegar (Ref. 11) developed the first polynomial path-following algorithm (in a maximization framework) for linear programming. In the last few years path-following methods have been widely studied. Roughly speaking, these methods generate points which lie close enough to the central path of the problem and enjoy nice convergence properties. In 1989 Megiddo Ref.
12 proved that the central path for linear programming problems ends in the analytic center of the optimal set (see also Refs. 13, 14). Iusem et al. (Ref. 15) extended this and some other interesting properties to the central paths defined by general barriers for variational inequality problems. For the convex quadratic programming problem it is well known that,
under the Slater condition and standard assumptions, namely the existence of a strictly feasible dual point or the boundedness of the feasible set, the central path is well defined (see Refs. 8, 16). Den Hertog (Ref. 10, p. 35) conjectures that, as in the linear programming case, the last assumption can be weakened to the boundedness of the optimal set. In this paper we prove Den Hertog's conjecture for a wider category of objective functions: the convex differentiable ones. Moreover, we show that for those problems (under the Slater assumption) the following three conditions are equivalent: boundedness of the optimal set, existence of the central path, and existence of a strictly feasible dual point. We also prove that the central path converges to the analytic center of the optimal set under some additional assumptions on the objective function. These assumptions cover the self-concordant functions (see Ref. 17), in particular the linear and quadratic objective functions.

This paper is organized as follows. In Section 2 we introduce notation, recall some results of convex analysis, and prove our main theorem, which states some necessary and sufficient conditions for the existence of the central path. In Section 3 some properties of the central path are discussed. Existence and optimality of cluster points are established. In Section 4 the convergence of the central path is considered; under rather general assumptions, the end point of the trajectory is characterized as the analytic center of the optimal set.

2 Central Path

Consider the linearly constrained convex problem

(P) $\min f(x)$ s.t. $Ax = b$, $x \ge 0$,

where $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a proper closed convex function, $A \in \mathbb{R}^{m \times n}$ with $m \le n$, and $b \in \mathbb{R}^m$. The effective domain of $f$ will be denoted by $\mathrm{ed}(f)$,
i.e., $\mathrm{ed}(f) = \{x \in \mathbb{R}^n : f(x) < +\infty\}$. We make the following assumptions:

(A1) $\mathrm{ed}(f)$ is an open set;

(A2) $f$ is differentiable on $\mathrm{ed}(f)$;

(A3) $A$ is a full row rank matrix, i.e., $\mathrm{rank}(A) = m \le n$;

(A4) there exists an interior feasible point $\bar x$ in $\mathrm{ed}(f)$, i.e., $A\bar x = b$, $\bar x > 0$ and $f(\bar x) < +\infty$.

Note that, since $f$ is a proper closed convex function, it follows from these assumptions that $f$ diverges to $+\infty$ on the boundary of $\mathrm{ed}(f)$ and also that $f$ is continuously differentiable on $\mathrm{ed}(f)$. The Wolfe dual problem corresponding to P is

(D) $\max\; f(x) - y^T(Ax - b) - z^T x$ s.t. $A^T y + z = \nabla f(x)$, $z \ge 0$.

A feasible point $(x, y, z)$ for D with $z > 0$ is called an interior dual feasible solution. The logarithmic barrier function method applied to P generates the family of problems

(P$_\mu$) $\min\; f(x) - \mu \sum_{j=1}^n \log x_j$ s.t. $Ax = b$, $x > 0$,

where $\mu > 0$ is the barrier penalty parameter. Observe that the minimand in P$_\mu$ is a strictly convex function, so Problem P$_\mu$ has at most one (global) minimizer, which is characterized by the Karush-Kuhn-Tucker conditions:

$Ax - b = 0, \quad x > 0$, (1a)

$A^T y + z - \nabla f(x) = 0, \quad z > 0$, (1b)
$Zx - \mu e = 0$, (1c)

where $Z$ is the diagonal matrix with the components of the vector $z$ on the main diagonal and $e$ is the $n$-vector of all ones. The solution of system (1) is called the central point corresponding to $\mu > 0$; it is denoted by $(x(\mu), y(\mu), z(\mu))$. For each $\mu > 0$, well definedness of the central point depends on the existence and uniqueness of the above mentioned solution. The central path associated with Problem P is given by $\{(x(\mu), y(\mu), z(\mu)) : \mu > 0\}$. Observe that the central path is interior primal-dual feasible. In order to study well definedness of the central path, we define the function $\Phi_\mu$ by

$\Phi_\mu(x) = f(x) - \mu \sum_{j=1}^n \log x_j$, if $Ax = b$, $x > 0$ and $x \in \mathrm{ed}(f)$; $\Phi_\mu(x) = +\infty$ otherwise,

and the function $\bar f$ by

$\bar f(x) = f(x)$, if $Ax = b$, $x \ge 0$ and $x \in \mathrm{ed}(f)$; $\bar f(x) = +\infty$ otherwise.

It is easy to see that $\Phi_\mu$ and $\bar f$ are both proper closed convex functions and, moreover, that $\Phi_\mu$ is strictly convex on its effective domain. From now on, $\Gamma(\alpha, \mu)$ and $\Gamma_\alpha$ will stand for the level sets of $\Phi_\mu$ and $\bar f$, respectively, corresponding to $\alpha \in \mathbb{R}$, i.e.,

$\Gamma(\alpha, \mu) = \{x \in \mathbb{R}^n : \Phi_\mu(x) \le \alpha\}$ and $\Gamma_\alpha = \{x \in \mathbb{R}^n : \bar f(x) \le \alpha\}$.

Both $\Gamma(\alpha, \mu)$ and $\Gamma_\alpha$ are closed (convex) subsets of $\mathbb{R}^n$, because $\Phi_\mu$ and $\bar f$ are closed (convex) functions. We recall that the recession cone of a convex set $C \subseteq \mathbb{R}^n$ is given by $O^+ C = \{v \in \mathbb{R}^n : C + tv \subseteq C \text{ for all } t \ge 0\}$.
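As a concrete numerical illustration of system (1), the sketch below computes the central point of a tiny two-variable linear program (the problem data are an illustrative choice, not from the paper) by solving the reduced first-order condition of P$_\mu$, and then recovers $z(\mu)$ from (1c). Note that the duality gap $z(\mu)^T x(\mu)$ equals $n\mu$ by construction.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative instance of (P): min c^T x  s.t.  x1 + x2 = 1, x >= 0,
# with c = (0, 1), so Sol(P) = {(1, 0)}.
c = np.array([0.0, 1.0])

def central_point(mu):
    """Central point x(mu): minimize c^T x - mu*(log x1 + log x2)
    on the segment x1 + x2 = 1, x > 0.  Eliminating x2 = 1 - x1,
    the first-order condition is -1 - mu/x1 + mu/(1 - x1) = 0."""
    dphi = lambda t: -1.0 - mu / t + mu / (1.0 - t)
    x1 = brentq(dphi, 1e-12, 1.0 - 1e-12)
    x = np.array([x1, 1.0 - x1])
    z = mu / x                        # (1c): Zx = mu*e
    y = float(np.mean(c - z))         # (1b) with A = [1 1]: y + z_j = c_j
    return x, y, z

x, y, z = central_point(1e-4)
print(x)          # close to the optimal solution (1, 0)
print(z @ x)      # duality gap z^T x = n*mu = 2e-4
```

Solving the one-dimensional stationarity condition by bisection sidesteps the need for a constrained solver; in higher dimensions one would instead apply Newton's method to system (1) directly.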
We also recall some very well known results in convex analysis which we will use in the sequel.

Lemma 2.1 A nonempty closed convex set $C \subseteq \mathbb{R}^n$ is bounded if and only if its recession cone $O^+ C$ consists of the zero vector alone.

Proof. See Theorem 8.4, Ref. 18.

Lemma 2.2 The nonempty level sets of a closed proper convex function are either all bounded or all unbounded.

Proof. See Corollary 8.7.1, Ref. 18.

Corollary 2.1 A closed proper convex function has nonempty bounded level sets if and only if the set of its (unconstrained) minimizers is nonempty and bounded.

Proof. The result follows from Lemma 2.2 and compactness arguments.

Now we prove our main theorem, which gives necessary and sufficient conditions for the well definedness of the central path associated with Problem P.

Theorem 2.1 The following conditions are equivalent:

(C1) the solution set of Problem P, Sol(P), is nonempty and bounded;

(C2) the central path $\{(x(\mu), y(\mu), z(\mu)) : \mu > 0\}$ is well defined;

(C3) for some $\mu_0 > 0$ the central point $(x(\mu_0), y(\mu_0), z(\mu_0))$ is well defined;

(C4) there exists an interior dual feasible point $(\bar x, \bar y, \bar z) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n$, i.e., $A^T \bar y + \bar z = \nabla f(\bar x)$ and $\bar z > 0$.
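Before the proof, the role of boundedness in C1 can be seen on a one-line numerical check (toy data, not from the paper): when Sol(P) is unbounded, the barrier subproblem P$_\mu$ has no minimizer, so C2 fails, in agreement with the theorem.

```python
import numpy as np

# Illustrative instance with unbounded optimal set: min 0  s.t.
# x1 - x2 = 0, x >= 0.  Here Sol(P) is the whole ray {(t, t) : t >= 0},
# so C1 fails; accordingly the barrier subproblem
#     min -mu*(log x1 + log x2)  s.t.  x1 = x2 > 0
# is unbounded below along x1 = x2 = t, and x(mu) does not exist.
mu = 1.0
phi = lambda t: -2.0 * mu * np.log(t)     # Phi_mu restricted to the ray
vals = [phi(t) for t in (1.0, 1e2, 1e4, 1e8)]
print(vals)    # strictly decreasing, diverging to -infinity
```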
Proof. Suppose that Condition C1 holds; we show that Condition C2 is true. Take $\mu > 0$. Using $\bar x$ as in Assumption A4, define $\bar\alpha = \Phi_\mu(\bar x)$. We claim that $O^+ \Gamma(\bar\alpha, \mu) = \{0\}$. Let $v \in \mathbb{R}^n$ be an element of $O^+ \Gamma(\bar\alpha, \mu)$. For all $x \in \Gamma(\bar\alpha, \mu)$ and $t \ge 0$ it holds that $\Phi_\mu(x + tv) \le \bar\alpha < +\infty$; so, in view of the definition of $\Phi_\mu$, it also holds that

$A(x + tv) = b$, (2a)

$x + tv > 0$, (2b)

and

$\bar\alpha \ge \Phi_\mu(x + tv) = f(x + tv) - \mu \sum_{j=1}^n \log(x_j + t v_j) \ge f(x) + t\, \nabla f(x)^T v - \mu \sum_{j=1}^n \log(x_j + t v_j)$,

where the second inequality holds because $f$ is convex. Therefore

$\nabla f(x)^T v \le 0$, for all $x \in \Gamma(\bar\alpha, \mu)$,

because $v_j \ge 0$ (by the feasibility of $x + tv$ for all $t \ge 0$, which follows from (2)) and the logarithm grows more slowly than a linear function of $t$. Thus, since $\bar x + tv \in \Gamma(\bar\alpha, \mu)$ for all $t \ge 0$, we have $\nabla f(\bar x + tv)^T v \le 0$ for all $t \ge 0$. Hence $f(\bar x + tv)$ is a nonincreasing function of $t \ge 0$, so

$f(\bar x + tv) \le f(\bar x)$, for all $t \ge 0$. (3)

Recall that $\bar x \in \Gamma(\bar\alpha, \mu)$. So, using (2) with $x = \bar x$, it follows that $\bar x + tv$ is feasible for all $t \ge 0$. From the feasibility of $\{\bar x + tv : t \ge 0\}$ and (3) we conclude that $\{\bar x + tv : t \ge 0\} \subseteq \Gamma_\beta$, where $\beta = f(\bar x)$. On the other hand, by Condition C1 and Lemma 2.2, the nonempty set $\Gamma_\beta$ is bounded; therefore $v = 0$, and so $O^+ \Gamma(\bar\alpha, \mu) = \{0\}$.
In view of Lemma 2.1 it follows that $\Gamma(\bar\alpha, \mu)$ is a (nonempty) bounded set. We conclude that $\Phi_\mu$ attains its minimum at $x(\mu)$, which is unique due to the strict convexity of $\Phi_\mu$ on its effective domain. Note that, by Assumption A3, the matrix $AA^T$ is nonsingular. Therefore, taking

$z(\mu) = \mu X(\mu)^{-1} e$,

where $X(\mu)$ is the diagonal matrix with the components of $x(\mu)$ on the main diagonal, and

$y(\mu) = (AA^T)^{-1} A (\nabla f(x(\mu)) - z(\mu))$,

we see that $(x(\mu), y(\mu), z(\mu))$ is the unique solution of system (1). So Condition C2 holds.

Condition C3 is an obvious consequence of Condition C2.

If Condition C3 holds, then $(x(\mu_0), y(\mu_0), z(\mu_0))$ satisfies system (1) for $\mu = \mu_0$. So Condition C4 is true with $(\bar x, \bar y, \bar z) = (x(\mu_0), y(\mu_0), z(\mu_0))$.

Now assume that Condition C4 holds; let us verify Condition C1. Let $x \in \mathbb{R}^n$ be such that

$Ax = b$, $x \ge 0$. (4)

Then, using the convexity of $f$, (4) and Condition C4, we obtain

$f(x) \ge f(\bar x) + \nabla f(\bar x)^T (x - \bar x) = f(\bar x) + (A^T \bar y + \bar z)^T (x - \bar x) = f(\bar x) - (A^T \bar y + \bar z)^T \bar x + b^T \bar y + \bar z^T x$. (5)

From Condition C4 we see that there exists $\sigma \in \mathbb{R}$ such that

$\bar z_i \ge \sigma > 0$, for $i = 1, \dots, n$. (6)

Combining (5) and (6) we obtain

$\bar f(x) = f(x) \ge K + \sigma \|x\|_1$, (7)
where $K = f(\bar x) - (A^T \bar y + \bar z)^T \bar x + b^T \bar y$ and $\|x\|_1 = \sum_{i=1}^n x_i$. Hence, from (7) and Lemma 2.2 we conclude that all level sets of $\bar f$ are bounded. Since $\bar f$ is a closed proper convex function, from Corollary 2.1 it follows that its set of minimizers, Sol(P), is nonempty and bounded.

3 Features of the Central Path

For the sake of completeness, we discuss in this section some properties of the central path. From now on we suppose that Sol(P) is a nonempty and bounded subset of $\mathbb{R}^n$; then it follows from Theorem 2.1 that the central path is well defined. In the next proposition we study the behavior of the logarithmic barrier and of the primal objective function along the primal path.

Proposition 3.1 Let $h$ be the logarithmic barrier, i.e., $h(x) = -\sum_{j=1}^n \log x_j$. If $0 < \mu_1 < \mu_2$, then

$h(x(\mu_2)) \le h(x(\mu_1))$ and $f(x(\mu_1)) \le f(x(\mu_2))$.

Proof. Denote $x^i = x(\mu_i)$, for $i = 1, 2$. Then

$x^i = \arg\min_{Ax = b,\; x > 0}\; f(x) + \mu_i h(x)$, for $i = 1, 2$.

So

$f(x^1) + \mu_1 h(x^1) \le f(x^2) + \mu_1 h(x^2)$, (8)

$f(x^2) + \mu_2 h(x^2) \le f(x^1) + \mu_2 h(x^1)$. (9)

Adding (8) and (9) we obtain $0 \le (\mu_2 - \mu_1)(h(x^1) - h(x^2))$.
Therefore

$h(x^2) \le h(x^1)$, (10)

since $\mu_1 < \mu_2$. Now, combining (8) and (10), we see that $f(x^1) \le f(x^2)$.

Our next result gives some useful information about the central path when the parameter $\mu$ is bounded.

Proposition 3.2 For all $\bar\mu > 0$, the set $\{(x(\mu), y(\mu), z(\mu)) : 0 < \mu \le \bar\mu\}$ is bounded.

Proof. Take $\bar\mu > 0$ and define $\bar\alpha = f(x(\bar\mu))$. From Proposition 3.1 it follows that

$\{x(\mu) : 0 < \mu \le \bar\mu\} \subseteq \Gamma_{\bar\alpha}$. (11)

So $\{x(\mu) : 0 < \mu \le \bar\mu\}$ is a bounded subset of $\mathbb{R}^n$, because $\Gamma_{\bar\alpha}$ is bounded, which in turn is a consequence of the boundedness of Sol(P) and Corollary 2.1. Now we prove that $\{(y(\mu), z(\mu)) : 0 < \mu \le \bar\mu\}$ is also bounded. Consider $0 < \mu \le \bar\mu$. We know that $(x(\mu), y(\mu), z(\mu))$ solves equation (1b), so

$0 = \nabla_x L(x(\mu), y(\mu), z(\mu))$, (12)

where $L(x, y, z) = f(x) - y^T(Ax - b) - z^T x$ and $\nabla_x$ stands for the gradient with respect to the $x$ variables. From (12) and the convexity of $L(\cdot, y(\mu), z(\mu))$ we have

$x(\mu) \in \arg\min_{x \in \mathbb{R}^n} L(x, y(\mu), z(\mu))$. (13)

If we take $\bar x$ as in Assumption A4, then

$f(x(\mu)) - n\mu = L(x(\mu), y(\mu), z(\mu)) \le L(\bar x, y(\mu), z(\mu)) = f(\bar x) - z(\mu)^T \bar x$. (14)
The equalities above hold due to equation (1c) and the primal feasibility of $x(\mu)$ and $\bar x$, while the inequality is a consequence of (13). Now let $\tau > 0$ be the smallest component of $\bar x$ and let $f^*$ be the optimal value of Problem P. From (14) we obtain

$\tau \|z(\mu)\|_1 \le f(\bar x) - f^* + n\bar\mu$, where $\|z(\mu)\|_1 = \sum_{j=1}^n z_j(\mu)$. (15)

So $\{z(\mu) : 0 < \mu \le \bar\mu\}$ is also a bounded set. Finally, from equation (1b) we get

$A^T y(\mu) = \nabla f(x(\mu)) - z(\mu)$, for all $\mu > 0$. (16)

Note that, in view of (11), the topological closure of $\{x(\mu) : 0 < \mu \le \bar\mu\}$ is a compact set contained in $\mathrm{ed}(f)$. Using the fact that $f$ is continuously differentiable on its effective domain, together with the full row rank Assumption A3, we conclude from (16) that $\{y(\mu) : 0 < \mu \le \bar\mu\}$ is bounded too. Therefore $\{(x(\mu), y(\mu), z(\mu)) : 0 < \mu \le \bar\mu\}$ is a bounded subset of $\mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n$.

We say that $(\bar x, \bar y, \bar z) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n$ is a cluster point of the central path if there exists a sequence $\{\mu_k\} \subseteq \mathbb{R}_{++}$ such that $\lim_k \mu_k = 0$ and $\lim_k (x(\mu_k), y(\mu_k), z(\mu_k)) = (\bar x, \bar y, \bar z)$. We know, from Proposition 3.2, that the set of cluster points of the central path is nonempty. Now we prove that the cluster points are solutions of the primal-dual pair of Problems P and D, i.e., if $(\bar x, \bar y, \bar z)$ is a cluster point, then $\bar x \in$ Sol(P) and $(\bar x, \bar y, \bar z) \in$ Sol(D).

Proposition 3.3 All cluster points of the central path are optimal solutions of the primal-dual pair of Problems P and D.

Proof. Assume that $(\bar x, \bar y, \bar z)$ is a cluster point of the central path and that $\{\mu_k\} \subseteq \mathbb{R}_{++}$ is such that $\lim_k \mu_k = 0$ and $\lim_k (x^k, y^k, z^k) = (\bar x, \bar y, \bar z)$, with $(x^k, y^k, z^k) = (x(\mu_k), y(\mu_k), z(\mu_k))$. Since both the primal and the dual feasible sets are closed, we conclude that $\bar x$ and $(\bar x, \bar y, \bar z)$ are feasible for P and D, respectively.
The gap function $g$ at primal-dual feasible points of the form $(x, (x, y, z))$ is defined by $g(x, (x, y, z)) = z^T x$. In order to prove optimality, we just need to check that the gap function $g$ vanishes at $(\bar x, (\bar x, \bar y, \bar z))$. We have

$g(x^k, (x^k, y^k, z^k)) = n\mu_k$, for all $k$. (17)

Letting $k \to \infty$ in (17), we see that $g(\bar x, (\bar x, \bar y, \bar z)) = 0$.

4 Convergence of the Primal Central Path

In this section we prove that, under an additional assumption on the objective function of Problem P, the primal central path converges, i.e., $\lim_{\mu \to 0} x(\mu)$ exists; moreover, this limit point is completely characterized as the analytic center of the solution set Sol(P). We assume that the objective function $f$ of Problem P satisfies the assumptions of Section 2 and also that it is twice continuously differentiable on its effective domain. Furthermore, we assume that there exists a subspace $W$ of $\mathbb{R}^n$ such that

$\mathrm{Ker}(\nabla^2 f(x)) = W$ (18)

for all $x \in \mathrm{ed}(f)$. We remark that, under our new smoothness condition on $f$, the function which takes $\mu > 0$ to the central point $(x(\mu), y(\mu), z(\mu))$ is continuously differentiable; this can be proved by applying the Implicit Function Theorem. The analytic center of the optimal set is defined as the (unique) solution of

$\min\; -\sum_{j \in J} \log x_j$ (19a)

s.t. $x \in \mathrm{ri(Sol(P))}$, (19b)

where $J = \{j \in \{1, \dots, n\} : \exists x \in \mathrm{Sol(P)} \text{ s.t. } x_j > 0\}$ and ri(Sol(P)) stands for the relative interior of Sol(P).
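Here is a small numerical illustration of problem (19) on toy data (an illustrative choice, not from the paper): for a linear objective over the unit simplex, the optimal set is a face of the simplex, and the primal central path visibly approaches its analytic center as $\mu \to 0$.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative instance: min x1  s.t.  x1 + x2 + x3 = 1, x >= 0.
# Sol(P) = {x : x1 = 0, x2 + x3 = 1, x >= 0}, so J = {2, 3} and the
# analytic center of Sol(P), i.e. the solution of (19), is (0, 1/2, 1/2).
def x_of_mu(mu):
    """Central point: by symmetry x2 = x3 = s along the path, so we
    minimize (1 - 2s) - mu*(log(1 - 2s) + 2*log(s)) over s in (0, 1/2).
    Its stationarity condition, divided by 2, is the function d below."""
    d = lambda s: -1.0 + mu / (1.0 - 2.0 * s) - mu / s
    s = brentq(d, 1e-12, 0.5 - 1e-12)
    return np.array([1.0 - 2.0 * s, s, s])

for mu in (1.0, 1e-2, 1e-4, 1e-6):
    print(mu, x_of_mu(mu))   # tends to the analytic center (0, 0.5, 0.5)
```

The symmetry reduction is valid because the barrier subproblem is symmetric in $x_2, x_3$ and has a unique minimizer.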
Observe that from the convexity of Sol(P) it follows that $\mathrm{ri(Sol(P))} = \{x \in \mathrm{Sol(P)} : x_J > 0\}$. For the particular case in which Sol(P) $= \{0\}$, we have $J = \emptyset$ and, by convention, $-\sum_{j \in J} \log x_j \equiv 0$. Note that the objective function in (19a) is strictly convex on ri(Sol(P)) and diverges on the relative boundary of Sol(P); so, under our boundedness assumption on Sol(P), the analytic center is well defined. We now prove the existence of $\lim_{\mu \to 0} x(\mu)$ and its optimality property.

Theorem 4.1 The primal central path converges, as $\mu \to 0$, to the analytic center of Sol(P).

Proof. The case of interest is when $J$ is nonempty. By Proposition 3.2, $\{x(\mu) : \mu > 0\}$ has cluster points. Let $\bar x$ be one of them and $\{\mu_k\} \subseteq \mathbb{R}_{++}$ a sequence such that $\lim_k \mu_k = 0$ and $\lim_k x(\mu_k) = \bar x$. Denote $x^k = x(\mu_k)$. Let $\hat x$ be the solution of (19). We will prove that $\bar x_J > 0$ and

$-\sum_{j \in J} \log \bar x_j \le -\sum_{j \in J} \log \hat x_j$.

Define $\hat x^k = \hat x + x^k - \bar x$ for all $k$. We claim that, for $k$ large enough, $\hat x^k$ is a strictly feasible solution of Problem P. Observe that $\bar x \in$ Sol(P) in view of Proposition 3.3. Since $\hat x$, $x^k$ and $\bar x$ are feasible for P, we must have $A\hat x^k = b$ for all $k$. Now consider $j \notin J$. It follows from the definition of $J$ that $\hat x_j = \bar x_j = 0$. Thus

$\hat x^k_j = x^k_j$ ($j \notin J$), (20)

and $\hat x^k_j > 0$ because $x^k > 0$. For $j \in J$, we have $\hat x^k_j > 0$ for $k$ large enough, since $\lim_k x^k_j = \bar x_j$ and $\hat x_j > 0$. We conclude that $\hat x^k > 0$ for $k$ large enough.

Now we prove that $f(\hat x^k) = f(x^k)$. Consider $f(\bar x + t(\hat x - \bar x))$ as a function of $t \in [0, 1]$. Since $\hat x$ and $\bar x$ belong to the convex set Sol(P), it follows that $f(\bar x + t(\hat x - \bar x))$ is constant. Hence

$\nabla f(\bar x)^T (\hat x - \bar x) = 0, \quad (\hat x - \bar x)^T \nabla^2 f(\bar x)(\hat x - \bar x) = 0$. (21)
Since $\nabla^2 f(\bar x)$ is symmetric positive semidefinite, from the last equality above it follows that

$\nabla^2 f(\bar x)(\hat x - \bar x) = 0$. (22)

In view of (22) and (18) we have

$\hat x - \bar x \in W$. (23)

On the other hand, we have

$\nabla f(x^k) = \nabla f(\bar x) + \int_0^1 \nabla^2 f(\bar x + t(x^k - \bar x))(x^k - \bar x)\, dt$. (24)

Since the Hessian of $f$ is a symmetric matrix, the image of $\nabla^2 f(x)$ is orthogonal to $\mathrm{Ker}(\nabla^2 f(x))$ for all $x \in \mathrm{ed}(f)$. Therefore, using (18), we conclude that the integrand above is orthogonal to $W$; so, using (21), (23) and (24), we obtain

$\nabla f(x^k)^T (\hat x - \bar x) = 0$. (25)

Finally, from the gradient inequality we get

$f(\hat x^k) \ge f(x^k) + \nabla f(x^k)^T (\hat x^k - x^k) = f(x^k) + \nabla f(x^k)^T (\hat x - \bar x) = f(x^k)$, (26)

where the equalities hold in view of the definition of $\hat x^k$ and (25). Similarly, interchanging the roles of $x^k$ and $\hat x^k$, we see that $f(x^k) \ge f(\hat x^k)$, which together with (26) gives

$f(\hat x^k) = f(x^k)$. (27)

Now, since $A\hat x^k = b$ and $\hat x^k > 0$ for $k$ large enough, from the optimality property of $x^k$ we get

$\Phi_{\mu_k}(x^k) \le \Phi_{\mu_k}(\hat x^k)$, for $k$ large enough. (28)

Combining (20), (27) and (28) we obtain

$-\sum_{j \in J} \log x^k_j \le -\sum_{j \in J} \log \hat x^k_j$, for $k$ large enough. (29)
Taking limits as $k \to \infty$ in (29), we conclude that $\bar x_j > 0$ for $j \in J$ and

$-\sum_{j \in J} \log \bar x_j \le -\sum_{j \in J} \log \hat x_j < \infty$, (30)

since $\lim_k x^k = \bar x$, $\lim_k \hat x^k = \hat x$ and $\hat x_J > 0$. Since $\hat x$ is unique, we conclude that $\bar x = \hat x$; thus all cluster points of $\{x(\mu) : \mu > 0\}$ are equal to the analytic center of Sol(P), so $\lim_{\mu \to 0} x(\mu)$ exists and solves problem (19).

References

1. Frisch, K. R., The Logarithmic Potential Method of Convex Programming, Memorandum, Institute of Economics, University of Oslo, Norway, 1955.

2. Fiacco, A., and McCormick, G. P., Nonlinear Programming: Sequential Unconstrained Minimization Techniques, SIAM Publications, Philadelphia, Pennsylvania.

3. Khachiyan, L. G., A Polynomial Algorithm for Linear Programming, Soviet Mathematics Doklady, Vol. 20, 1979.

4. Karmarkar, N., A New Polynomial Time Algorithm for Linear Programming, Combinatorica, Vol. 4, 1984.

5. Gonzaga, C., An Algorithm for Solving Linear Programming Problems in O(n^3 L) Operations, Progress in Mathematical Programming: Interior Point and Related Methods, Edited by N. Megiddo, Springer Verlag, New York, New York, pp. 1-28.

6. De Ghellinck, G., and Vial, J. P., A Polynomial Newton Method for Linear Programming, Algorithmica, Vol. 1.

7. Monteiro, R. D. C., and Adler, I., Interior Path-Following Primal-Dual Algorithms, Part 1: Linear Programming, Mathematical Programming, Vol. 44.
8. Monteiro, R. D. C., and Adler, I., Interior Path-Following Primal-Dual Algorithms, Part 2: Convex Quadratic Programming, Mathematical Programming, Vol. 44.

9. Kojima, M., Mizuno, S., and Yoshise, A., A Polynomial-Time Algorithm for a Class of Linear Complementarity Problems, Mathematical Programming, Vol. 44, pp. 1-26.

10. Den Hertog, D., Interior Point Approach to Linear, Quadratic and Convex Programming: Algorithms and Complexity, Kluwer Academic Publishers, Boston, Massachusetts.

11. Renegar, J., A Polynomial-Time Algorithm Based on Newton's Method for Linear Programming, Mathematical Programming, Vol. 40.

12. Megiddo, N., and Shub, M., Boundary Behavior of Interior-Point Algorithms in Linear Programming, Mathematics of Operations Research, Vol. 14, 1989.

13. Bayer, D., and Lagarias, J. C., The Nonlinear Geometry of Linear Programming: (i) Affine and Projective Scaling Trajectories, (ii) Legendre Transform Coordinates, (iii) Central Trajectories, Preprint, AT&T Bell Laboratories, Murray Hill, New Jersey.

14. Sonnevend, G., An Analytic Center for Polyhedrons and New Classes of Global Algorithms for Linear (Smooth, Convex) Programming, Lecture Notes in Control and Information Sciences, Vol. 84, Springer Verlag, New York, New York.

15. Iusem, A. N., Svaiter, B. F., and Da Cruz Neto, J. X., Central Paths, Generalized Proximal Point Methods, and Cauchy Trajectories in Riemannian Manifolds.

16. Megiddo, N., Pathways to the Optimal Set in Linear Programming, Progress in Mathematical Programming: Interior Point and Related Methods, Edited by N. Megiddo, Springer Verlag, New York, New York.
17. Nesterov, Y., and Nemirovskii, A., Interior-Point Polynomial Algorithms in Convex Programming, SIAM Publications, Philadelphia, Pennsylvania.

18. Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, New Jersey.
Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More information2.098/6.255/ Optimization Methods Practice True/False Questions
2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence
More informationSome Inexact Hybrid Proximal Augmented Lagrangian Algorithms
Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationConvex Optimization & Lagrange Duality
Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationThe general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.
1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,
More informationA Distributed Newton Method for Network Utility Maximization, II: Convergence
A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility
More informationSome new facts about sequential quadratic programming methods employing second derivatives
To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationOptimisation in Higher Dimensions
CHAPTER 6 Optimisation in Higher Dimensions Beyond optimisation in 1D, we will study two directions. First, the equivalent in nth dimension, x R n such that f(x ) f(x) for all x R n. Second, constrained
More informationON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS
MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth
More informationOn the Existence and Convergence of the Central Path for Convex Programming and Some Duality Results
Computational Optimization and Applications, 10, 51 77 (1998) c 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. On the Existence and Convergence of the Central Path for Convex
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationLectures 9 and 10: Constrained optimization problems and their optimality conditions
Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained
More informationAn Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization
An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,
More informationA hybrid proximal extragradient self-concordant primal barrier method for monotone variational inequalities
A hybrid proximal extragradient self-concordant primal barrier method for monotone variational inequalities Renato D.C. Monteiro Mauricio R. Sicre B. F. Svaiter June 3, 13 Revised: August 8, 14) Abstract
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More informationOptimization: Then and Now
Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i
More informationA PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More informationGeneralization to inequality constrained problem. Maximize
Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationSWFR ENG 4TE3 (6TE3) COMP SCI 4TE3 (6TE3) Continuous Optimization Algorithm. Convex Optimization. Computing and Software McMaster University
SWFR ENG 4TE3 (6TE3) COMP SCI 4TE3 (6TE3) Continuous Optimization Algorithm Convex Optimization Computing and Software McMaster University General NLO problem (NLO : Non Linear Optimization) (N LO) min
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationA Simpler and Tighter Redundant Klee-Minty Construction
A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central
More informationEnlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions
Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm
More informationLagrangian Duality Theory
Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual
More informationOn the convergence properties of the projected gradient method for convex optimization
Computational and Applied Mathematics Vol. 22, N. 1, pp. 37 52, 2003 Copyright 2003 SBMAC On the convergence properties of the projected gradient method for convex optimization A. N. IUSEM* Instituto de
More informationA Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region
A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region Eissa Nematollahi Tamás Terlaky January 5, 2008 Abstract By introducing some redundant Klee-Minty constructions,
More informationDate: July 5, Contents
2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationAN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F.
AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M. V. Solodov and B. F. Svaiter May 14, 1998 (Revised July 8, 1999) ABSTRACT We present a
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationA New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization
A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization Jiming Peng Cornelis Roos Tamás Terlaky August 8, 000 Faculty of Information Technology and Systems, Delft University
More informationGEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III
GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III CONVEX ANALYSIS NONLINEAR PROGRAMMING THEORY NONLINEAR PROGRAMMING ALGORITHMS
More information2.3 Linear Programming
2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are
More informationFIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR B. F. SVAITER
More informationMerit functions and error bounds for generalized variational inequalities
J. Math. Anal. Appl. 287 2003) 405 414 www.elsevier.com/locate/jmaa Merit functions and error bounds for generalized variational inequalities M.V. Solodov 1 Instituto de Matemática Pura e Aplicada, Estrada
More informationInterior-point algorithm for linear optimization based on a new trigonometric kernel function
Accepted Manuscript Interior-point algorithm for linear optimization based on a new trigonometric kernel function Xin Li, Mingwang Zhang PII: S0-0- DOI: http://dx.doi.org/./j.orl.0.0.0 Reference: OPERES
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationAN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. May 14, 1998 (Revised March 12, 1999)
AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M. V. Solodov and B. F. Svaiter May 14, 1998 (Revised March 12, 1999) ABSTRACT We present
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationA double projection method for solving variational inequalities without monotonicity
A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014
More informationSummary Notes on Maximization
Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject
More informationAlgorithms for nonlinear programming problems II
Algorithms for nonlinear programming problems II Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects
More informationA QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING
A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the
More informationLECTURE 10 LECTURE OUTLINE
LECTURE 10 LECTURE OUTLINE Min Common/Max Crossing Th. III Nonlinear Farkas Lemma/Linear Constraints Linear Programming Duality Convex Programming Duality Optimality Conditions Reading: Sections 4.5, 5.1,5.2,
More informationOn Total Convexity, Bregman Projections and Stability in Banach Spaces
Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,
More informationA globally convergent Levenberg Marquardt method for equality-constrained optimization
Computational Optimization and Applications manuscript No. (will be inserted by the editor) A globally convergent Levenberg Marquardt method for equality-constrained optimization A. F. Izmailov M. V. Solodov
More informationA FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS
Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh
More informationLecture 9 Sequential unconstrained minimization
S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities
More informationNonsymmetric potential-reduction methods for general cones
CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationAn Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1
An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints 1 Nobuo Yamashita 2, Christian Kanzow 3, Tomoyui Morimoto 2, and Masao Fuushima 2 2 Department of Applied
More informationA Potential Reduction Method for Harmonically Convex Programming 1
A Potential Reduction Method for Harmonically Convex Programming 1 J. F. Sturm 2 and S. Zhang 3 Communicated by Z.Q. Luo 1. The authors like to thank Dr. Hans Nieuwenhuis for carefully reading this paper
More information