Advanced Continuous Optimization

University Paris-Saclay, Master Program in Optimization
Advanced Continuous Optimization
J. Ch. Gilbert (INRIA Paris-Rocquencourt)
September 26, 2017

Lectures: September 18, 2017 to November 6, 2017
Examination: November 13, 2017

Outline
1 Background
  Convex analysis
  Nonsmooth analysis
  Optimization
2 Optimality conditions
  First order optimality conditions for (P_G)
3 Linearization methods
  The Josephy-Newton algorithm for inclusions
  The SQP algorithm in constrained optimization
  The semismooth Newton method for equations
4 Semidefinite optimization
  Problem definition
  Existence of solution and optimality conditions
  An interior point algorithm

Preliminaries: Outline of the course

Three parts (60h)
1 Advanced Continuous Optimization I (30h)
  Theory and algorithms in smooth nonconvex optimization
  Goal: improve the basic knowledge in optimization with some advanced concepts and algorithms
  Period: September 18 to November 6 (examination: November 13)
2 Advanced Continuous Optimization II (20h)
  Implementation of algorithms (in Matlab, on an easy-to-implement concrete problem)
  Goal: improve algorithm understanding
  Period: November 20 to December 18 (examination: January, ??)
3 Specialized course by Jacek Gondzio (10h)

Preliminaries: Some features of the course

Smooth nonconvex optimization. Dealing with the general optimization problem

  (P_G)   inf_{x∈E} f(x)  subject to  c(x) ∈ G,

where f : E → ℝ, c : E → F, and G is a closed convex set in F. The latter includes
- the nonlinear optimization problem

  (P_EI)   inf_{x∈E} f(x)  subject to  c_E(x) = 0 and c_I(x) ≤ 0,

  where f : E → ℝ, E and I form a partition of [1:m], and c_E : E → ℝ^{m_E} and c_I : E → ℝ^{m_I} are smooth (possibly nonconvex) functions,
- the linear semidefinite optimization problem

  (P_SDO)   inf_{X∈S^n} ⟨C, X⟩  subject to  A(X) = b and X ⪰ 0,

  where C ∈ S^n, A : S^n → ℝ^m is linear, and b ∈ ℝ^m.
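To fix ideas, here is a minimal, purely illustrative instance of (P_EI) sketched in Python (the course itself implements algorithms in Matlab); the problem data and the helper names (`is_feasible`, `active_inequalities`) are assumptions made for this example, not part of the course material.

```python
# A hypothetical instance of (P_EI) in R^2, for illustration only:
#   minimize   f(x) = x1 + x2
#   subject to c_E(x) = x1^2 + x2^2 - 1 = 0   (one equality constraint)
#              c_I(x) = -x2 <= 0              (one inequality constraint)
def f(x):
    return x[0] + x[1]

def c_E(x):
    return [x[0] ** 2 + x[1] ** 2 - 1.0]

def c_I(x):
    return [-x[1]]

def is_feasible(x, tol=1e-8):
    """x belongs to X_EI iff c_E(x) = 0 and c_I(x) <= 0 (up to tol)."""
    return all(abs(v) <= tol for v in c_E(x)) and all(v <= tol for v in c_I(x))

def active_inequalities(x, tol=1e-8):
    """Indices of inequality constraints with c_i(x) = 0 at x."""
    return [i for i, v in enumerate(c_I(x)) if abs(v) <= tol]

x = (-1.0, 0.0)                # on the unit circle, with x2 = 0
print(is_feasible(x))          # True: both constraints hold
print(active_inequalities(x))  # [0]: the inequality -x2 <= 0 is active
```

Checking feasibility and detecting the active set like this is the first step of most of the algorithms discussed later (SQP in particular works with the active inequality constraints).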

Preliminaries: Some features of the course

Finite dimension: E and F are Euclidean vector spaces
- The closed unit ball is compact
- A bounded sequence has a converging subsequence
Nothing is said about optimal control problems (see other courses)
Emphasis is put on
- theory: optimality conditions of problem (P_G) and many other topics,
- algorithms: their analysis (scope, convergence, speed of convergence, strong and weak features, ...).

Preliminaries: Outline of the course, Advanced Continuous Optimization I

Anticipated program (we'll see...)
1 Background in convex analysis and optimization
  Convex analysis: dual cone, Farkas lemma, asymptotic cone, ...
  Nonsmooth analysis: multifunction, subdifferential, ...
  Optimization: optimality conditions, ...
2 Abstract optimality conditions
  First order optimality conditions for (P_G)
  Second order optimality conditions for (P_EI) (or (P_G)?)
3 Linearization methods (Newton)
  Josephy-Newton algorithm for nonlinear inclusions
  SQP algorithm for (P_EI)
  Semismooth Newton algorithm for nonlinear equations

Preliminaries: Outline of the course, Advanced Continuous Optimization I (continued)

4 Proximal and augmented Lagrangian methods
  Proximal algorithm for solving inclusions and optimization problems
  Augmented Lagrangian algorithm in optimization
  Application to convex quadratic optimization (error bound and global linear convergence)
5 Semidefinite optimization
  Theory (optimality conditions, duality, ...)
  Algorithms (interior point methods, ...)

Preliminaries: Outline of the course, Advanced Continuous Optimization II

Goal: mastering the implementation of an algorithm
Choose one of the following topics (20h)
1 Implementation of an SQP algorithm, application to the hanging chain problem
2 Implementation of an SDO algorithm, application to the OPF problem (rank relaxation of its QCQP modelling)

Background: Convex analysis (notation)

ℝ̄ := ℝ ∪ {−∞, +∞}.
ℝ₊ := {t ∈ ℝ : t ≥ 0} and ℝ₊₊ := {t ∈ ℝ : t > 0}.
B, B̄: open and closed unit balls centered at the origin; for r ≥ 0: B(x,r) = x + rB and B̄(x,r) = x + rB̄.
E, F, G usually denote Euclidean vector spaces.

Background: Convex analysis (relative interior I)

The affine hull of P ⊆ E is the smallest affine space containing P:
  aff P := ∩ {A : A is an affine space containing P}.
The relative interior of P ⊆ E is its interior in aff P:
  ri P := {x ∈ P : ∃ r > 0 such that [B̄(x,r) ∩ aff P] ⊆ P}.
In finite dimension, there holds
  C convex and nonempty  ⟹  ri C ≠ ∅ and aff C = aff(ri C).

Background: Convex analysis (relative interior II)

Proposition (relative interior criterion)
Let C be a nonempty convex set and x ∈ E. Then
  x ∈ ri C and y ∈ C̄  ⟹  [x, y[ ⊆ ri C,
  x ∈ ri C  ⟺  ∀ x₀ ∈ C (or aff C), ∃ t > 1 : (1−t)x₀ + tx ∈ C.
Let C be a nonempty convex set. Then
  ri C is convex, C̄ is convex and aff C̄ = aff C,
  ri C̄ = ri C and cl(ri C) = C̄ (i.e., the last operation prevails).
A point x ∈ C is said to be absorbing if ∀ d ∈ E, ∃ t > 0 such that x + td ∈ C.
  x ∈ int C  ⟺  x is absorbing.

Background: Convex analysis (dual cone and Farkas lemma)

The (positive) dual cone of a set P ⊆ E is defined by
  P⁺ := {d ∈ E : ⟨d, x⟩ ≥ 0, ∀ x ∈ P}.
The (negative) dual cone of a set P is P⁻ := −P⁺.

Lemma (Farkas, generalized)
Let E and F be two Euclidean spaces, A : E → F a linear map, and K a nonempty convex cone of E. Then
  cl A(K) = {y ∈ F : A*y ∈ K⁺}⁺.
Remarks
  A* : F → E is defined by: ∀ (x,y) ∈ E × F, ⟨A*y, x⟩ = ⟨y, Ax⟩.
  One cannot get rid of the closure on A(K) in general.
  If K is polyhedral, then A(K) is polyhedral, hence closed.
  For K = E, one recovers R(A) = N(A*)⊥.
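For a polyhedral cone K = cone{g₁, ..., g_k} generated by finitely many vectors, membership in the dual cone K⁺ reduces to finitely many inequalities ⟨d, g_i⟩ ≥ 0, since every element of K is a nonnegative combination of the generators. A small sketch in Python (the generators and the helper name `in_dual_cone` are assumptions for this example):

```python
def dot(u, v):
    """Euclidean inner product of two vectors given as tuples."""
    return sum(a * b for a, b in zip(u, v))

def in_dual_cone(d, generators, tol=1e-12):
    """d is in K^+ for K = cone{g_1,...,g_k} iff <d, g_i> >= 0 for every
    generator g_i (every x in K is a nonnegative combination of the g_i)."""
    return all(dot(d, g) >= -tol for g in generators)

# K = R^2_+ is generated by the two coordinate vectors; K^+ = K here.
gens = [(1.0, 0.0), (0.0, 1.0)]
print(in_dual_cone((2.0, 3.0), gens))   # True: both inner products are >= 0
print(in_dual_cone((1.0, -1.0), gens))  # False: <d, e_2> = -1 < 0
```

This finite test is also why, as the slide notes, the polyhedral case is benign: A(K) is then closed and the Farkas identity holds without taking a closure.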

Background: Convex analysis (tangent and normal cones)

Tangent cone. Let C be a convex set of E and x ∈ C.
The cone of feasible directions for C at x is
  T^f_x C := ℝ₊(C − x).
The tangent cone to C at x is the closure of the previous one:
  T_x C ≡ T_C(x) := cl(ℝ₊(C − x)).
Normal cone. Let C be a convex set of E and x ∈ C. The normal cone to C at x is
  N_x C ≡ N_C(x) := {d ∈ E : ⟨x′ − x, d⟩ ≤ 0, ∀ x′ ∈ C}.
There hold
  N_x C = (T_x C)⁻ and T_x C = (N_x C)⁻.

Background: Convex analysis (convex polyhedron)

A convex polyhedron in E is a set of the form P = {x ∈ E : Ax ≤ b}, where A : E → ℝ^m is a linear map and b ∈ ℝ^m. It is a closed set. For x ∈ P, define
  I(x) := {i ∈ [1:m] : (Ax − b)_i = 0}.
Properties
  If T : E → F is linear, then T(P) is a convex polyhedron.
  If P₁ and P₂ are polyhedra, then P₁ + P₂ is a polyhedron.
  T_x P = T^f_x P = {d ∈ E : (Ad)_{I(x)} ≤ 0}.
  I(x₁) ⊆ I(x₂)  ⟹  T_{x₁} P ⊇ T_{x₂} P.
  N_x P = cone{A*e_i : i ∈ I(x)}.
  I(x₁) ⊆ I(x₂)  ⟹  N_{x₁} P ⊆ N_{x₂} P.
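The tangent cone formula for a polyhedron is directly computable: find the active rows I(x), then test (Ad)_{I(x)} ≤ 0. A sketch under assumed data (the square [0,1]², and helper names `active_set`, `in_tangent_cone` invented for this illustration):

```python
def matvec(A, x):
    """Product of a matrix (list of row tuples) with a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def active_set(A, b, x, tol=1e-9):
    """I(x): indices i with (Ax - b)_i = 0, for x in P = {x : Ax <= b}."""
    Ax = matvec(A, x)
    return [i for i in range(len(b)) if abs(Ax[i] - b[i]) <= tol]

def in_tangent_cone(A, b, x, d, tol=1e-9):
    """d in T_x P iff (Ad)_i <= 0 for every active index i (slide formula)."""
    Ad = matvec(A, d)
    return all(Ad[i] <= tol for i in active_set(A, b, x))

# P = the unit square [0,1]^2, written as Ax <= b with four rows.
A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [1.0, 0.0, 1.0, 0.0]
x = (0.0, 0.0)                                 # a vertex: rows 2 and 4 active
print(active_set(A, b, x))                     # [1, 3]
print(in_tangent_cone(A, b, x, (1.0, 1.0)))    # True: points into the square
print(in_tangent_cone(A, b, x, (-1.0, 0.0)))   # False: leaves the square
```

The monotonicity property on the slide shows up here too: at a vertex more rows are active, so the tangent cone is smaller than at a point of a face.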

Background: Convex analysis (asymptotic cone I)

Let E be a vector space of finite dimension and C be a nonempty closed convex set of E. The asymptotic cone of C is
  C^∞ := {d ∈ E : C + ℝ₊d ⊆ C} = {d ∈ E : C + d ⊆ C}.
Properties
  C^∞ is closed.
  For any x ∈ C:
    C^∞ = {d ∈ E : x + ℝ₊d ⊆ C} = ∩_{t>0} (C − x)/t
        = {d ∈ E : ∃ {x_k} ⊆ C, ∃ {t_k} → ∞ such that x_k/t_k → d}.
  Boundedness by calculation: C is bounded ⟺ C^∞ = {0}.

Background: Convex analysis (asymptotic cone II)

Two calculation rules (there are many more)
  If K ≠ ∅ is a closed convex cone, then K^∞ = K.
  For an arbitrary collection {C_i}_{i∈I} of closed convex sets C_i with nonempty intersection:
    (∩_{i∈I} C_i)^∞ = ∩_{i∈I} C_i^∞.
Example. Let A : E → F and B : E → G be linear maps, a ∈ F, b ∈ G, K ⊆ G be a closed convex cone, and
  P = {x ∈ E : Ax = a, Bx ∈ b + K}.
Then P^∞ = {d ∈ E : Ad = 0, Bd ∈ K}.
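Taking K = −ℝ^m₊ in the example above, the asymptotic cone of P = {x : Ax = a, Bx ≤ b} is {d : Ad = 0, Bd ≤ 0}: a direction in it can be added to any feasible point without leaving P. A small numerical check on an assumed half-line example (helper name `in_asymptotic_cone` is mine):

```python
def matvec(A, x):
    """Product of a matrix (list of row tuples) with a vector."""
    return [sum(r * xi for r, xi in zip(row, x)) for row in A]

def in_asymptotic_cone(A, B, d, tol=1e-9):
    """For P = {x : Ax = a, Bx <= b}, the slide gives
    P^inf = {d : Ad = 0, Bd <= 0}; test membership of d."""
    return (all(abs(v) <= tol for v in matvec(A, d))
            and all(v <= tol for v in matvec(B, d)))

# P = {x in R^2 : x1 - x2 = 0, -x1 <= 0}: the half-line {(t, t) : t >= 0}.
A = [(1.0, -1.0)]
B = [(-1.0, 0.0)]
print(in_asymptotic_cone(A, B, (1.0, 1.0)))    # True: the half-line's direction
print(in_asymptotic_cone(A, B, (-1.0, -1.0)))  # False: P stops at the origin

# Consistency check: x + t*d stays in P for a feasible x and all t >= 0.
x, d = (2.0, 2.0), (1.0, 1.0)
for t in (0.0, 1.0, 10.0):
    y = tuple(xi + t * di for xi, di in zip(x, d))
    assert abs(y[0] - y[1]) <= 1e-9 and -y[0] <= 1e-9
```

The boundedness criterion on the previous slide then reads: this P is unbounded precisely because its asymptotic cone contains the nonzero direction (1, 1).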

Background: Nonsmooth analysis (multifunction I)

A multifunction T (or set-valued mapping) between two sets E and F is a function from E to P(F), the set of the subsets of F. Notation:
  T : E ⇉ F : x ↦ T(x) ⊆ F.
Same concept as a binary relation (i.e., the data of a part of E × F).
The graph, the domain, and the range of T : E ⇉ F are defined by
  G(T) := {(x,y) ∈ E × F : y ∈ T(x)},
  D(T) := {x ∈ E : (x,y) ∈ G(T) for some y ∈ F} = π_E G(T),
  R(T) := {y ∈ F : (x,y) ∈ G(T) for some x ∈ E} = π_F G(T).
The image of a part P ⊆ E by T is T(P) := ∪_{x∈P} T(x).

Background: Nonsmooth analysis (multifunction II)

The inverse of a multifunction T : E ⇉ F (it always exists!) is the multifunction T⁻¹ : F ⇉ E defined by
  T⁻¹(y) := {x ∈ E : y ∈ T(x)}.
Hence y ∈ T(x) ⟺ x ∈ T⁻¹(y).
When E, F are topological/metric spaces, a multifunction T : E ⇉ F is said to be
  closed at x ∈ E if y ∈ T(x) whenever {(x_k, y_k)} ⊆ G(T) converges to (x,y),
  closed if G(T) is closed in E × F (i.e., T is closed at any x ∈ E),
  upper semicontinuous at x ∈ E if ∀ ε > 0, ∃ δ > 0, ∀ x′ ∈ x + δB, there holds T(x′) ⊆ T(x) + εB.
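For a multifunction with finite graph, the inverse is literally obtained by flipping the pairs of G(T). A toy sketch (the dictionary representation is an assumption made for this illustration):

```python
from collections import defaultdict

def inverse(T):
    """Inverse of a finite multifunction given as {x: set of images}:
    T^{-1}(y) = {x : y in T(x)}, so y in T(x) iff x in T^{-1}(y)."""
    Tinv = defaultdict(set)
    for x, ys in T.items():
        for y in ys:
            Tinv[y].add(x)
    return dict(Tinv)

# A toy multifunction T : {1, 2, 3} -> P({"a", "b"}).
T = {1: {"a"}, 2: {"a", "b"}, 3: set()}   # 3 is outside the domain D(T)
Tinv = inverse(T)
print(Tinv)   # {'a': {1, 2}, 'b': {2}}

# The equivalence y in T(x) <=> x in T^{-1}(y), checked exhaustively:
assert all((y in T[x]) == (x in Tinv.get(y, set()))
           for x in T for y in ("a", "b"))
```

Note that D(T) = {1, 2} and R(T) = {"a", "b"} here: the inverse always exists, but its domain is only the range of T.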

Background: Nonsmooth analysis (multifunction III)

When E, F are vector spaces, a multifunction T : E ⇉ F is said to be convex if G(T) is convex in E × F. This is equivalent to saying that ∀ (x₀, x₁) ∈ E² and ∀ t ∈ [0,1]:
  T((1−t)x₀ + tx₁) ⊇ (1−t)T(x₀) + tT(x₁).
Note that
  T convex and C convex in E  ⟹  T(C) convex in F.

Background: Optimization (a generic problem)

One considers the generic optimization problem
  (P_X)   min f(x)  subject to  x ∈ X,
where
  f : E → ℝ (E is a Euclidean vector space),
  X is a set of E (possibly nonconvex).

Background: Optimization (tangent cone to a nonconvex set)

A direction d ∈ E is tangent to X ⊆ E at x ∈ X if
  ∃ {x_k} ⊆ X, ∃ {t_k} ↓ 0 : (x_k − x)/t_k → d.
The tangent cone to X at x is the set of tangent directions. It is denoted by T_x X or T_X(x).
Properties. Let x ∈ X.
  T_x X is closed.
  X is convex  ⟹  T_x X is convex and T_x X = cl(ℝ₊(X − x)).

Background: Optimization (Peano-Kantorovich NC1)

Theorem (Peano-Kantorovich NC1)
If x* is a local minimizer of (P_X) and f is differentiable at x*, then
  ∇f(x*) ∈ (T_{x*} X)⁺.
The gradient of f at x is denoted by ∇f(x) ∈ E and is defined from the derivative f′(x) by
  ∀ d ∈ E : ⟨∇f(x), d⟩ = f′(x)·d.
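The condition ∇f(x*) ∈ (T_{x*}X)⁺ says that no tangent direction is a descent direction; when X is convex it becomes ⟨∇f(x*), x − x*⟩ ≥ 0 for all x ∈ X. A finite-difference check on an assumed example (the problem data and helper `grad_fd` are mine, for illustration only):

```python
def grad_fd(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f: R^n -> R."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Minimize f(x) = (x1 - 2)^2 over the convex set X = [0,1] x [0,1].
# The unconstrained minimizer x1 = 2 is outside X, so minimizers over X
# have x1 = 1; take the candidate x* = (1, 0.5).
f = lambda x: (x[0] - 2.0) ** 2
xstar = [1.0, 0.5]
g = grad_fd(f, xstar)        # approximately (-2, 0)

# NC1 with X convex: <grad f(x*), x - x*> >= 0 for all x in X.
# By linearity in x it suffices to check the vertices of the box.
vertices = [(0, 0), (0, 1), (1, 0), (1, 1)]
ok = all(sum(gi * (vi - si) for gi, vi, si in zip(g, v, xstar)) >= -1e-6
         for v in vertices)
print(ok)   # True: the gradient lies in the dual of the tangent cone
```

At an interior minimizer the tangent cone is all of E and the condition collapses to the familiar ∇f(x*) = 0.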

Background: Optimization (NSC1 for convex problems)

Recall that a convex function f : E → ℝ ∪ {+∞} has directional derivatives f′(x; d) ∈ ℝ̄ for all x ∈ dom f and all d ∈ E.

Proposition (NSC1 for a convex problem)
Suppose that X is convex, f is convex on X, and x* ∈ X. Then x* is a global solution to (P_X) if and only if
  ∀ x ∈ X : f′(x*; x − x*) ≥ 0.
Proof. Straightforward, using the convexity inequality
  ∀ x ∈ X : f(x) ≥ f(x*) + f′(x*; x − x*).

Background: Optimization (problem (P_EI))

A generic form of the nonlinear optimization problem:
  (P_EI)   inf_x f(x)  subject to  c_E(x) = 0, c_I(x) ≤ 0,
where f : E → ℝ, E and I form a partition of [1:m], and c_E : E → ℝ^{m_E} and c_I : E → ℝ^{m_I} are smooth (possibly nonconvex) functions.
The feasible set is denoted by
  X_EI := {x ∈ E : c_E(x) = 0, c_I(x) ≤ 0}.
We say that an inequality constraint i ∈ I is active at x ∈ X_EI if c_i(x) = 0. The set of indices of active inequality constraints is denoted by
  I⁰(x) := {i ∈ I : c_i(x) = 0}.
(P_EI) is said to be convex if f is convex, c_E is affine, and c_I is componentwise convex.

Background: Optimization (problem (P_EI), NC1 or KKT conditions)

Theorem (NC1 for (P_EI), Karush-Kuhn-Tucker (KKT))
If x* is a local minimum of (P_EI), if f and c = (c_E, c_I) are differentiable at x*, and if c is qualified for representing X_EI at x* in the sense (2) below, then there exists a multiplier λ* ∈ ℝ^m such that
  ∇_x ℓ(x*, λ*) = 0,           (1a)
  c_E(x*) = 0,                 (1b)
  0 ≤ (λ*)_I ⊥ c_I(x*) ≤ 0.    (1c)
Some explanations.
The Lagrangian of (P_EI) is the function
  ℓ : (x, λ) ∈ E × ℝ^m ↦ ℓ(x, λ) = f(x) + λᵀc(x).
The complementarity condition (1c) means
  (λ*)_I ≥ 0, (λ*)_Iᵀ c_I(x*) = 0, and c_I(x*) ≤ 0.

Background: Optimization (problem (P_EI), constraint qualification)

The tangent cone T_x X_EI is always contained in the linearizing cone
  T′_x X_EI := {d ∈ E : c′_E(x)·d = 0, c′_{I⁰(x)}(x)·d ≤ 0}.
It is said that the constraint c is qualified for representing X_EI at x if
  T_x X_EI = T′_x X_EI.    (2)
Sufficient conditions of qualification: continuity and/or differentiability and one of the following
  (CQ-A)  c_{E∪I⁰(x)} is affine near x (Affinity),
  (CQ-S)  c_E is affine, c_{I⁰(x)} is componentwise convex, ∃ x̂ ∈ X_EI such that c_{I⁰(x)}(x̂) < 0 (Slater),
  (CQ-LI) Σ_{i∈E∪I⁰(x)} α_i ∇c_i(x) = 0  ⟹  α = 0 (Linear Independence),
  (CQ-MF) Σ_{i∈E∪I⁰(x)} α_i ∇c_i(x) = 0 and α_{I⁰(x)} ≥ 0  ⟹  α = 0 (Mangasarian-Fromovitz).
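The KKT system (1a)-(1c) can be verified numerically at a candidate pair (x*, λ*). A sketch on a tiny inequality-constrained problem; the problem data, the known solution, and the helper `kkt_residual` are assumptions made for this illustration:

```python
# Check KKT for:  min f(x) = x1^2 + x2^2   s.t.  c(x) = 1 - x1 - x2 <= 0.
# Candidate: x* = (0.5, 0.5) with multiplier lambda* = 1 (constraint active).

def grad_f(x):
    return [2 * x[0], 2 * x[1]]

def c(x):
    return 1.0 - x[0] - x[1]

def grad_c(x):
    return [-1.0, -1.0]

def kkt_residual(x, lam):
    """Largest violation among: (1a) grad f + lam * grad c = 0,
    lam >= 0, c(x) <= 0, and the complementarity product lam * c(x) = 0."""
    stat = max(abs(gf + lam * gc) for gf, gc in zip(grad_f(x), grad_c(x)))
    return max(stat, -lam, c(x), abs(lam * c(x)))

print(kkt_residual([0.5, 0.5], 1.0))         # 0.0: (x*, lambda*) is a KKT pair
print(kkt_residual([0.0, 0.0], 0.0) > 0.1)   # True: the origin is infeasible
```

Since this instance is convex and Slater's condition holds, the vanishing KKT residual certifies that x* = (0.5, 0.5) is a global minimizer.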

Background: Optimization (problem (P_EI), more on constraint qualification)

Proposition (other forms of (CQ-MF))
Suppose that c_{E∪I⁰(x)} is differentiable at x ∈ X_EI. Then the following properties are equivalent:
  (i) (CQ-MF) holds at x,
  (ii) ∀ v ∈ ℝ^m, ∃ d ∈ E : c′_E(x)·d = v_E and c′_{I⁰(x)}(x)·d ≤ v_{I⁰(x)},
  (iii) c′_E(x) is surjective and ∃ d ∈ E : c′_E(x)·d = 0 and c′_{I⁰(x)}(x)·d < 0.

Optimality conditions: First order optimality conditions for (P_G) (the problem) [WP.fr]

We consider the problem
  (P_G)   min f(x)  subject to  c(x) ∈ G,
where
  f : E → ℝ (E is a Euclidean vector space),
  c : E → F (F is another Euclidean vector space),
  G is a nonempty closed convex set in F.
The feasible set is denoted by
  X_G := {x ∈ E : c(x) ∈ G} = c⁻¹(G).
(P_G) is said to be convex if f is convex and T : E ⇉ F : x ↦ c(x) − G is convex (then X_G is convex).

Optimality conditions: First order optimality conditions for (P_G) (tangent and linearizing cones, qualification)

Proposition (tangent and linearizing cones)
If c is differentiable at x ∈ X_G, then
  T_x X_G ⊆ T′_x X_G := {d ∈ E : c′(x)·d ∈ T_{c(x)} G}.
T′_x X_G is called the linearizing cone to X_G at x. The equality T_x X_G = T′_x X_G is not guaranteed.
The constraint function c is said to be qualified for representing X_G at x if
  T_x X_G = T′_x X_G,                  (3a)
  c′(x)*[(T_{c(x)} G)⁺] is closed.     (3b)

Optimality conditions: First order optimality conditions for (P_G) (NC1)

Theorem (NC1 for (P_G))
If x* is a local minimum of (P_G), if f and c are differentiable at x*, and if c is qualified for representing X_G at x* in the sense (3a)-(3b), then
1 there exists a multiplier λ* ∈ F such that
  ∇f(x*) + c′(x*)*λ* = 0,    (4a)
  λ* ∈ N_{c(x*)} G;          (4b)
2 if, furthermore, G ≡ K is a convex cone, then (4b) can be written
  K⁻ ∋ λ* ⊥ c(x*) ∈ K   or   c(x*) ∈ N_{λ*} K⁻.    (4c)
One recognizes in (4a) the gradient wrt x of the Lagrangian
  ℓ : E × F → ℝ : (x, λ) ↦ ℓ(x, λ) = f(x) + ⟨λ, c(x)⟩
and in (4c) the complementarity conditions.

Optimality conditions: First order optimality conditions for (P_G) (SC1 for convex problem)

Theorem (SC1 for convex (P_G))
If (P_G) is convex, if f and c are differentiable at x* ∈ X_G, and if there is a λ* ∈ F such that (x*, λ*) satisfies (4a)-(4b), then x* is a global minimum of (P_G).

The goal of the next slides is to highlight and analyze a condition (Robinson's condition) that
1 provides an error bound for the perturbed feasible set {x ∈ E : c(x) ∈ y + G} (y small),
2 claims the stability of the feasible set X_G,
3 ensures that c is qualified for representing X_G.

Optimality conditions: First order optimality conditions for (P_G) (Robinson's condition) [WP.fr]

We say that Robinson's condition holds at x ∈ X_G if
  (CQ-R)   0 ∈ int ( c(x) + c′(x)·E − G ).    (5)
We will see below that it is useful since it
- provides an error bound for small perturbations of X_G,
- claims the stability of the feasible set X_G with respect to small perturbations,
- shows that the constraint function c is qualified for representing X_G at x, in the sense (3a)-(3b),
- generalizes to (P_G) the Mangasarian-Fromovitz constraint qualification (CQ-MF) for (P_EI).

Optimality conditions: First order optimality conditions for (P_G) (Robinson's error bound I) [20]

Theorem ((CQ-R) and metric regularity)
If c is continuously differentiable near x₀ ∈ X_G, then the following properties are equivalent:
1 (CQ-R) holds at x = x₀,
2 there exists a constant μ ≥ 0 such that ∀ (x,y) near (x₀, 0):
  dist(x, c⁻¹(y + G)) ≤ μ dist(c(x), y + G).    (6)
Condition 2 is named metric regularity since it is equivalent to that property (see its definition below) for the multifunction T : E ⇉ F : x ↦ c(x) − G.

Optimality conditions: First order optimality conditions for (P_G) (Robinson's error bound II) [20]

An error bound is an estimation of the distance to a set by a quantity that is easier to compute.

Corollary (Robinson's error bound)
If c is continuously differentiable near x₀ ∈ X_G and if (CQ-R) holds at x = x₀, then there exists a constant μ ≥ 0 such that ∀ x near x₀:
  dist(x, X_G) ≤ μ dist(c(x), G).    (7)
Remarks
  dist(x, X_G) is often difficult to evaluate in E, while dist(c(x), G) is often easy to evaluate in F (this is the case if c(x) is easy to evaluate and G is simple).
  The bound is useful in theory (e.g., for proving that (CQ-R) implies constraint qualification), in algorithmics (e.g., for proving [the speed of] convergence), and in practice (for estimating dist(x, X_G)).
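The bound (7) can be seen on a simple constraint. Take c(x) = x₁² + x₂² − 1 and G = {0}, so X_G is the unit circle; then dist(x, X_G) = | ‖x‖ − 1 | and dist(c(x), G) = |c(x)|, and μ = 1 works for every x, since | ‖x‖ − 1 | = | ‖x‖² − 1 | / (‖x‖ + 1) ≤ |c(x)|. A numerical check under these assumptions (the helper names are mine):

```python
import math

def c(x):
    """Constraint function; the feasible set X_G is c^{-1}({0}), the unit circle."""
    return x[0] ** 2 + x[1] ** 2 - 1.0

def dist_to_feasible(x):
    """Distance from x to the unit circle: | ||x|| - 1 |."""
    return abs(math.hypot(x[0], x[1]) - 1.0)

def dist_constraint(x):
    """Distance from c(x) to G = {0}: |c(x)|."""
    return abs(c(x))

# Robinson-type error bound dist(x, X_G) <= mu * dist(c(x), G) with mu = 1:
# | ||x|| - 1 | = | ||x||^2 - 1 | / (||x|| + 1) <= |c(x)| since ||x|| + 1 >= 1.
points = [(1.1, 0.0), (0.5, 0.5), (0.0, 2.0), (-0.9, 0.1)]
for x in points:
    assert dist_to_feasible(x) <= 1.0 * dist_constraint(x) + 1e-12
print("error bound holds on all sampled points")
```

This is exactly the practical use mentioned on the slide: the easily computed residual |c(x)| controls the hard-to-compute distance to the feasible set.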

Optimality conditions: First order optimality conditions for (P_G) (stability wrt small perturbations)

The estimate (6) readily implies the following stability result.

Corollary (stability wrt small perturbations)
If c is continuously differentiable near some x₀ ∈ X_G and if (CQ-R) holds at x₀, then, for all small y ∈ F:
  {x ∈ E : c(x) ∈ y + G} ≠ ∅.

Optimality conditions: First order optimality conditions for (P_G) (open multifunction theorem I) [25, 22, 5]

Let T : E ⇉ F be a multifunction and B̄_E and B̄_F be the closed unit balls of E and F.
T is open at (x₀, y₀) ∈ G(T) with ratio ρ > 0 if there exist a neighborhood W of (x₀, y₀) and a radius r_max > 0 such that for all (x,y) ∈ W ∩ G(T) and r ∈ [0, r_max]:
  y + ρr B̄_F ⊆ T(x + r B̄_E).
T is metric regular at (x₀, y₀) ∈ G(T) with modulus μ > 0 if for all (x,y) near (x₀, y₀):
  dist(x, T⁻¹(y)) ≤ μ dist(y, T(x)).
Difference: (x,y) ∈ G(T) for the openness, but not for the metric regularity.
[Figure: two sketches of G(T), one showing the openness ratio ρ as a slope ρ around (x₀, y₀), the other showing metric regularity as a slope 1/μ relating dist(y, T(x)) and dist(x, T⁻¹(y)).]
Both estimate the change in T(x) with x (but these are not infinitesimal notions), either from inside G(T) (openness) or from outside (metric regularity).

Optimality conditions: First order optimality conditions for (P_G) (open multifunction theorem II) [25, 22, 5]

Extension of the open mapping theorem for linear (continuous) maps to (nonlinear) convex multifunctions.

Theorem (open multifunction theorem, finite dimension)
If T : E ⇉ F is convex and (x₀, y₀) ∈ G(T), then the following properties are equivalent:
1 y₀ ∈ int R(T),
2 for all r > 0, y₀ ∈ int T(x₀ + r B̄_E),
3 T is open at (x₀, y₀) with ratio ρ > 0,
4 T is metric regular at (x₀, y₀) with modulus μ > 0.
One can take μ = 1/ρ in point 4 if ρ is given by point 3.

Optimality conditions: First order optimality conditions for (P_G) (metric regularity diffusion I) [6]

φ : E → ℝ₊ is subcontinuous if for all sequences {x_k} → x such that φ(x_k) → 0, there holds φ(x) = 0.
φ : E → ℝ₊ is r-steep at x ∈ E, with r ∈ [0,1[, if for all x′ ∈ B̄(x, φ(x)/(1−r)), there exists x″ ∈ B̄(x′, φ(x′)) such that φ(x″) ≤ r φ(x′).

Lemma (Lyusternik [17, 6])
If φ : E → ℝ₊ is subcontinuous and r-steep at x ∈ dom φ, with r ∈ [0,1[, then φ has a zero in B̄(x, φ(x)/(1−r)).

Optimality conditions: First order optimality conditions for (P_G) (metric regularity diffusion II) [6]

Theorem (metric regularity diffusion)
Suppose that
  c : E → F is a continuous function and G a closed convex set of F,
  T : E ⇉ F : x ↦ c(x) − G is μ-metric regular at (x₀, y₀) ∈ G(T),
  δ : E → F is Lipschitz near x₀ with modulus L < 1/μ,
  T̃ : E ⇉ F : x ↦ c(x) + δ(x) − G.
Then T̃ is also metric regular at (x₀, y₀ + δ(x₀)) ∈ G(T̃) with modulus μ/(1 − Lμ): for all (x,y) near (x₀, y₀ + δ(x₀)), there holds
  dist(x, T̃⁻¹(y)) ≤ (μ/(1 − Lμ)) dist(y, T̃(x)).    (8)
No need of convexity.

References

[1] M.F. Anjos, J.B. Lasserre (2012). Handbook on Semidefinite, Conic and Polynomial Optimization. Springer.
[2] A. Ben-Tal, A. Nemirovski (2001). Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization 2. SIAM.
[3] J.F. Bonnans (1994). Local analysis of Newton-type methods for variational inequalities and nonlinear programming. Applied Mathematics and Optimization, 29.
[4] J.F. Bonnans, J.Ch. Gilbert, C. Lemaréchal, C. Sagastizábal (2006). Numerical Optimization: Theoretical and Practical Aspects (second edition). Universitext. Springer Verlag, Berlin.
[5] J.F. Bonnans, A. Shapiro (2000). Perturbation Analysis of Optimization Problems. Springer Verlag, New York.
[6] R. Cominetti (1990). Metric regularity, tangent sets, and second-order optimality conditions. Applied Mathematics and Optimization, 21(1).
[7] A.L. Dontchev, R.T. Rockafellar (2009). Implicit Functions and Solution Mappings: A View from Variational Analysis. Springer Monographs in Mathematics. Springer.
[8] F. Facchinei, J.-S. Pang (2003). Finite-Dimensional Variational Inequalities and Complementarity Problems (two volumes). Springer Series in Operations Research. Springer.
[9] J.Ch. Gilbert (2015). Éléments d'Optimisation Différentiable: Théorie et Algorithmes. Syllabus de cours à l'ENSTA, Paris.
[10] S.-P. Han (1976). Superlinearly convergent variable metric algorithms for general nonlinear programming problems. Mathematical Programming, 11.
[11] S.-P. Han (1977). A globally convergent method for nonlinear programming. Journal of Optimization Theory and Applications, 22.
[12] A.F. Izmailov, M.V. Solodov (2014). Newton-Type Methods for Optimization and Variational Problems. Springer Series in Operations Research and Financial Engineering. Springer.
[13] N.H. Josephy (1979). Newton's method for generalized equations. Technical Summary Report 1965, Mathematics Research Center, University of Wisconsin, Madison, WI, USA.
[14] N.H. Josephy (1979). Quasi-Newton's method for generalized equations. Summary Report 1966, Mathematics Research Center, University of Wisconsin, Madison, WI, USA.
[15] D. Klatte, B. Kummer (2002). Nonsmooth Equations in Optimization: Regularity, Calculus, Methods and Applications. Kluwer Academic Publishers.
[16] B. Kummer (1988). Newton's method for nondifferentiable functions. In J. Guddat, B. Bank, H. Hollatz, P. Kall, D. Klatte, B. Kummer, K. Lommatzsch, L. Tammer, M. Vlach, K. Zimmerman (editors), Advances in Mathematical Optimization. Akademie-Verlag, Berlin.
[17] L. Lyusternik (1934). Conditional extrema of functionals. Math. USSR-Sb., 41.
[18] L. Qi (1993). Convergence analysis of some algorithms for solving nonsmooth equations. Mathematics of Operations Research, 18.
[19] L. Qi, J. Sun (1993). A nonsmooth version of Newton's method. Mathematical Programming, 58.
[20] S.M. Robinson (1976). Stability theory for systems of inequalities, part II: differentiable nonlinear systems. SIAM Journal on Numerical Analysis, 13.
[21] S.M. Robinson (1982). Generalized equations and their solutions, part II: applications to nonlinear programming. Mathematical Programming Study, 19.
[22] R.T. Rockafellar, R. Wets (1998). Variational Analysis. Grundlehren der mathematischen Wissenschaften 317. Springer.
[23] M. Ulbrich (2011). Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces. MPS-SIAM Series on Optimization 11. SIAM Publications, Philadelphia, PA, USA.
[24] H. Wolkowicz, R. Saigal, L. Vandenberghe (editors) (2000). Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. International Series in Operations Research & Management Science 27. Kluwer Academic Publishers.
[25] J. Zowe, S. Kurcyusz (1979). Regularity and stability for the mathematical programming problem in Banach spaces. Applied Mathematics and Optimization, 5(1).

To cite this version: Jean Charles Gilbert. Advanced Continuous Optimization. Master. France. 2015. HAL Id: cel-01249369, https://hal.inria.fr/cel-01249369.

More information

Conditioning of linear-quadratic two-stage stochastic programming problems

Conditioning of linear-quadratic two-stage stochastic programming problems Conditioning of linear-quadratic two-stage stochastic programming problems W. Römisch Humboldt-University Berlin Institute of Mathematics http://www.math.hu-berlin.de/~romisch (K. Emich, R. Henrion) (WIAS

More information

Corrections and additions for the book Perturbation Analysis of Optimization Problems by J.F. Bonnans and A. Shapiro. Version of March 28, 2013

Corrections and additions for the book Perturbation Analysis of Optimization Problems by J.F. Bonnans and A. Shapiro. Version of March 28, 2013 Corrections and additions for the book Perturbation Analysis of Optimization Problems by J.F. Bonnans and A. Shapiro Version of March 28, 2013 Some typos in the book that we noticed are of trivial nature

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS GERD WACHSMUTH Abstract. Kyparisis proved in 1985 that a strict version of the Mangasarian- Fromovitz constraint qualification (MFCQ) is equivalent to

More information

Robust Farkas Lemma for Uncertain Linear Systems with Applications

Robust Farkas Lemma for Uncertain Linear Systems with Applications Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Duality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities

Duality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

Lectures on Parametric Optimization: An Introduction

Lectures on Parametric Optimization: An Introduction -2 Lectures on Parametric Optimization: An Introduction Georg Still University of Twente, The Netherlands version: March 29, 2018 Contents Chapter 1. Introduction and notation 3 1.1. Introduction 3 1.2.

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints An Li and Jane J. Ye Abstract. In this paper we study an optimal control problem with nonsmooth

More information

On Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective Optimization Problems

On Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective Optimization Problems Int. Journal of Math. Analysis, Vol. 7, 2013, no. 60, 2995-3003 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijma.2013.311276 On Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective

More information

CONVERGENCE OF INEXACT NEWTON METHODS FOR GENERALIZED EQUATIONS 1

CONVERGENCE OF INEXACT NEWTON METHODS FOR GENERALIZED EQUATIONS 1 CONVERGENCE OF INEXACT NEWTON METHODS FOR GENERALIZED EQUATIONS 1 Asen L. Dontchev Mathematical Reviews, Ann Arbor, MI 48107-8604 R. T. Rockafellar Department of Mathematics, University of Washington,

More information

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

Lecture 8. Strong Duality Results. September 22, 2008

Lecture 8. Strong Duality Results. September 22, 2008 Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation

More information

Aubin Property and Uniqueness of Solutions in Cone Constrained Optimization

Aubin Property and Uniqueness of Solutions in Cone Constrained Optimization Preprint manuscript No. (will be inserted by the editor) Aubin Property and Uniqueness of Solutions in Cone Constrained Optimization Diethard Klatte Bernd Kummer submitted 6 January 2012, revised 11 June

More information

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane Conference ADGO 2013 October 16, 2013 Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions Marc Lassonde Université des Antilles et de la Guyane Playa Blanca, Tongoy, Chile SUBDIFFERENTIAL

More information

Radius Theorems for Monotone Mappings

Radius Theorems for Monotone Mappings Radius Theorems for Monotone Mappings A. L. Dontchev, A. Eberhard and R. T. Rockafellar Abstract. For a Hilbert space X and a mapping F : X X (potentially set-valued) that is maximal monotone locally around

More information

arxiv: v1 [math.oc] 13 Mar 2019

arxiv: v1 [math.oc] 13 Mar 2019 SECOND ORDER OPTIMALITY CONDITIONS FOR STRONG LOCAL MINIMIZERS VIA SUBGRADIENT GRAPHICAL DERIVATIVE Nguyen Huy Chieu, Le Van Hien, Tran T. A. Nghia, Ha Anh Tuan arxiv:903.05746v [math.oc] 3 Mar 209 Abstract

More information

Weak sharp minima on Riemannian manifolds 1

Weak sharp minima on Riemannian manifolds 1 1 Chong Li Department of Mathematics Zhejiang University Hangzhou, 310027, P R China cli@zju.edu.cn April. 2010 Outline 1 2 Extensions of some results for optimization problems on Banach spaces 3 4 Some

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

First-order optimality conditions for mathematical programs with second-order cone complementarity constraints

First-order optimality conditions for mathematical programs with second-order cone complementarity constraints First-order optimality conditions for mathematical programs with second-order cone complementarity constraints Jane J. Ye Jinchuan Zhou Abstract In this paper we consider a mathematical program with second-order

More information

Constraint qualifications for nonlinear programming

Constraint qualifications for nonlinear programming Constraint qualifications for nonlinear programming Consider the standard nonlinear program min f (x) s.t. g i (x) 0 i = 1,..., m, h j (x) = 0 1 = 1,..., p, (NLP) with continuously differentiable functions

More information

AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow

AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics and Statistics

More information

Semi-infinite programming, duality, discretization and optimality conditions

Semi-infinite programming, duality, discretization and optimality conditions Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,

More information

DUALITY, OPTIMALITY CONDITIONS AND PERTURBATION ANALYSIS

DUALITY, OPTIMALITY CONDITIONS AND PERTURBATION ANALYSIS 1 DUALITY, OPTIMALITY CONDITIONS AND PERTURBATION ANALYSIS Alexander Shapiro 1 School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205, USA, E-mail: ashapiro@isye.gatech.edu

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

Strong duality in Lasserre s hierarchy for polynomial optimization

Strong duality in Lasserre s hierarchy for polynomial optimization Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization

More information

On the Midpoint Method for Solving Generalized Equations

On the Midpoint Method for Solving Generalized Equations Punjab University Journal of Mathematics (ISSN 1016-56) Vol. 40 (008) pp. 63-70 On the Midpoint Method for Solving Generalized Equations Ioannis K. Argyros Cameron University Department of Mathematics

More information

On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs

On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs Fanwen Meng,2 ; Gongyun Zhao Department of Mathematics National University of Singapore 2 Science

More information

NUMERICAL OPTIMIZATION. J. Ch. Gilbert

NUMERICAL OPTIMIZATION. J. Ch. Gilbert NUMERICAL OPTIMIZATION J. Ch. Gilbert Numerical optimization (past) The discipline deals with the classical smooth (nonconvex) problem min {f(x) : c E (x) = 0, c I (x) 0}. Applications: variable added

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators Radu Ioan Boţ Ernö Robert Csetnek April 23, 2012 Dedicated to Jon Borwein on the occasion of his 60th birthday Abstract. In this note

More information

Optimality, identifiability, and sensitivity

Optimality, identifiability, and sensitivity Noname manuscript No. (will be inserted by the editor) Optimality, identifiability, and sensitivity D. Drusvyatskiy A. S. Lewis Received: date / Accepted: date Abstract Around a solution of an optimization

More information

Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function

Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function arxiv:1805.03847v1 [math.oc] 10 May 2018 Vsevolod I. Ivanov Department of Mathematics, Technical

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 4, Issue 4, Article 67, 2003 ON GENERALIZED MONOTONE MULTIFUNCTIONS WITH APPLICATIONS TO OPTIMALITY CONDITIONS IN

More information

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, 2018 BORIS S. MORDUKHOVICH 1 and NGUYEN MAU NAM 2 Dedicated to Franco Giannessi and Diethard Pallaschke with great respect Abstract. In

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems Math. Program., Ser. A (2013) 142:591 604 DOI 10.1007/s10107-012-0586-z SHORT COMMUNICATION A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

The local equicontinuity of a maximal monotone operator

The local equicontinuity of a maximal monotone operator arxiv:1410.3328v2 [math.fa] 3 Nov 2014 The local equicontinuity of a maximal monotone operator M.D. Voisei Abstract The local equicontinuity of an operator T : X X with proper Fitzpatrick function ϕ T

More information

Characterizations of Strong Regularity for Variational Inequalities over Polyhedral Convex Sets

Characterizations of Strong Regularity for Variational Inequalities over Polyhedral Convex Sets Characterizations of Strong Regularity for Variational Inequalities over Polyhedral Convex Sets A. L. Dontchev Mathematical Reviews, Ann Arbor, MI 48107 and R. T. Rockafellar Dept. of Math., Univ. of Washington,

More information

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION CHRISTIAN GÜNTHER AND CHRISTIANE TAMMER Abstract. In this paper, we consider multi-objective optimization problems involving not necessarily

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Aubin Criterion for Metric Regularity

Aubin Criterion for Metric Regularity Journal of Convex Analysis Volume 13 (2006), No. 2, 281 297 Aubin Criterion for Metric Regularity A. L. Dontchev Mathematical Reviews, Ann Arbor, MI 48107, USA ald@ams.org M. Quincampoix Laboratoire de

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Generic identifiability and second-order sufficiency in tame convex optimization

Generic identifiability and second-order sufficiency in tame convex optimization Generic identifiability and second-order sufficiency in tame convex optimization J. Bolte, A. Daniilidis, and A. S. Lewis Date: 19th January 2009 Abstract We consider linear optimization over a fixed compact

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

c 2002 Society for Industrial and Applied Mathematics

c 2002 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 13, No. 2, pp. 603 618 c 2002 Society for Industrial and Applied Mathematics ON THE CALMNESS OF A CLASS OF MULTIFUNCTIONS RENÉ HENRION, ABDERRAHIM JOURANI, AND JIŘI OUTRATA Dedicated

More information

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint

More information

Minimality Concepts Using a New Parameterized Binary Relation in Vector Optimization 1

Minimality Concepts Using a New Parameterized Binary Relation in Vector Optimization 1 Applied Mathematical Sciences, Vol. 7, 2013, no. 58, 2841-2861 HIKARI Ltd, www.m-hikari.com Minimality Concepts Using a New Parameterized Binary Relation in Vector Optimization 1 Christian Sommer Department

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality

More information

McMaster University. Advanced Optimization Laboratory. Title: A Proximal Method for Identifying Active Manifolds. Authors: Warren L.

McMaster University. Advanced Optimization Laboratory. Title: A Proximal Method for Identifying Active Manifolds. Authors: Warren L. McMaster University Advanced Optimization Laboratory Title: A Proximal Method for Identifying Active Manifolds Authors: Warren L. Hare AdvOl-Report No. 2006/07 April 2006, Hamilton, Ontario, Canada A Proximal

More information

for all u C, where F : X X, X is a Banach space with its dual X and C X

for all u C, where F : X X, X is a Banach space with its dual X and C X ROMAI J., 6, 1(2010), 41 45 PROXIMAL POINT METHODS FOR VARIATIONAL INEQUALITIES INVOLVING REGULAR MAPPINGS Corina L. Chiriac Department of Mathematics, Bioterra University, Bucharest, Romania corinalchiriac@yahoo.com

More information

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,

More information

Research Article Existence and Duality of Generalized ε-vector Equilibrium Problems

Research Article Existence and Duality of Generalized ε-vector Equilibrium Problems Applied Mathematics Volume 2012, Article ID 674512, 13 pages doi:10.1155/2012/674512 Research Article Existence and Duality of Generalized ε-vector Equilibrium Problems Hong-Yong Fu, Bin Dan, and Xiang-Yu

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information