McCormick Relaxations: Convergence Rate and Extension to Multivariate Outer Function
McCormick Relaxations: Convergence Rate and Extension to Multivariate Outer Function
Alexander Mitsos, Systemverfahrenstechnik, Aachener Verfahrenstechnik, RWTH Aachen University, February 2015
Based on work performed with Agustin Bompadre and Angelos Tsoukalas
1/33 McCormick Relaxations A. Mitsos AVT.SVT February, 2015
Outline
- Introduction: AVT Intro, Motivation, Composition Theorem
- Relaxations Rate: Interval Analysis, Relaxations of Functions, Limits for Convergence Order, McCormick Relaxations, Examples, Numerical Examples
- Multivariate Relaxations: Reinterpretation, Theory, Examples
- Conclusions
RWTH Facts
- ,000 BS/MS students; over 7,000 international students from 120 countries; over 12,000 in the department of mechanical engineering
- 512 professors, 4,700 academic staff, 2,600 technical & administrative staff
- Approximately equal base funding (state government) and competitive grants (state, federal, European, industrial)
- Research university with national and international collaborations; student exchanges across the globe
- In addition to departments, 8 profile areas: Computational Science & Engineering; Energy, Chemical & Process Engineering; Information & Communication Technology; Material Science & Engineering; Medical Science & Technology; Molecular Science & Engineering; Mobility & Transport Engineering; Production Engineering
AVT: Chemical Engineering at RWTH
- End of 2016: common building and research lab (60 mio € grant), including laboratory space, modular biorefinery pilot plant, process analytics, ...
- Close collaboration with energy engineering, technical chemistry, biology, Jülich research center, ...
AVT Professors
Process Systems Engineering at RWTH
- 40 research, technical & administrative staff, responsible for 10 courses and 2000 exams/year
(Need for) Deterministic Global Optimization
- Physics-based models very often involve nonconvex functions: nonconvex objective function and feasible set
- Gradient-based solvers terminate (at best) with a local optimum; they may fail to find a feasible point
- Heuristic methods (e.g., gradient-free optimization algorithms) offer global optimality only in a probabilistic limit and without any rigorous termination criterion or guarantee
- The engineer desires the best-possible (or near-optimal) solution, and the knowledge that this has been achieved
- Particular interests: bilevel/semi-infinite programs (the lower-level program must be solved to global optimality), energy systems, heliostat placement
Heliostat Layout for Solar Thermal (Noone, Torrilhon and Mitsos, Solar Energy 2012)
- Optimize heliostat layout for hillside and planar sites
- Biomimetic pattern drastically improves existing patterns: efficiency increase by 0.5% and area decrease by 20% (mio$ of savings per plant, bio$ total); heuristic discovery enabled by local optimization
- Global optimization required due to multiple local optima: swapping of heliostats, nonconvex heliostat interaction
- Goal: maximal performance and a reasonable upper bound
[Figure: heliostat layout, X Position [m] (West-East) vs. Y Position [m] (South-North)]
Global Optimization: Branch-and-Bound
1. Solve relaxation → LBD
2. Solve original locally → UBD
3. Branch to nodes (a) and (b)
5. Repeat steps for each node
6. Fathom by value dominance
7. (Range reduction of variables)
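The loop above can be sketched in a few lines. The following is a minimal, illustrative 1-D interval branch-and-bound; the example function, the hand-written inclusion lower bound, and the tolerance are assumptions for the demo, not any particular solver's implementation.

```python
def pow4_lo(lo, hi):
    """Exact interval lower bound of z**4 on [lo, hi]."""
    return 0.0 if lo <= 0.0 <= hi else min(lo**4, hi**4)

def branch_and_bound(f, lower_bound, a, b, tol=1e-3):
    """Return an upper bound on min f over [a, b], within tol of the
    global minimum, given a valid interval lower-bounding function."""
    ubd = min(f(a), f(b), f(0.5 * (a + b)))    # step 2: incumbent (UBD)
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        lbd = lower_bound(lo, hi)              # step 1: relaxation (LBD)
        if lbd > ubd - tol:                    # step 6: fathom by value dominance
            continue
        mid_pt = 0.5 * (lo + hi)
        ubd = min(ubd, f(mid_pt))              # step 2: update incumbent
        stack += [(lo, mid_pt), (mid_pt, hi)]  # step 3: branch to nodes (a), (b)
    return ubd

# Example: f(z) = z**4 - z**2 on [-2, 2]; global minimum -0.25 at z = ±1/sqrt(2).
f = lambda z: z**4 - z**2
lb = lambda lo, hi: pow4_lo(lo, hi) - max(lo**2, hi**2)  # valid interval lower bound
best = branch_and_bound(f, lb, -2.0, 2.0)
```

Because the crude interval bound `lb` tightens only linearly as boxes shrink, many small boxes survive near the minimizers before fathoming, which illustrates the cluster effect mentioned later in the talk.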
Global Optimization Needs Convex Relaxations
For simple functions often available, e.g.:
- For elementary functions we have envelopes: secant for univariate concave functions; method for all univariate functions (Maranas and Floudas 95 [15])
- Envelope available for x_1 x_2 (McCormick 76)
- For x_1/x_2, closed-form relaxations (Grossmann ...) and envelope as SDP (Tawarmalani & Sahinidis 2001 [30])
For complicated functions:
- Use second-order information to directly compute relaxations (αBB, γBB, etc.) (Androulakis et al. 95 [8], ...)
- Propagate McCormick relaxations for factorable functions. A function is factorable if it is defined by a finite recursive composition of binary sums, binary products, and a given library of univariate intrinsic functions. Applicable also for algorithms (Mitsos et al. [19])
- Auxiliary variable reformulation (Smith and Pantelides 1997 [29]; BARON)
McCormick Composition [Math. Prog. 1976]
Theorem. Let Z ⊂ R^n and X ⊂ R, g = F ∘ f where f: Z → R, F: X → R, and let f(Z) ⊂ X. Suppose that convex/concave relaxations f^cv, f^cc: Z → R of f on Z are known. Let F^cv: X → R be a convex relaxation of F on X and let x_min ∈ X be a point where F^cv attains its minimum on X. Then g^cv: Z → R,
g^cv(z) = F^cv( mid{ f^cv(z), f^cc(z), x_min } ),
where mid(·,·,·) gives the median value of three real numbers, is a convex relaxation of g on Z.
- In general nonsmooth
- F is univariate; not obvious how to generalize to multiple dimensions
- What are the convergence properties?
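As a concrete illustration of the composition rule, the sketch below applies it to an assumed example, g(z) = exp(−z²) on Z = [−1, 2]; the choice of inner/outer functions and the secant underestimator are illustration choices, not part of the theorem.

```python
import math

def mid(a, b, c):
    """Median of three real numbers, as in McCormick's composition rule."""
    return sorted((a, b, c))[1]

# Assumed demo: g(z) = exp(-z**2) on Z = [-1, 2], written as F(f(z)) with
# f(z) = -z**2 (concave) and F(x) = exp(x) (convex) on X = f(Z) = [-4, 0].
ZL, ZU = -1.0, 2.0

def f(z):    return -z * z
def f_cc(z): return -z * z                  # concave f is its own concave relaxation
def f_cv(z):                                # secant underestimator of a concave f
    return f(ZL) + (f(ZU) - f(ZL)) / (ZU - ZL) * (z - ZL)

F_cv  = math.exp                            # exp is convex, so F_cv = F
x_min = -4.0                                # exp attains its minimum on X at X^L

def g_cv(z):
    """Convex relaxation of g = F(f(z)) via the McCormick composition rule."""
    return F_cv(mid(f_cv(z), f_cc(z), x_min))

# Sanity check: g_cv underestimates g on a grid over Z.
for k in range(21):
    z = ZL + (ZU - ZL) * k / 20
    assert g_cv(z) <= math.exp(f(z)) + 1e-12
```

Since exp is increasing here, the mid rule effectively selects f^cv(z), illustrating case 2 of the reinterpretation discussed later in the talk.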
Convergence in Deterministic Global Optimization
- Key for the optimality guarantee: converging lower bound by relaxation
- Interval-based vs. convex relaxations: natural interval extensions, Taylor models; McCormick, αBB, γBB, auxiliary variables, linearization, ...
- Convergence by partitioning the host set into progressively smaller boxes: branching, subdivision, piecewise relaxations
- Fast convergence essential: cluster effect for low convergence order (Du and Kearfott 1994 [9], Neumaier 2004 [23]); partitioning leads to exponential complexity; increasing problem size results in weaker relaxations
- Convergence order established for interval methods and αBB
Goals
- Development of sharp bounds on the convergence order of McCormick relaxations (McCormick 1976 [16]): performed by Bompadre and Mitsos, JOGO 2012 [21]; developed also for McCormick-Taylor models (Sahlodin and Chachuat [25, 24]) in Bompadre et al. [22]
- Extension to multivariate outer function (Tsoukalas and Mitsos 2012 [31]): results in tighter relaxations; is a tool for theorem proofs; increases understanding
Definitions from Interval Analysis (e.g., [20])
Definition (Intervals and Diameter of a Set). An interval Y ⊂ R^n is a set Y = [x_1, y_1] × ... × [x_n, y_n]. The diameter of Y is defined as δ(Y) = max{ y_i − x_i : 1 ≤ i ≤ n }. The set of all intervals of R^n is denoted by IR^n.
Definition (Inclusion Function). Given f: Z ⊂ R^n → R, an inclusion function of f on Z, H_f: {Y ∈ IR^n : Y ⊂ Z} → IR, estimates the range of f:
[ inf_{y∈Y} f(y), sup_{y∈Y} f(y) ] ⊂ H_f(Y), ∀Y ∈ IR^n, Y ⊂ Z.
Definition (Convergence Order). H_f has convergence order β if ∃ C > 0 s.t. ∀Y ∈ IR^n, Y ⊂ Z:
max{ inf_{y∈Y} f(y) − inf H_f(Y), sup H_f(Y) − sup_{y∈Y} f(y) } ≤ C δ(Y)^β
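A natural interval extension is the simplest inclusion function: evaluate the expression in interval arithmetic. A minimal sketch under assumed illustration choices (the toy `Interval` class and the example f(z) = z − z² are mine, not from the slides):

```python
class Interval:
    """Toy interval arithmetic for natural interval extensions (illustrative only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # min/max over all endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# Natural interval extension of f(z) = z - z*z on Y = [0, 1]:
Y = Interval(0.0, 1.0)
H = Y - Y * Y   # gives [-1, 1]: a valid but loose enclosure of the true range [0, 1/4]
```

The overestimation comes from treating the two occurrences of `Y` as independent (the dependency problem), which is exactly why convergence order matters.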
Scheme of Estimators
Definition (Scheme of Estimators). Given f: Z ⊂ R^n → R, a scheme of estimators of f is a set of functions (f^u_Y, f^o_Y), Y ∈ IR^n, Y ⊂ Z, s.t. ∀Y ∈ IR^n, Y ⊂ Z, f^u_Y (f^o_Y) is a convex (concave) relaxation of f on Y, with the associated inclusion function
H_f(Y) = [ inf_{y∈Y} f^u_Y(y), sup_{y∈Y} f^o_Y(y) ].
[Figure: f on Y with estimators f^u, f^o and the enclosure H_f(Y) ⊃ f(Y)]
Definition (Convergence Order of Schemes). A scheme (f^u_Y, f^o_Y), Y ∈ IR^n, Y ⊂ Z, has (Hausdorff) convergence of order β if its associated inclusion function has convergence order β. It has pointwise convergence of order γ if there exists C > 0 s.t. ∀Y ∈ IR^n, Y ⊂ Z:
sup_{y∈Y} max{ f(y) − f^u_Y(y), f^o_Y(y) − f(y) } ≤ C δ(Y)^γ
Hausdorff versus Pointwise Convergence
Theorem (Pointwise Stronger than Hausdorff). A scheme of estimators of f, (f^u_Y, f^o_Y), Y ∈ IR^n, Y ⊂ Z, with pointwise convergence of order γ has Hausdorff convergence of order β ≥ γ.
Proof idea: let z̄_Y ∈ arg min_{y∈Y} f^u_Y(y); then
0 ≤ inf f(Y) − inf f^u(Y) = inf f(Y) − f^u(z̄_Y) ≤ f(z̄_Y) − f^u(z̄_Y) ≤ C δ(Y)^γ.
The inequality can be strict: e.g., Z = [−1, 1], f(z) = z, f^u_Y(z) = y^L_Y, f^o_Y(z) = y^U_Y. This scheme is exact in the Hausdorff metric (β = ∞) but has only linear pointwise convergence order (γ = 1).
[Figure: f(z) = z with the constant estimators f^u_Y, f^o_Y]
Limits on Pointwise Convergence
Theorem (Pointwise Convergence at Most Quadratic). Let f be a nonlinear C² function. Then a scheme of estimators of f cannot have pointwise convergence of order greater than two.
Proof idea: either the convex underestimator f^u_Y or the concave overestimator f^o_Y replicates the curvature of the function f, but not both at the same time.
Theorem (Quadratic Convergence of Envelopes). Let f be a C² function. Then the scheme associated with the convex and concave envelopes has quadratic convergence in the pointwise metric and at least quadratic convergence in the Hausdorff metric.
Proof idea: show that a particular scheme has quadratic pointwise convergence; envelopes are by definition at least as tight and thus converge at least as fast; the Hausdorff result follows trivially.
αBB Relaxations (Floudas et al. 1990s+ [12, 13, 14, 15, 8, 4, 2, 3, 1, 6, 5, 10, 11])
Theorem (Basic αBB Scheme has Quadratic Pointwise Convergence). Let f: Z → R be C² with Hessian matrix H. Select α > 0 s.t. ∀x ∈ Z, H ± 2αI is positive/negative semi-definite. For Y = [z^L_{Y,1}, z^U_{Y,1}] × ... × [z^L_{Y,n}, z^U_{Y,n}] ⊂ Z, let f^u_Y, f^o_Y: Y → R,
f^{u/o}_Y(z) = f(z) ± α Σ_{i=1}^n (z_i − z^L_{Y,i})(z_i − z^U_{Y,i}).
The corresponding scheme has pointwise convergence of order two (γ = 2).
- Proof elementary; Hausdorff quadratic convergence order follows trivially (β = 2)
- Flip-side: relaxations become quadratically weaker for a growing host set, see also Maranas and Floudas 1994 [14]
- Variable α results in tighter relaxations
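A one-dimensional sketch of the basic αBB underestimator above; the example f = sin on [0, 4] and α = 0.5 are assumptions for illustration (any α ≥ max|f''|/2 = 0.5 makes the underestimator convex here).

```python
import math

alpha = 0.5           # f'' = -sin, so |f''| <= 1 and alpha = 0.5 suffices
zL, zU = 0.0, 4.0     # host interval Y

def f(z):
    return math.sin(z)

def f_u(z):
    """Convex alpha-BB underestimator: f(z) + alpha * (z - zL) * (z - zU)."""
    return f(z) + alpha * (z - zL) * (z - zU)

# The quadratic perturbation vanishes at the endpoints and is negative inside Y,
# so f_u coincides with f at zL, zU and underestimates it in between.
for k in range(41):
    z = zL + (zU - zL) * k / 40
    assert f_u(z) <= f(z) + 1e-12
```

Shrinking [zL, zU] shrinks the maximal gap α(zU − zL)²/4 quadratically, which is the pointwise order-two convergence of the theorem; conversely, growing the host set weakens the relaxation quadratically, as the flip-side bullet notes.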
McCormick Relaxations of Functions
- McCormick 1976 [16, 17] proposed a method to generate convex underestimators and concave overestimators of factorable functions
- The method computes relaxations of the sum, product, and composition of two functions from the relaxations of the two functions (factors); it is applied recursively to a finite number of factors
- McCormick relaxations can be extended to dynamic systems (Barton and coworkers [26, 28, 27], Sahlodin and Chachuat [25]); to optimization with algorithms embedded (Mitsos et al. SIOPT 2009 [19]); and to discontinuous functions (Wechsung and Barton)
- We will restate McCormick relaxations as schemes of estimators, without assuming envelopes of the factors
Convergence of McCormick Relaxations: Sharp Bounds
Addition, g(z) = f_1(z) + f_2(z):
- schemes for f_i have β_i → no order propagation; β = 1 possible for arbitrarily large β_i
- schemes for f_i have γ_i > 0 → γ = min{γ_1, γ_2}
Multiplication, g(z) = f_1(z) f_2(z):
- schemes for f_i have β_i → no order propagation; β = 1 possible for arbitrarily large β_i
- schemes for f_i have γ_i ≥ 1 and inclusions for f_i have β_{i,T} ≥ 1 → γ = min{γ_1, γ_2, 2}
Composition, g(z) = F(f(z)):
- inclusion of f has β_{f,T} ≥ 1 and scheme for F has β_F → β = min{β_{f,T}, β_F}
- scheme for f has γ_f, inclusion for f has β_{f,T} ≥ 1, and scheme for F has γ_F → γ = min{γ_f, γ_F}
Factors (i = 1, 2, f, F) are characterized by convergence order (β_i in the Hausdorff metric and/or γ_i pointwise); subscript T denotes the inclusion function used to overestimate the range; the convergence order of the resulting scheme is characterized by β and/or γ.
Product Example: Hausdorff Convergence
Let Z = [0.3, 0.7] and Y ⊂ Z with Y = [0.5 − ε, 0.5 + ε], and f: Z → R, f(z) = (z − z²)(log(z) + exp(−z)).
The factor (z − z²) is based on [7]: linear convergence order for natural interval extensions.
[Plot: approximation error of relaxations vs. interval half-width ε, for natural interval extensions, McCormick relaxations, αBB with fixed α, and αBB with variable α]
Univariate Composition: Hausdorff Convergence
Let Z = [−1, 1] and Y ⊂ Z with Y = [−ε, ε], and consider f: Z → R, f(z) = exp(1 − z²). Propagate relaxations assuming the factored form exp((1 − z)(1 + z)); this mimics the effect of propagating relaxations.
[Plot: approximation error of relaxations vs. interval half-width ε, for natural interval extensions, McCormick relaxations, αBB with fixed α, and αBB with variable α]
Phase Split of Toluene-Water-Aniline
Convergence in the Hausdorff metric of the lower-bounding problem in Mitsos and Barton 2007 [18]:
f(x) = x_1 ln(x_1) + x_2 ln(x_2) + (1 − x_1 − x_2) ln(1 − x_1 − x_2)
 + x_1 [ τ_21 e^(−α_12 τ_21) x_2 + τ_31 e^(−α_13 τ_31) (1 − x_1 − x_2) ] / [ x_1 + e^(−α_12 τ_21) x_2 + e^(−α_13 τ_31) (1 − x_1 − x_2) ]
 + ... + λ_1(x_1^0 − x_1) + λ_2(x_2^0 − x_2).
[Plot: approximation error of relaxations vs. interval half-width ε, for natural interval extensions, McCormick relaxations, αBB with fixed α, and αBB with variable α]
McCormick Composition Revisited
g^cv(z) = F^cv( mid{ f^cv(z), f^cc(z), x_min } ) takes one of three values:
1. F^cv(x_min)
2. F^cv(f^cv(z))
3. F^cv(f^cc(z))
[Figure: F^cv over x, with x_min and the interval f^cv(z) ≤ x ≤ f^cc(z)]
F^cv( mid{ f^cv(z), f^cc(z), x_min } ) = min_{x∈X} { F^cv(x) : f^cv(z) ≤ x ≤ f^cc(z) }
The min form can be naturally extended to a multivariate outer function.
Convex Relaxation of Multivariate Composition (Multivariate McCormick)
Let Z ⊂ R^n, g = F ∘ f where f: Z → R^m, F: X → R, and let f(Z) ⊂ X ⊂ R^m. Then
g^cv(z) = min_{x∈X} { F^cv(x) : f_i^cv(z) ≤ x_i ≤ f_i^cc(z) ∀i }
is a convex relaxation of g on Z.
- g^cv is a relaxation since x_i = f_i(z) is feasible: g^cv(z) ≤ F^cv(f(z)) ≤ g(z)
- g^cv is convex as the composition of a convex and increasing function with convex functions: g^cv(z) = h(f_1^cv(z), ..., f_m^cv(z), f_1^cc(z), ..., f_m^cc(z)), where the perturbation function h(y^cv, y^cc) = min_{x∈X⊂R^m} { F^cv(x) : x ≥ y^cv, x ≤ y^cc } is convex and increasing.
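The defining minimization can be evaluated numerically. The sketch below does so by brute-force grid search purely for illustration (in practice one uses closed forms or a convex solver); the usage example relaxing g(z) = z · z² = z³ as a product, and all function names, are assumptions for the demo.

```python
def g_cv(z, F_cv, relaxations, n_grid=200):
    """Evaluate min_x { F_cv(x) : f_i^cv(z) <= x_i <= f_i^cc(z) } for m = 2
    inner functions by grid search (illustrative only, not a real solver).
    relaxations: list of (f_cv, f_cc) pairs for the inner functions f_i."""
    box = [(f_cv_i(z), f_cc_i(z)) for f_cv_i, f_cc_i in relaxations]
    (l1, h1), (l2, h2) = box
    best = float("inf")
    for i in range(n_grid + 1):
        x1 = l1 + (h1 - l1) * i / n_grid
        for j in range(n_grid + 1):
            x2 = l2 + (h2 - l2) * j / n_grid
            best = min(best, F_cv(x1, x2))
    return best

# Assumed usage: relax g(z) = z * z**2 = z**3 on Z = [-1, 1] as a product.
# Inner relaxations on Z: f1(z) = z is exact; f2(z) = z**2 is convex, secant = 1.
relax = [(lambda z: z, lambda z: z), (lambda z: z * z, lambda z: 1.0)]
# Convex (McCormick) envelope of the product on X = [-1, 1] x [0, 1]:
F_cv = lambda x1, x2: max(1.0 * x1 + 1.0 * x2 - 1.0, -1.0 * x2)
val = g_cv(0.0, F_cv, relax)   # a valid underestimate of g(0) = 0
```

At z = 0 the box degenerates to x1 = 0 with x2 ∈ [0, 1], and the minimum of max(x2 − 1, −x2) is attained at x2 = 1/2, giving −1/2 as the relaxation value.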
Simpler Expressions under Monotonicity of F^cv
Corollary.
1. If F^cv is monotonically increasing, then g^cv(z) = F^cv(f_1^cv(z), ..., f_m^cv(z)) is a convex relaxation of g.
2. If F^cv is monotonically decreasing, then g^cv(z) = F^cv(f_1^cc(z), ..., f_m^cc(z)) is a convex relaxation of g.
Subdifferential of g^cv(z)
Theorem. Subgradients of g^cv are given by multiplying the Lagrange multipliers of the problem defining g^cv with the corresponding subgradients of the relaxations of the inner functions f_i^cv, f_i^cc.
Theorem. The subdifferential of g^cv at z is given by
∂g^cv(z) = { Σ_{i=1}^m (ρ_i^cv s_i^cv − ρ_i^cc s_i^cc) : (ρ_1^cv, ..., ρ_m^cv, ρ_1^cc, ..., ρ_m^cc) ∈ Λ(z), s_i^cv ∈ ∂f_i^cv(z), s_i^cc ∈ ∂f_i^cc(z) ∀i },
where
Λ(z) = arg max_{(λ^cv, λ^cc) ≥ 0} min_{x∈X} { F^cv(x) + Σ_{i=1}^m [ λ_i^cv(−x_i + f_i^cv(z)) + λ_i^cc(x_i − f_i^cc(z)) ] }.
Subdifferential of g^cv(z): Proof Outline
Using the representation g^cv(z) = h(f_1^cv(z), ..., f_m^cv(z), f_1^cc(z), ..., f_m^cc(z)):
- Strong duality: subgradients of the perturbation function h correspond to optimal solutions of the dual of the problem defining h at the point (f_1^cv(z), ..., f_m^cv(z), f_1^cc(z), ..., f_m^cc(z)).
- The theorem then follows directly from the subdifferential rule for post-composition with an increasing convex function of several variables (Hiriart-Urruty and Lemaréchal).
Bilinear Product: f_1(z) f_2(z) = mult(f_1(z), f_2(z))
The convex envelope of mult(·,·) on [x_1^L, x_1^U] × [x_2^L, x_2^U] is
mult^cv = max{ x_2^U x_1 + x_1^U x_2 − x_1^U x_2^U, x_2^L x_1 + x_1^L x_2 − x_1^L x_2^L }.
The convex relaxation of g that McCormick proposed in 1976 is a closed-form solution with min and max, given as a rule independent of the composition theorem.
New relaxation as a special case of multivariate composition:
g^cv(z) = min_{x_i ∈ [f_i^L, f_i^U]} max{ f_2^U x_1 + f_1^U x_2 − f_1^U f_2^U, f_2^L x_1 + f_1^L x_2 − f_1^L f_2^L }
s.t. f_1^cv(z) ≤ x_1 ≤ f_1^cc(z), f_2^cv(z) ≤ x_2 ≤ f_2^cc(z).
This can also be written in closed form (again with min and max) and gives tighter relaxations than the McCormick rule.
Product Example: x³ = x · x² = mult(x, x²)
f is nonconvex; fcvMC is the original McCormick relaxation; fcvnew is the new relaxation: not the envelope, but substantially tighter.
[Plot: f(x), fcvMC(x), fcvnew(x)]
Relaxation of min(f_1(z), f_2(z))
The only previously existing relaxation is via min(f_1(z), f_2(z)) = (1/2)( f_1(z) + f_2(z) − |f_1(z) − f_2(z)| ).
The multivariate theorem, together with a (newly developed) relaxation of min(x, y), results in tighter estimators.
[Plot: min(x², x) over z, with the proposed underestimator and the abs-based underestimator]
Fractional: div(f_1(z), f_2(z)) = f_1(z)/f_2(z)
In the McCormick framework, relaxations are only available via f_1(z) · (1/f_2(z)). The multivariate theorem, together with tight relaxations for div(x, y) = x/y, results in tighter relaxations than McCormick.
[Plot: x/x = div(x, x) over z, with multivariate composition using the envelope div^cv,env, multivariate composition using the linear estimator div^cv,lin, and the univariate McCormick relaxation]
Conclusions
- Developed sharp bounds for the convergence rates of McCormick relaxations: propagation of quadratic pointwise convergence order iff either of the two conditions holds: 1. quadratic pointwise convergence order of the factor relaxations; 2. quadratically convergent interval inclusions are used. Only in the former case is there a guarantee of higher convergence order than interval extensions.
- Numerical comparisons of McCormick and αBB relaxations: McCormick relaxations seem tighter than αBB relaxations for big domains, but looser for small domains
- Extended McCormick's framework to multiple dimensions and provided subgradients
- Multivariate McCormick results in tighter relaxations for several important functions, including the product f_1(z) f_2(z), fractional terms f_1(z)/f_2(z), min(f_1(z), f_2(z)), max(f_1(z), f_2(z)), ...
- Multivariate McCormick can be interpreted as a decomposition method for the auxiliary variable method
References I

[1] C. S. Adjiman, I. P. Androulakis, and C. A. Floudas. A global optimization method, αBB, for general twice-differentiable constrained NLPs - II. Implementation and computational results. Computers & Chemical Engineering, 22(9).

[2] C. S. Adjiman, I. P. Androulakis, C. D. Maranas, and C. A. Floudas. A global optimization method, αBB, for process design. Computers & Chemical Engineering, 20(Suppl. A):S419-S424.

[3] C. S. Adjiman, S. Dallwig, C. A. Floudas, and A. Neumaier. A global optimization method, αBB, for general twice-differentiable constrained NLPs - I. Theoretical advances. Computers & Chemical Engineering, 22(9).

[4] C. S. Adjiman and C. A. Floudas. Rigorous convex underestimators for general twice-differentiable problems. Journal of Global Optimization, 9(1):23-40.

[5] I. G. Akrotirianakis and C. A. Floudas. Computational experience with a new class of convex underestimators: Box-constrained NLP problems. Journal of Global Optimization, 29(3).
References II

[6] I. G. Akrotirianakis and C. A. Floudas. A new class of improved convex underestimators for twice continuously differentiable constrained NLPs. Journal of Global Optimization, 30(4).

[7] G. Alefeld and G. Mayer. Interval analysis: Theory and applications. Journal of Computational and Applied Mathematics, 121(1-2).

[8] I. P. Androulakis, C. D. Maranas, and C. A. Floudas. αBB: A global optimization method for general constrained nonconvex problems. Journal of Global Optimization, 7(4).

[9] K. S. Du and R. B. Kearfott. The cluster problem in multivariate global optimization. Journal of Global Optimization, 5(3).

[10] C. E. Gounaris and C. A. Floudas. Tight convex underestimators for C-2-continuous problems: I. Univariate functions. Journal of Global Optimization, 42(1):51-67.

[11] C. E. Gounaris and C. A. Floudas. Tight convex underestimators for C-2-continuous problems: II. Multivariate functions. Journal of Global Optimization, 42(1):69-89.
References III

[12] C. D. Maranas and C. A. Floudas. A global optimization approach for Lennard-Jones microclusters. Journal of Chemical Physics, 97(10).

[13] C. D. Maranas and C. A. Floudas. Global optimization for molecular conformation problems. Annals of Operations Research, 42(3):85-117.

[14] C. D. Maranas and C. A. Floudas. Global minimum potential energy conformations of small molecules. Journal of Global Optimization, 4.

[15] C. D. Maranas and C. A. Floudas. Finding all solutions of nonlinearly constrained systems of equations. Journal of Global Optimization, 7(2).

[16] G. P. McCormick. Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Mathematical Programming, 10(1).

[17] G. P. McCormick. Nonlinear Programming: Theory, Algorithms and Applications. John Wiley and Sons, New York.
References IV

[18] Alexander Mitsos and Paul I. Barton. A dual extremum principle in thermodynamics. AIChE Journal, 53(8).

[19] Alexander Mitsos, Benoît Chachuat, and Paul I. Barton. McCormick-based relaxations of algorithms. SIAM Journal on Optimization, 20(2).

[20] R. Moore. Methods and Applications of Interval Analysis. SIAM, Philadelphia, PA.

[21] Agustín Bompadre and Alexander Mitsos. Convergence rate of McCormick relaxations. Journal of Global Optimization, 52(1):1-28.

[22] Agustín Bompadre, Alexander Mitsos, and Benoît Chachuat. Convergence analysis of Taylor models and McCormick-Taylor models. Journal of Global Optimization, 57(1):75-114.

[23] A. Neumaier and O. Shcherbina. Safe bounds in linear and mixed-integer linear programming. Mathematical Programming, 99(2).
References V

[24] A. M. Sahlodin and B. Chachuat. Convex/concave relaxations of parametric ODEs using Taylor models. Computers & Chemical Engineering, 35(5).

[25] A. M. Sahlodin and B. Chachuat. Discretize-then-relax approach for convex/concave relaxations of the solutions of parametric ODEs. Applied Numerical Mathematics, 61(7).

[26] A. B. Singer and P. I. Barton. Global solution of optimization problems with parameter-embedded linear dynamic systems. Journal of Optimization Theory and Applications, 121(3).

[27] A. B. Singer and P. I. Barton. Bounding the solutions of parameter dependent nonlinear ordinary differential equations. SIAM Journal on Scientific Computing, 27(6).

[28] A. B. Singer and P. I. Barton. Global optimization with nonlinear ordinary differential equations. Journal of Global Optimization, 34(2).

[29] E. M. B. Smith and C. C. Pantelides. Global optimisation of nonconvex MINLPs. Computers & Chemical Engineering, 21(Suppl. S):S791-S796.
References VI

[30] M. Tawarmalani and N. V. Sahinidis. Semidefinite relaxations of fractional programs via novel convexification techniques. Journal of Global Optimization, 20(2).

[31] Angelos Tsoukalas and Alexander Mitsos. Multivariate McCormick relaxations. Journal of Global Optimization, 59.
Relaxations of the Sum of two Functions

Theorem (Relaxation of Sum)
For $i \in \{1, 2\}$ let $f_i : Z \subset \mathbb{R}^n \to \mathbb{R}$ and let $(f^u_{i,Y}, f^o_{i,Y})_{Y \in \mathbb{IR}^n, Y \subset Z}$ be a scheme of estimators. Then, $(f^u_{1,Y} + f^u_{2,Y},\ f^o_{1,Y} + f^o_{2,Y})_{Y \in \mathbb{IR}^n, Y \subset Z}$ is a scheme of estimators of $f = f_1 + f_2$.

- Convergence order in the Hausdorff metric is not propagated.
  Example: $Z = [-1, 1]$, $f_1(z) = z$, $f_2(z) = -z$, so $f(z) = f_1(z) + f_2(z) \equiv 0$. Let $f^u_{1,Y}(z) = y^L$, $f^o_{1,Y}(z) = y^U$, $f^u_{2,Y}(z) = -y^U$ and $f^o_{2,Y}(z) = -y^L$. The estimator schemes of the $f_i$ have arbitrarily high Hausdorff convergence order ($\beta_i$), whereas the estimator scheme of $f$ has only linear order ($\beta = 1$).
- Pointwise convergence order is propagated.
  Let the estimator schemes of the $f_i$ each have pointwise convergence of order $\gamma_i > 0$. Then, the estimator scheme of $f$ has pointwise convergence of order $\min\{\gamma_1, \gamma_2\}$. Proof idea: triangle inequality.
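The triangle-inequality argument for pointwise propagation can be made concrete with an assumed gap model: take factor schemes whose pointwise gaps shrink like $w^{\gamma_i}$ in the interval width $w$ (the orders $\gamma_1 = 2$, $\gamma_2 = 1$ below are illustrative, not from the slides), and observe that the sum scheme inherits the slowest order.

```python
def sum_scheme_gap(w: float, gamma1: int = 2, gamma2: int = 1) -> float:
    """Pointwise gap of the sum scheme for interval width w, under the
    assumed model that each factor scheme has gap w**gamma_i; by the
    triangle inequality the gaps simply add."""
    return w**gamma1 + w**gamma2

# For w <= 1 the slow factor dominates: order min(gamma1, gamma2) = 1.
for w in [1.0, 0.5, 0.25, 0.125]:
    assert sum_scheme_gap(w) <= 2 * w**min(2, 1)
```

No matter how fast the better factor converges, halving the width only halves the dominant term, which is exactly the $\min\{\gamma_1, \gamma_2\}$ statement above.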
Relaxations of the Product of two Functions

Let $f_1, f_2 : Z \subset \mathbb{R}^n \to \mathbb{R}$ be two functions. Let $(f^u_{i,Y}, f^o_{i,Y})_{Y \in \mathbb{IR}^n, Y \subset Z}$ and $(f^L_{i,Y}, f^U_{i,Y})_{Y \in \mathbb{IR}^n, Y \subset Z}$ be schemes of estimators and constant estimators of $f_i$, $i = 1, 2$. Consider the intermediate functions

$f_{a_1,Y} = \min\{f^L_{2,Y} f^u_{1,Y},\ f^L_{2,Y} f^o_{1,Y}\}$,  $f_{a_2,Y} = \min\{f^L_{1,Y} f^u_{2,Y},\ f^L_{1,Y} f^o_{2,Y}\}$
$f_{b_1,Y} = \min\{f^U_{2,Y} f^u_{1,Y},\ f^U_{2,Y} f^o_{1,Y}\}$,  $f_{b_2,Y} = \min\{f^U_{1,Y} f^u_{2,Y},\ f^U_{1,Y} f^o_{2,Y}\}$
$f_{c_1,Y} = \max\{f^L_{2,Y} f^u_{1,Y},\ f^L_{2,Y} f^o_{1,Y}\}$,  $f_{c_2,Y} = \max\{f^U_{1,Y} f^u_{2,Y},\ f^U_{1,Y} f^o_{2,Y}\}$
$f_{d_1,Y} = \max\{f^U_{2,Y} f^u_{1,Y},\ f^U_{2,Y} f^o_{1,Y}\}$,  $f_{d_2,Y} = \max\{f^L_{1,Y} f^u_{2,Y},\ f^L_{1,Y} f^o_{2,Y}\}$.

For $Y \in \mathbb{IR}^n$, $Y \subset Z$, let $f^u_Y, f^o_Y : Y \to \mathbb{R}$ be such that

$f^u_Y = \max\{f_{a_1,Y} + f_{a_2,Y} - f^L_{1,Y} f^L_{2,Y},\ f_{b_1,Y} + f_{b_2,Y} - f^U_{1,Y} f^U_{2,Y}\}$,
$f^o_Y = \min\{f_{c_1,Y} + f_{c_2,Y} - f^U_{1,Y} f^L_{2,Y},\ f_{d_1,Y} + f_{d_2,Y} - f^L_{1,Y} f^U_{2,Y}\}$.

Then, $(f^u_Y, f^o_Y)_{Y \subset Z}$ is a scheme of estimators of $f = f_1 f_2$ on $Z$.

- Convergence order in the Hausdorff metric is not propagated, e.g., $(1 - z)(1 + z)$ with constant schemes for $(1 - z)$ and $(1 + z)$.
- Pointwise convergence order is propagated up to order 2: $\gamma = \min\{\gamma_1, \gamma_2, 2\}$.
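In the special case where the estimators of the factors are exact (e.g., $f_1(z) = x$, $f_2(z) = y$ with identity relaxations and interval bounds as constant estimators), the scheme above reduces to McCormick's classical bilinear envelopes. A minimal sketch; the box bounds are arbitrary illustration values:

```python
def mccormick_product(x, y, xL, xU, yL, yU):
    """Convex under- and concave overestimator of x*y on [xL,xU] x [yL,yU]
    (classical McCormick bilinear envelopes)."""
    cv = max(xL * y + x * yL - xL * yL, xU * y + x * yU - xU * yU)
    cc = min(xU * y + x * yL - xU * yL, xL * y + x * yU - xL * yU)
    return cv, cc

# Sanity check on a grid: cv <= x*y <= cc everywhere on the box.
xL, xU, yL, yU = -1.0, 2.0, 0.0, 3.0
for i in range(11):
    for j in range(11):
        x = xL + (xU - xL) * i / 10
        y = yL + (yU - yL) * j / 10
        cv, cc = mccormick_product(x, y, xL, xU, yL, yU)
        assert cv <= x * y + 1e-12 and x * y <= cc + 1e-12
```

The general theorem replaces the exact factors by arbitrary estimator schemes, which is where the intermediate $\min/\max$ functions come from.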
Relaxations of the Composition of two Functions

Theorem (McCormick's Composition Theorem)
Let $f : Z \subset \mathbb{R}^n \to \mathbb{R}$ be continuous, let $F : X \to \mathbb{R}$, and let $g = F \circ f$. Let $(f^u_Y, f^o_Y)_{Y \in \mathbb{IR}^n, Y \subset Z}$ be a scheme of continuous estimators of $f$ in $Z$ with associated inclusion function $H^f$. Let $T$ be an inclusion function of $f$ that also estimates $H^f$. Let $(F^u_Q, F^o_Q)_{Q \in \mathbb{IR}, Q \subset X}$ be a scheme of continuous estimators of $F$ in $X$. For each $Q \subset X$, let $x^{\min}_Q$ ($x^{\max}_Q$) be a point where $F^u_Q$ ($F^o_Q$) attains its minimum (maximum) in $Q$. For $Y \in \mathbb{IR}^n$, $Y \subset Z$, let $g^u_Y, g^o_Y : Y \to \mathbb{R}$:

$g^u_Y(z) = F^u_{T(Y)}\left(\mathrm{mid}\left\{f^u_Y(z), f^o_Y(z), x^{\min}_{T(Y)}\right\}\right)$,
$g^o_Y(z) = F^o_{T(Y)}\left(\mathrm{mid}\left\{f^u_Y(z), f^o_Y(z), x^{\max}_{T(Y)}\right\}\right)$.

Then, $(g^u_Y, g^o_Y)_{Y \in \mathbb{IR}^n, Y \subset Z}$ is a scheme of estimators of $g = F \circ f$ in $Z$.

- High convergence order in the Hausdorff metric for the estimators of $f$ is irrelevant: $\beta = \min\{\beta_{f,T}, \beta_F\}$.
- Pointwise convergence order is propagated: $\gamma = \min\{\gamma_f, \gamma_F\}$.
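The composition rule can be sketched for an assumed outer function $F(x) = x^2$ (not taken from the slides): on an inclusion interval $[a, b]$ of $f$, the convex relaxation of $F$ is $F$ itself, minimized at $\mathrm{mid}(a, b, 0)$, and the concave relaxation is the secant, maximized at an endpoint.

```python
def mid(a: float, b: float, c: float) -> float:
    """Middle value of three numbers (McCormick's mid function)."""
    return sorted([a, b, c])[1]

def compose_square(f_cv: float, f_cc: float, a: float, b: float):
    """Estimators of F(f(z)) = f(z)**2 from estimator values f_cv <= f_cc
    of f at a point z and an inclusion interval [a, b] of f."""
    x_min = mid(a, b, 0.0)                  # minimizer of x**2 on [a, b]
    g_cv = mid(f_cv, f_cc, x_min) ** 2
    x_max = a if a**2 >= b**2 else b        # maximizer of the secant
    # secant of x**2 through (a, a**2) and (b, b**2) has slope a + b
    g_cc = a**2 + (a + b) * (mid(f_cv, f_cc, x_max) - a)
    return g_cv, g_cc

# Example: f(z) = z on [-1, 2] with exact estimators f_cv = f_cc = z.
# At z = 0 the true value is 0; the secant overestimator gives 2.
assert compose_square(0.0, 0.0, -1.0, 2.0) == (0.0, 2.0)
```

Replacing $F(x) = x^2$ by any factorable outer function with known relaxations and min/max points gives the general rule stated in the theorem.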
Absolute Value ($f$ based on [7])

Let $Z = [0, 1]$ and $g : Z \to \mathbb{R}$, $g(z) = |z - z^2 - 0.25| = F(f(z))$ with $f(z) = z - z^2 - 0.25$, $F(x) = |x|$.

- Let $Y \subset Z$ with $Y = [0.5 - \varepsilon_1,\ 0.5 + \varepsilon_1]$.
- Range of $f$: $\bar f(Y) = [-\varepsilon_1^2, 0]$; image of $g$: $\bar g(Y) = [0, \varepsilon_1^2]$.
- Natural interval extension: $T(Y) = [-2\varepsilon_1 - \varepsilon_1^2,\ 2\varepsilon_1 - \varepsilon_1^2]$ ($\beta = 1$); $\beta = 1$ for $g$.
- Natural interval extension of the centered form $f_{cen}(z) = -(z - 0.5)^2$ gives $T(Y) = [-\varepsilon_1^2, \varepsilon_1^2]$ ($\beta = 2$); exact for $g$.
- Relaxations of $f$: $f$ is concave, so $f^{cv}(z) = -\varepsilon_1^2$ and $f^{cc}(z) = z - z^2 - 0.25$; exact in the Hausdorff metric, quadratic pointwise convergence.
- Relaxations of $F = |\cdot|$: $F$ is convex, so $F^{cv} = F$ and $F^{cc}$ is the secant; exact in the Hausdorff metric, linear pointwise convergence.
- Convergence order of the McCormick estimators of $g$ in the Hausdorff metric: linear for the natural interval extension of $f$, exact for the centered form of $f$.
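The interval bounds in this example can be reproduced with a tiny interval-arithmetic sketch (the helper functions below are an assumption for illustration, not library code): the natural extension of $f(z) = z - z \cdot z - 0.25$ has width $O(\varepsilon_1)$, while the centered form has width $O(\varepsilon_1^2)$.

```python
# Minimal interval arithmetic on tuples (lo, hi).
def i_add(x, y): return (x[0] + y[0], x[1] + y[1])
def i_sub(x, y): return (x[0] - y[1], x[1] - y[0])
def i_mul(x, y):
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))
def i_neg(x):    return (-x[1], -x[0])

def natural(Y):
    # f(z) = z - z*z - 0.25, evaluated factor by factor
    return i_sub(i_sub(Y, i_mul(Y, Y)), (0.25, 0.25))

def centered(Y):
    # f_cen(z) = -(z - 0.5)**2, square computed as interval product
    d = i_sub(Y, (0.5, 0.5))
    return i_neg(i_mul(d, d))

eps = 0.1
Y = (0.5 - eps, 0.5 + eps)
lo, hi = natural(Y)      # approximately [-2*eps - eps**2, 2*eps - eps**2]
clo, chi = centered(Y)   # approximately [-eps**2, eps**2]
assert hi - lo > 10 * (chi - clo)   # centered form is much tighter
```

The overestimation of the natural extension comes from the dependency problem (the two occurrences of $z$ are treated as independent); the centered form removes it here.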
Quadratic ($f$ based on [7])

Let $Z = [0, 1]$ and $g : Z \to \mathbb{R}$, $g(z) = (z - z^2 - 0.25)^2 = F(f(z))$ with $f(z) = z - z^2 - 0.25$, $F(x) = x^2$.

- Let $Y \subset Z$ with $Y = [0.5 - \varepsilon_1,\ 0.5 + \varepsilon_1]$.
- Range of $f$: $\bar f(Y) = [-\varepsilon_1^2, 0]$; image of $g$: $\bar g(Y) = [0, \varepsilon_1^4]$.
- Natural interval extension: $T(Y) = [-2\varepsilon_1 - \varepsilon_1^2,\ 2\varepsilon_1 - \varepsilon_1^2]$ ($\beta = 1$); $\beta = 1$ for $g$.
- Natural interval extension of the centered form $f_{cen}(z) = -(z - 0.5)^2$ gives $T(Y) = [-\varepsilon_1^2, \varepsilon_1^2]$ ($\beta = 2$); exact for $g$.
- Relaxations of $f$: $f$ is concave, so $f^{cv}(z) = -\varepsilon_1^2$ and $f^{cc}(z) = z - z^2 - 0.25$; exact in the Hausdorff metric, quadratic pointwise convergence.
- Relaxations of $F = (\cdot)^2$: $F$ is convex, so $F^{cv} = F$ and $F^{cc}$ is the secant; exact in the Hausdorff metric, quadratic pointwise convergence.
- Convergence order of the McCormick estimators of $g$ in the Hausdorff metric: superlinear for the natural interval extension of $f$, exact for the centered form of $f$.
Auxiliary Variable Reformulation

Introduce a new variable for every factor in McCormick relaxations and construct relaxations in the higher-dimensional space.

Example: $g(z) = f_1(f_2(z), f_3(z)) + f_4(z)$. Isolate the terms by introducing auxiliary variables:

$\min_{z \in Z,\ w_1 \in Z_1,\ w_2 \in Z_2,\ w_3 \in Z_3,\ w_4 \in Z_4} \quad w_1 + w_4$
s.t. $w_1 = f_1(w_2, w_3)$, $w_2 = f_2(z)$, $w_3 = f_3(z)$, $w_4 = f_4(z)$.

Relax the terms one by one to obtain a convex relaxation.
McCormick as Decomposition

Relaxed auxiliary variable reformulation for the example:

$\min_{z \in Z,\ w_i \in Z_i} \quad w_1 + w_4$
s.t. $f_1^{cv}(w_2, w_3) \le w_1 \le f_1^{cc}(w_2, w_3)$
     $f_2^{cv}(z) \le w_2 \le f_2^{cc}(z)$
     $f_3^{cv}(z) \le w_3 \le f_3^{cc}(z)$
     $f_4^{cv}(z) \le w_4 \le f_4^{cc}(z)$

Proposed multivariate McCormick: minimize over $z$ after eliminating the auxiliary variables by solving, for each $z$, the subproblems
$\min_{w_2, w_3} f_1^{cv}(w_2, w_3)$ (resp. $\max_{w_2, w_3} f_1^{cc}(w_2, w_3)$) s.t. $f_2^{cv}(z) \le w_2 \le f_2^{cc}(z)$, $f_3^{cv}(z) \le w_3 \le f_3^{cc}(z)$.

The proposed McCormick-type relaxation can be interpreted as a decomposition method for solving the relaxed auxiliary variable reformulation. Closed-form solutions are typically possible for the sub-problems; the overall problem becomes nonsmooth.
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationDisconnecting Networks via Node Deletions
1 / 27 Disconnecting Networks via Node Deletions Exact Interdiction Models and Algorithms Siqian Shen 1 J. Cole Smith 2 R. Goli 2 1 IOE, University of Michigan 2 ISE, University of Florida 2012 INFORMS
More informationConvex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014
Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,
More informationAn exact reformulation algorithm for large nonconvex NLPs involving bilinear terms
An exact reformulation algorithm for large nonconvex NLPs involving bilinear terms Leo Liberti CNRS LIX, École Polytechnique, F-91128 Palaiseau, France. E-mail: liberti@lix.polytechnique.fr. Constantinos
More informationIntersection cuts for factorable MINLP
Zuse Institute Berlin Takustr. 7 14195 Berlin Germany FELIPE SERRANO 1 Intersection cuts for factorable MINLP 1 0000-0002-7892-3951 This work has been supported by the Research Campus MODAL Mathematical
More informationSECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS
Journal of Applied Analysis Vol. 9, No. 2 (2003), pp. 261 273 SECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS I. GINCHEV and V. I. IVANOV Received June 16, 2002 and, in revised form,
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationIdentifying Active Constraints via Partial Smoothness and Prox-Regularity
Journal of Convex Analysis Volume 11 (2004), No. 2, 251 266 Identifying Active Constraints via Partial Smoothness and Prox-Regularity W. L. Hare Department of Mathematics, Simon Fraser University, Burnaby,
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More informationOptimization and Optimal Control in Banach Spaces
Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,
More informationContinuous Functions on Metric Spaces
Continuous Functions on Metric Spaces Math 201A, Fall 2016 1 Continuous functions Definition 1. Let (X, d X ) and (Y, d Y ) be metric spaces. A function f : X Y is continuous at a X if for every ɛ > 0
More informationQUADRATIC MAJORIZATION 1. INTRODUCTION
QUADRATIC MAJORIZATION JAN DE LEEUW 1. INTRODUCTION Majorization methods are used extensively to solve complicated multivariate optimizaton problems. We refer to de Leeuw [1994]; Heiser [1995]; Lange et
More informationLocal strong convexity and local Lipschitz continuity of the gradient of convex functions
Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate
More informationConvex Optimization Notes
Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationCubic regularization of Newton s method for convex problems with constraints
CORE DISCUSSION PAPER 006/39 Cubic regularization of Newton s method for convex problems with constraints Yu. Nesterov March 31, 006 Abstract In this paper we derive efficiency estimates of the regularized
More informationA New Look at the Performance Analysis of First-Order Methods
A New Look at the Performance Analysis of First-Order Methods Marc Teboulle School of Mathematical Sciences Tel Aviv University Joint work with Yoel Drori, Google s R&D Center, Tel Aviv Optimization without
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More information14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.
CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity
More informationUnit 2: Problem Classification and Difficulty in Optimization
Unit 2: Problem Classification and Difficulty in Optimization Learning goals Unit 2 I. What is the subject area of multiobjective decision analysis and multiobjective optimization; How does it relate to
More informationMATH2070 Optimisation
MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints
More informationLagrangian Relaxation in MIP
Lagrangian Relaxation in MIP Bernard Gendron May 28, 2016 Master Class on Decomposition, CPAIOR2016, Banff, Canada CIRRELT and Département d informatique et de recherche opérationnelle, Université de Montréal,
More informationELE539A: Optimization of Communication Systems Lecture 16: Pareto Optimization and Nonconvex Optimization
ELE539A: Optimization of Communication Systems Lecture 16: Pareto Optimization and Nonconvex Optimization Professor M. Chiang Electrical Engineering Department, Princeton University March 16, 2007 Lecture
More informationFrom structures to heuristics to global solvers
From structures to heuristics to global solvers Timo Berthold Zuse Institute Berlin DFG Research Center MATHEON Mathematics for key technologies OR2013, 04/Sep/13, Rotterdam Outline From structures to
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More informationImproved Big-M Reformulation for Generalized Disjunctive Programs
Improved Big-M Reformulation for Generalized Disjunctive Programs Francisco Trespalacios and Ignacio E. Grossmann Department of Chemical Engineering Carnegie Mellon University, Pittsburgh, PA 15213 Author
More informationBasic notions of Mixed Integer Non-Linear Programming
Basic notions of Mixed Integer Non-Linear Programming Claudia D Ambrosio CNRS & LIX, École Polytechnique 5th Porto Meeting on Mathematics for Industry, April 10, 2014 C. D Ambrosio (CNRS) April 10, 2014
More informationResearch Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming
Mathematical Problems in Engineering Volume 2010, Article ID 346965, 12 pages doi:10.1155/2010/346965 Research Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming
More informationAN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS
AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well
More informationSemidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming
Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming Christoph Buchheim 1 and Angelika Wiegele 2 1 Fakultät für Mathematik, Technische Universität Dortmund christoph.buchheim@tu-dortmund.de
More informationGlobal Optimization of Non-convex Generalized Disjunctive Programs: A Review on Relaxations and Solution Methods
Noname manuscript No. (will be inserted by the editor) Global Optimization of Non-convex Generalized Disjunctive Programs: A Review on Relaxations and Solution Methods Juan P. Ruiz Ignacio E. Grossmann
More informationPrimal-dual Subgradient Method for Convex Problems with Functional Constraints
Primal-dual Subgradient Method for Convex Problems with Functional Constraints Yurii Nesterov, CORE/INMA (UCL) Workshop on embedded optimization EMBOPT2014 September 9, 2014 (Lucca) Yu. Nesterov Primal-dual
More informationA Stochastic-Oriented NLP Relaxation for Integer Programming
A Stochastic-Oriented NLP Relaxation for Integer Programming John Birge University of Chicago (With Mihai Anitescu (ANL/U of C), Cosmin Petra (ANL)) Motivation: The control of energy systems, particularly
More informationExtended Monotropic Programming and Duality 1
March 2006 (Revised February 2010) Report LIDS - 2692 Extended Monotropic Programming and Duality 1 by Dimitri P. Bertsekas 2 Abstract We consider the problem minimize f i (x i ) subject to x S, where
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationA multistart multisplit direct search methodology for global optimization
1/69 A multistart multisplit direct search methodology for global optimization Ismael Vaz (Univ. Minho) Luis Nunes Vicente (Univ. Coimbra) IPAM, Optimization and Optimal Control for Complex Energy and
More information