Implementation of an αBB-type underestimator in the SGO algorithm

Implementation of an αBB-type underestimator in the SGO algorithm
Process Design & Systems Engineering, November 3, 2010

Refining without branching

Could the αBB underestimator be used without an explicit branching framework? (cf. the SGO algorithm)

We developed a convex formulation that handles breakpoints with binary variables instead of direct branching.

Why?
- It can readily be integrated with the SGO algorithm
- It could turn out to be especially well-suited for some types of mixed-integer problems
- As a convex reformulation it could be of interest in automated reformulation procedures

Refining without branching

[Figure: piecewise underestimation of a nonconvex function on [1, 5] with α_1 = 1, α_2 = 0.6, α_3 = 0 on the subintervals, and with the single choice α = max_k α_k]
Refining without branching

The underestimation error can be described as the difference of a parabola and a piecewise linear function.

[Figure: the parabola and the piecewise linear function over [1, 5]]

Refining without branching

Our formulation (1D for clarity). The constraint f(x) ≤ 0 is rewritten as

f(x) + αx² − W ≤ 0    (1)
W = αx²    (2)

Overestimating W will relax the feasible domain, so we replace W with a piecewise linear function

Ŵ = Σ_{k=1}^{K} [ A_k b_k + (B_k − A_k) s_k ]    (3)

where A_k = α (x_k^L)², B_k = α (x_k^U)², and x_k^L, x_k^U denote the endpoints of interval k.

Refining without branching

The b_k are binary variables, the s_k are continuous and nonnegative:

Ŵ = Σ_{k=1}^{K} [ A_k b_k + (B_k − A_k) s_k ]

We relate x to b_k and s_k with the constraints

Σ_{k=1}^{K} b_k = 1
s_k ≤ b_k,  k = 1, ..., K
x = Σ_{k=1}^{K} [ x_k^L b_k + (x_k^U − x_k^L) s_k ]

Every constraint is convex and the feasible set is relaxed. (A small model sketch follows below.)
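As a concrete illustration, the 1-D breakpoint formulation above can be written, for example, as a Pyomo model. This is a minimal sketch under assumed inputs (the breakpoint grid, the value of α, and a made-up nonconvex constraint function are chosen only for illustration), not the authors' implementation.

```python
# Minimal sketch of the 1-D breakpoint relaxation; grid, alpha and f are assumptions.
import pyomo.environ as pyo

def build_relaxation(f_rule, alpha, grid):
    """Relax f(x) <= 0 on [grid[0], grid[-1]] via f(x) + alpha*x^2 - What <= 0,
    where What is the binary-breakpoint piecewise linear overestimator of W = alpha*x^2."""
    K = len(grid) - 1                                       # number of intervals
    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(grid[0], grid[-1]))
    m.b = pyo.Var(range(K), domain=pyo.Binary)              # interval selectors b_k
    m.s = pyo.Var(range(K), domain=pyo.NonNegativeReals)    # position within interval s_k

    A = [alpha * grid[k] ** 2 for k in range(K)]            # A_k = alpha*(x_k^L)^2
    B = [alpha * grid[k + 1] ** 2 for k in range(K)]        # B_k = alpha*(x_k^U)^2

    # Exactly one interval is active, and s_k may be nonzero only in that interval
    m.one_interval = pyo.Constraint(expr=sum(m.b[k] for k in range(K)) == 1)
    m.s_link = pyo.Constraint(range(K), rule=lambda m, k: m.s[k] <= m.b[k])

    # x and the overestimator What expressed through b_k and s_k
    m.x_link = pyo.Constraint(expr=m.x == sum(
        grid[k] * m.b[k] + (grid[k + 1] - grid[k]) * m.s[k] for k in range(K)))
    What = sum(A[k] * m.b[k] + (B[k] - A[k]) * m.s[k] for k in range(K))

    # Convex relaxed constraint (1): f(x) + alpha*x^2 - What <= 0
    m.relaxed = pyo.Constraint(expr=f_rule(m.x) + alpha * m.x ** 2 - What <= 0)
    return m

# Example use with an illustrative nonconvex constraint sin(x) + 0.5*x <= 0
model = build_relaxation(lambda x: pyo.sin(x) + 0.5 * x, alpha=0.5, grid=[1.0, 3.0, 5.0])
model.obj = pyo.Objective(expr=model.x)     # e.g. minimize x over the relaxation
```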

In two dimensions

An example: f(x, y) = sin(x + y) + x·cos(y), 1 ≤ x ≤ 7, 1 ≤ y ≤ 9

[Figure: surface plot of f over the box]

Underestimator - no breakpoints [Figure: surface plot of the underestimator]

Underestimator - 1 breakpoint [Figure: surface plot]

Underestimator - 1+1 breakpoints [Figure: surface plot]

Underestimator - 1+2 breakpoints [Figure: surface plot]

Underestimator - 1+3 breakpoints [Figure: surface plot]

Underestimator - 3+3 breakpoints [Figure: surface plot]

Constraint feasibility [Figure: feasible region of the original constraint]

Constraint feasibility - 1+1 breakpoints [Figure: relaxed feasible region]

Constraint feasibility - 3+3 breakpoints [Figure: relaxed feasible region]

Constraint feasibility - 7+7 breakpoints [Figure: relaxed feasible region]

About convergence

The largest underestimation error in a subdomain depends only on the values of α_i, i = 1, ..., n and the size of the subdomain. In 1D:

max_{x ∈ [x_k^L, x_k^U]} (Ŵ − αx²) = max_{x ∈ [x_k^L, x_k^U]} α(x − x_k^L)(x_k^U − x) = α((x_k^U − x_k^L)/2)²

An ε precision is guaranteed if the width of the interval satisfies x_k^U − x_k^L ≤ √(4ε/α). The algorithm will converge. (A quick numeric check of this bound follows below.)
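A small numeric check of this error bound (illustrative only; α and the interval are arbitrary choices, not taken from the slides):

```python
# On [xlo, xup] the gap between the secant of alpha*x^2 and alpha*x^2 itself
# peaks at the midpoint with value alpha*((xup - xlo)/2)^2.
import numpy as np

alpha, xlo, xup = 0.6, 1.0, 3.0
x = np.linspace(xlo, xup, 10001)
secant = alpha * ((xlo + xup) * x - xlo * xup)    # chord of alpha*x^2 over [xlo, xup]
gap = secant - alpha * x ** 2
print(gap.max(), alpha * ((xup - xlo) / 2) ** 2)  # both ~0.6

# Interval width guaranteeing an eps-accurate overestimator: xup - xlo <= sqrt(4*eps/alpha)
eps = 1e-3
print(np.sqrt(4 * eps / alpha))
```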

Challenges & Ideas

- The subproblems grow as we add breakpoints; the branching is hidden in the complexity of the convex MINLPs
- The increase in complexity depends on the number of breakpoints, not directly on the number of constraints
- Bound reductions are only partially applicable
- We get less information about subdomains than in branch-and-bound
- A type of minor breakpoint, halving every interval, can be introduced without too much cost

Integration with SGO

1. Preprocessing step: transformations obtained by solving an MILP problem
2. The nonconvex terms are replaced by convex underestimators; the PLFs are initialized
3. The convexified and overestimated problem is solved using a convex MINLP solver
4. Are the termination criteria fulfilled? If yes: optimal solution. If no: update the approximations and return to step 3.

A schematic sketch of this loop is given below.
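The loop in the flowchart can be summarized as a plain Python skeleton. This is a schematic sketch only; build_relaxation, solve_convex_minlp, evaluate_original and add_breakpoints are hypothetical callables supplied by the caller, not an actual SGO API.

```python
def sgo_like_loop(build_relaxation, solve_convex_minlp, evaluate_original,
                  add_breakpoints, eps, max_iter=50):
    """Iteratively tighten the piecewise linear approximations until the
    bound gap is below eps (termination criterion assumed for this sketch)."""
    relaxation = build_relaxation()                    # underestimators + initial PLFs
    best_upper = float("inf")
    for _ in range(max_iter):
        lower, x_rel = solve_convex_minlp(relaxation)  # convex MINLP subproblem
        best_upper = min(best_upper, evaluate_original(x_rel))
        if best_upper - lower <= eps:                  # termination criteria fulfilled?
            break
        relaxation = add_breakpoints(relaxation, x_rel)  # update approximations
    return x_rel, lower, best_upper
```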

References

C.S. Adjiman, S. Dallwig, C.A. Floudas, and A. Neumaier. A global optimization method, αBB, for general twice-differentiable constrained NLPs - I. Theoretical advances. Computers & Chemical Engineering, 22(9):1137–1158, 1998.

C.A. Floudas. Deterministic Global Optimization. Kluwer Academic Publishers, 2000.

A. Lundell, J. Westerlund, and T. Westerlund. Some transformation techniques with applications in global optimization. Journal of Global Optimization, 43(2):391–402, 2009.

A. Lundell and T. Westerlund. Convex underestimation strategies for signomial functions. Optimization Methods and Software, 24:505–522, 2009.

Thank you

C² constraints

We are interested in handling constraints f(x) ≤ 0 where f ∈ C², i.e. f is twice continuously differentiable.

In addition, C² objective functions can be handled by rewriting min f(x) as

min µ  s.t.  f(x) − µ ≤ 0

The αBB underestimator

A C² function f on the domain [x^L, x^U] ⊂ R can always be convexified by adding a parabola p(x) = α(x − x^L)(x − x^U) with a large enough α.

[Figure: function, parabola, and the resulting underestimator on [1, 5]]
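A minimal numeric illustration of this one-dimensional underestimator; the function f and the value of α are assumptions chosen for the example, not taken from the slides. Here α is picked large enough that f''(x) + 2α stays nonnegative on [1, 5].

```python
# g(x) = f(x) + alpha*(x - xL)*(x - xU) underestimates f and is convex for large enough alpha.
import numpy as np

f = lambda x: x * np.sin(x)        # an arbitrary nonconvex C^2 function (assumption)
xL, xU = 1.0, 5.0
alpha = 2.0                        # large enough here: f'' = 2cos(x) - x sin(x) > -2*alpha on [1, 5]

x = np.linspace(xL, xU, 2001)
g = f(x) + alpha * (x - xL) * (x - xU)

print(np.all(g <= f(x) + 1e-12))          # underestimates f on the whole domain
print(np.all(np.diff(g, 2) >= -1e-9))     # numerically convex: nonnegative second differences
```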

The αBB underestimator

This convex underestimator can be extended to multiple dimensions. Let f be a C² function on R^n. For a large enough α the function g = f + q, where

q(x_1, ..., x_n) = α Σ_{i=1}^{n} (x_i − x_i^L)(x_i − x_i^U),

is convex. Tighter underestimators can be found by letting α depend on i:

q(x_1, ..., x_n) = Σ_{i=1}^{n} α_i (x_i − x_i^L)(x_i − x_i^U)

The αBB underestimator

How large must we choose the α_i?

One dimension: g is convex if g'' = f'' + 2α ≥ 0

In general: g is convex if the Hessian matrix H_g is positive semidefinite:

H_g = H_f + 2 diag(α_1, ..., α_n), where H_f = [∂²f/∂x_i∂x_j] is the Hessian of f

Choose the α_i such that the eigenvalues of H_g are nonnegative on the relevant domain.

The αBB underestimator

- The α_i are calculated, e.g., by utilizing interval arithmetic and Gershgorin's circle theorem
- Smaller valid choices usually exist, but finding the optimal α_i is in general as hard as the optimization problem itself
- Branch-and-bound methods can be used to solve the original problem

(A sketch of the Gershgorin-based calculation follows below.)
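As an illustration of the Gershgorin-based choice, here is a hedged sketch of one commonly cited rule from the αBB literature (cf. Adjiman et al., 1998). The interval Hessian bounds H_lo, H_hi over the domain are assumed to be given, e.g. computed with interval arithmetic; the example matrices are made up.

```python
import numpy as np

def gershgorin_alphas(H_lo, H_hi):
    """alpha_i = max(0, -0.5*(H_lo[i,i] - sum_{j!=i} max(|H_lo[i,j]|, |H_hi[i,j]|))).
    Shifting by 2*diag(alpha) keeps every Gershgorin disc of the interval Hessian in the
    closed right half-plane, so H_f + 2*diag(alpha) is positive semidefinite on the domain."""
    n = H_lo.shape[0]
    alphas = np.zeros(n)
    for i in range(n):
        off = sum(max(abs(H_lo[i, j]), abs(H_hi[i, j])) for j in range(n) if j != i)
        alphas[i] = max(0.0, -0.5 * (H_lo[i, i] - off))
    return alphas

# Tiny example with made-up interval Hessian bounds (purely illustrative)
H_lo = np.array([[-2.0, -1.0], [-1.0, 1.0]])
H_hi = np.array([[ 3.0,  2.0], [ 2.0, 4.0]])
print(gershgorin_alphas(H_lo, H_hi))   # -> [2.0, 0.5]
```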

Branch-and-bound search

- Branch to split a domain in two and get tighter underestimators
- Any feasible solution to the original problem gives an upper bound on the optimal objective value
- Any branch with a lower bound greater than the best found upper bound is fathomed (cut)
- A number of techniques are used to speed up the search, e.g. bound reduction

(A generic sketch of such a search loop follows below.)
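A generic sketch of such a branch-and-bound loop; the bounding and branching routines are hypothetical placeholders passed in by the caller, not tied to any particular solver.

```python
import heapq

def branch_and_bound(root, lower_bound, upper_bound, branch, eps=1e-6):
    """root: initial domain; lower_bound(node) -> (lb, x_relaxed);
    upper_bound(node, x_relaxed) -> objective of a feasible point (or +inf);
    branch(node, x_relaxed) -> iterable of subdomains."""
    best_ub, best_x = float("inf"), None
    lb0, x0 = lower_bound(root)
    heap, tie = [(lb0, 0, root, x0)], 1          # best-first search on the lower bound
    while heap:
        lb, _, node, x_rel = heapq.heappop(heap)
        if lb > best_ub - eps:                   # fathom: cannot beat the incumbent
            continue
        ub = upper_bound(node, x_rel)            # any feasible point gives an upper bound
        if ub < best_ub:
            best_ub, best_x = ub, x_rel
        for child in branch(node, x_rel):        # split the domain -> tighter underestimators
            clb, cx = lower_bound(child)
            if clb <= best_ub - eps:
                heapq.heappush(heap, (clb, tie, child, cx))
                tie += 1
    return best_x, best_ub
```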