Solving Global Optimization Problems by Interval Computation


Solving Global Optimization Problems by Interval Computation

Milan Hladík, Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic, http://kam.mff.cuni.cz/~hladik/

Seminar on Computational Aspects of Optimisation, April 27, 2015

Example: one of the Rastrigin functions (figure).

Global Optimization: Questions

Can we find the global minimum? Can we prove that the solution found is optimal? Can we prove uniqueness?

Answer: yes (under certain assumptions), by using interval computations.

Interval Computations: Notation

An interval matrix: A := [A̲, A̅] = {A ∈ ℝ^(m×n) : A̲ ≤ A ≤ A̅}.

The center and radius matrices: A_c := ½(A̅ + A̲), A_Δ := ½(A̅ − A̲).

The set of all m×n interval matrices is denoted IR^(m×n).

Main Problem: Let f : ℝⁿ → ℝᵐ and x ∈ IRⁿ. Determine the image f(x) = {f(x) : x ∈ x}, or at least a tight interval enclosure of it.

Interval Arithmetic

Interval arithmetic (with proper outward rounding when implemented): for the arithmetical operations (+, −, ·, /), the images are readily computed:

a + b = [a̲ + b̲, a̅ + b̅],
a − b = [a̲ − b̅, a̅ − b̲],
a · b = [min(a̲b̲, a̲b̅, a̅b̲, a̅b̅), max(a̲b̲, a̲b̅, a̅b̲, a̅b̅)],
a / b = [min(a̲/b̲, a̲/b̅, a̅/b̲, a̅/b̅), max(a̲/b̲, a̲/b̅, a̅/b̲, a̅/b̅)], 0 ∉ b.

Some basic functions, too: x², exp(x), sin(x), ...

Can we evaluate every arithmetical expression on intervals? Yes, but in general with overestimation, due to dependencies.

Example (evaluate f(x) = x² − x on x = [−1, 2]):

x² − x = [−1, 2]² − [−1, 2] = [−2, 5],
x(x − 1) = [−1, 2]([−1, 2] − 1) = [−4, 2],
(x − ½)² − ¼ = ([−1, 2] − ½)² − ¼ = [−¼, 2].
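The arithmetic above is easy to prototype. A minimal Python sketch (illustrative only: a real implementation such as INTLAB or mpmath.iv also controls the rounding mode, which is omitted here) reproduces the three evaluations of f(x) = x² − x on [−1, 2]:

```python
# Minimal interval arithmetic sketch; no directed rounding, so not rigorous.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, o):
        o = _coerce(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        o = _coerce(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        o = _coerce(o)
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def sqr(self):
        # Image of x^2 over the interval (tighter than self * self).
        if self.lo <= 0.0 <= self.hi:
            return Interval(0.0, max(self.lo ** 2, self.hi ** 2))
        return Interval(min(self.lo ** 2, self.hi ** 2),
                        max(self.lo ** 2, self.hi ** 2))

def _coerce(v):
    return v if isinstance(v, Interval) else Interval(v, v)

x = Interval(-1.0, 2.0)
print(x.sqr() - x)                              # x^2 - x        -> [-2, 5]
print(x * (x - 1))                              # x(x - 1)       -> [-4, 2]
print((x - 0.5).sqr() - Interval(0.25, 0.25))   # (x-1/2)^2-1/4  -> [-0.25, 2]
```

The three results differ only because of the dependency problem: each expression mentions x a different number of times.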

Mean Value Form

Theorem: Let f : ℝⁿ → ℝ, x ∈ IRⁿ and a ∈ x. Then

f(x) ⊆ f(a) + ∇f(x)ᵀ(x − a).

Proof: By the mean value theorem, for any x ∈ x there is c ∈ x such that

f(x) = f(a) + ∇f(c)ᵀ(x − a) ∈ f(a) + ∇f(x)ᵀ(x − a).

Improvements:

- successive mean value form:
  f(x) ⊆ f(a) + f_{x1}(x₁, a₂, ..., aₙ)(x₁ − a₁) + f_{x2}(x₁, x₂, a₃, ..., aₙ)(x₂ − a₂) + ... + f_{xn}(x₁, ..., x_{n−1}, xₙ)(xₙ − aₙ),
- replace derivatives by slopes.
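A small Python sketch comparing naive interval evaluation with the mean value form for f(x) = x² − x on a narrow box (plain floats, no directed rounding, so illustrative rather than verified):

```python
# Mean value form vs. naive interval evaluation for f(x) = x^2 - x.
# Intervals are plain (lo, hi) tuples; rounding control is omitted.

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def mul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

x = (0.9, 1.1)          # the box
a = 1.0                 # expansion point a in x

# Naive evaluation x*x - x (suffers from the dependency problem).
naive = sub(mul(x, x), x)

# Mean value form: f(a) + f'(x) * (x - a), with f'(x) = 2x - 1.
fa  = a * a - a                              # = 0.0
dfx = sub(mul((2.0, 2.0), x), (1.0, 1.0))    # 2x - 1 = [0.8, 1.2]
mvf = add((fa, fa), mul(dfx, sub(x, (a, a))))

print("naive:", naive)   # roughly (-0.29, 0.31)
print("mvf:  ", mvf)     # roughly (-0.12, 0.12) -- tighter on a narrow box
```

On narrow boxes the mean value form is typically much tighter than naive evaluation; on wide boxes the opposite can happen, which is why implementations often intersect both enclosures.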

Interval Linear Equations

Let A ∈ IR^(m×n) and b ∈ IRᵐ. The family of systems

Ax = b, A ∈ A, b ∈ b,

is called interval linear equations and abbreviated as Ax = b.

Solution set: the solution set is defined as

Σ := {x ∈ ℝⁿ : ∃A ∈ A, ∃b ∈ b : Ax = b}.

Theorem (Oettli–Prager, 1964): The solution set Σ is a non-convex polyhedral set, described by

|A_c x − b_c| ≤ A_Δ |x| + b_Δ.
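The Oettli–Prager inequality gives a direct membership test for Σ. A short numpy sketch, using the Barth–Nuding system of the next slide as data (illustrative; exact rational arithmetic would be needed for a rigorous test on the boundary):

```python
# Oettli-Prager test: x is in the solution set iff |A_c x - b_c| <= A_d |x| + b_d.
import numpy as np

A_lo = np.array([[2.0, -2.0], [-1.0, 2.0]])
A_hi = np.array([[4.0,  1.0], [ 2.0, 4.0]])
b_lo = np.array([-2.0, -2.0])
b_hi = np.array([ 2.0,  2.0])

Ac, Ad = (A_hi + A_lo) / 2, (A_hi - A_lo) / 2   # center and radius of A
bc, bd = (b_hi + b_lo) / 2, (b_hi - b_lo) / 2   # center and radius of b

def in_solution_set(x):
    x = np.asarray(x, dtype=float)
    return bool(np.all(np.abs(Ac @ x - bc) <= Ad @ np.abs(x) + bd))

print(in_solution_set([0.0, 0.0]))    # True: A*0 = 0 lies in b for every A in A
print(in_solution_set([10.0, 10.0]))  # False: far outside the solution set
```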

Interval Linear Equations

Example (Barth & Nuding, 1974):

( [2,4]   [−2,1] ) ( x₁ )   ( [−2,2] )
( [−1,2]  [2,4]  ) ( x₂ ) = ( [−2,2] )

(Figure: the solution set in the (x₁, x₂)-plane, axes ranging over [−4, 4].)

Interval Linear Equations

Enclosure: since Σ is hard to determine and deal with, we seek enclosures x ∈ IRⁿ such that Σ ⊆ x. Many methods for computing enclosures exist; they usually employ preconditioning.

Preconditioning (Hansen, 1965): Let R ∈ ℝ^(n×n). The preconditioned system of equations is

(RA)x = Rb.

Remarks:

- the solution set of the preconditioned system contains Σ,
- usually, we take R ≈ (A_c)⁻¹,
- then we can even compute the best enclosure (Hansen, 1992; Bliek, 1992; Rohn, 1993).
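One standard enclosure method of this preconditioned kind is the Krawczyk iteration x ← Rb + (I − RA)x. A numpy sketch in midpoint–radius form, applied to the "typical case" system shown below (floating point without directed rounding, so illustrative rather than verified):

```python
# Krawczyk iteration for interval linear systems, midpoint-radius form.
import numpy as np

def krawczyk_enclosure(A_lo, A_hi, b_lo, b_hi, iters=100):
    """Enclose the solution set of Ax = b via x <- R b + (I - R A) x."""
    Ac, Ad = (A_hi + A_lo) / 2, (A_hi - A_lo) / 2
    bc, bd = (b_hi + b_lo) / 2, (b_hi - b_lo) / 2
    R  = np.linalg.inv(Ac)              # preconditioner R ~ (A_c)^(-1)
    Mc = np.eye(len(bc)) - R @ Ac       # I - R*A: midpoint (numerically ~ 0)
    Md = np.abs(R) @ Ad                 #          radius
    yc, yd = R @ bc, np.abs(R) @ bd     # R*b: midpoint and radius
    xc, xd = yc.copy(), yd.copy()
    for _ in range(iters):              # fixed-point iteration on the box
        xc = yc + Mc @ xc
        xd = yd + np.abs(Mc) @ xd + Md @ (np.abs(xc) + xd)
    return xc - xd, xc + xd             # componentwise enclosure [lo, hi]

# the "typical case" system from the slides
A_lo = np.array([[6.0, 2.0], [1.0, 4.0]])
A_hi = np.array([[7.0, 3.0], [2.0, 5.0]])
b_lo = np.array([6.0, 7.0])
b_hi = np.array([8.0, 9.0])
lo, hi = krawczyk_enclosure(A_lo, A_hi, b_lo, b_hi)
print(lo, hi)
```

The iteration converges whenever the radius of I − RA has spectral radius below one, which preconditioning by R ≈ (A_c)⁻¹ is designed to achieve; the resulting box contains Σ but is generally wider than the optimal hull.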

Interval Linear Equations

Example (Barth & Nuding, 1974, continued):

( [2,4]   [−2,1] ) ( x₁ )   ( [−2,2] )
( [−1,2]  [2,4]  ) ( x₂ ) = ( [−2,2] )

(Figure: the solution set together with the enclosure obtained after preconditioning, axes ranging over [−14, 14].)

Interval Linear Equations

Example (typical case):

( [6,7]  [2,3] ) ( x₁ )   ( [6,8] )
( [1,2]  [4,5] ) ( x₂ ) = ( [7,9] )

(Figure: the solution set and its enclosure, with x₁ roughly in [−0.5, 1.5] and x₂ roughly in [0.5, 2.5].)

Nonlinear Equations

System of nonlinear equations: let f : ℝⁿ → ℝⁿ. Solve

f(x) = 0, x ∈ x,

where x ∈ IRⁿ is an initial box.

Interval Newton method (Moore, 1966): letting x⁰ ∈ x, the interval Newton operator reads

N(x) := x⁰ − ∇f(x)⁻¹ f(x⁰).

- N(x) is computed from the interval linear equations ∇f(x)(x⁰ − N(x)) = f(x⁰),
- iterations: x := x ∩ N(x),
- fast (locally quadratically convergent) and rigorous (omits no root in x),
- if N(x) ⊆ int x, then there is a unique root in x.

Interval Newton Method

Example: f(x) = x³ − x + 0.2. (Figure: graph of f over roughly [−2, 1.5].)

In six iterations, precision 10⁻¹¹ (quadratic convergence).
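In one dimension the interval Newton step is only a few lines. A Python sketch for the same f(x) = x³ − x + 0.2, on a box [0.8, 1] where 0 ∉ f′(x) so no extended division is needed (plain floats, no outward rounding, so illustrative only):

```python
# 1-D interval Newton for f(x) = x^3 - x + 0.2 on a box with 0 not in f'(x).

def f(x):
    return x ** 3 - x + 0.2

def fprime_range(lo, hi):
    # f'(x) = 3x^2 - 1; compute the image of x^2 on [lo, hi] first.
    if lo <= 0.0 <= hi:
        sq = (0.0, max(lo * lo, hi * hi))
    else:
        sq = (min(lo * lo, hi * hi), max(lo * lo, hi * hi))
    return (3 * sq[0] - 1, 3 * sq[1] - 1)

def newton_step(lo, hi):
    m = 0.5 * (lo + hi)
    dl, dh = fprime_range(lo, hi)
    assert dl > 0 or dh < 0, "0 in f'(x): extended division needed"
    q = sorted([f(m) / dl, f(m) / dh])            # f(m) / f'(x)
    n_lo, n_hi = max(lo, m - q[1]), min(hi, m - q[0])   # x cut with N(x)
    if n_lo > n_hi:            # can only happen through rounding; keep old box
        return lo, hi
    return n_lo, n_hi

lo, hi = 0.8, 1.0
for _ in range(8):
    lo, hi = newton_step(lo, hi)
print(lo, hi)    # tight enclosure of the root near 0.879
```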

Eigenvalues of Interval Matrices

For a symmetric A ∈ ℝ^(n×n), denote its eigenvalues λ₁(A) ≥ ... ≥ λₙ(A). For A ∈ IR^(n×n), denote its eigenvalue sets

λᵢ(A) := {λᵢ(A) : A ∈ A, A = Aᵀ}, i = 1, ..., n.

Theorem: Checking whether 0 ∈ λᵢ(A) for some i = 1, ..., n is NP-hard.

We have the following enclosures for the eigenvalue sets:

λᵢ(A) ⊆ [λᵢ(A_c) − ρ(A_Δ), λᵢ(A_c) + ρ(A_Δ)], i = 1, ..., n.

By Hertz (1992),

max λ₁(A) = max over z ∈ {±1}ⁿ of λ₁(A_c + diag(z) A_Δ diag(z)),
min λₙ(A) = min over z ∈ {±1}ⁿ of λₙ(A_c − diag(z) A_Δ diag(z)).
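The spectral enclosure above is cheap to compute with numpy. The sketch below checks it against random symmetric members of a small interval matrix (the matrix data are made up for illustration):

```python
# Enclosure lambda_i(A) in [lambda_i(A_c) - rho(A_d), lambda_i(A_c) + rho(A_d)]
# for a symmetric interval matrix, checked against random symmetric samples.
import numpy as np

rng = np.random.default_rng(0)
Ac = np.array([[0.0, 2.0], [2.0, 1.0]])    # center (symmetric)
Ad = np.array([[0.5, 0.1], [0.1, 0.5]])    # radius (symmetric, nonnegative)

lam_c = np.sort(np.linalg.eigvalsh(Ac))[::-1]   # lambda_1(A_c) >= ... >= lambda_n(A_c)
rho   = np.max(np.abs(np.linalg.eigvalsh(Ad)))  # spectral radius of A_d
enc   = [(l - rho, l + rho) for l in lam_c]

for _ in range(200):
    E = rng.uniform(-1.0, 1.0, Ac.shape)   # random symmetric perturbation
    E = (E + E.T) / 2                      # pattern: |E * A_d| <= A_d
    A = Ac + E * Ad                        # a symmetric member of A
    lams = np.sort(np.linalg.eigvalsh(A))[::-1]
    assert all(lo - 1e-12 <= l <= hi + 1e-12
               for l, (lo, hi) in zip(lams, enc))
print("all sampled eigenvalues inside the enclosures:", enc)
```

The containment follows from Weyl's perturbation theorem together with ρ(B) ≤ ρ(A_Δ) for any symmetric B with |B| ≤ A_Δ entrywise.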

Global Optimization

Global optimization problem: compute global (not just local!) optima of

min f(x) subject to g(x) ≤ 0, h(x) = 0, x ∈ x⁰,

where x⁰ ∈ IRⁿ is an initial box.

Theorem (Zhu, 2005): There is no algorithm solving global optimization problems using the operations +, ·, and sin.

Proof: From Matiyasevich's theorem, solving Hilbert's 10th problem.

Remark: Using the arithmetical operations only, the problem is decidable by Tarski's theorem (1951).

Global Optimization by Interval Techniques

Basic idea: split the initial box x⁰ into sub-boxes. If a sub-box cannot contain an optimal solution, remove it; otherwise split it again into sub-boxes and repeat.

Motivating Example: System Solving

Example: x² + y² ≤ 16, x² + y² ≥ 9.

(Figures: the exact solution set, an annulus, and its subpaving approximation by boxes.)

Motivating Example: System Solving

Example (thanks to Elif Garajová): running times for decreasing box-size tolerance ε:

ε = 1.0: 0.952 s; ε = 0.5: 2.224 s; ε = 0.125: 9.966 s.

Interval Approach to Global Optimization

Branch & prune scheme:

1: L := {x⁰}  [list of boxes]
2: c := ∞  [upper bound on the minimal value]
3: while L ≠ ∅ do
4:   choose x ∈ L and remove x from L,
5:   contract x,
6:   find a feasible point x ∈ x and update c,
7:   if max_i x_{Δi} > ε then
8:     split x into sub-boxes and put them into L,
9:   else
10:    put x into the output boxes,
11:  end if
12: end while

It is a rigorous method: it encloses all global minima in a set of boxes.
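A minimal 1-D instance of this scheme, without the contraction step: unconstrained minimization of f(x) = x⁴ − 2x² (global minima at x = ±1 with f = −1), with the natural interval extension as lower bound and midpoint sampling as feasible point. Plain floats without outward rounding, so a sketch of the idea rather than a verified solver:

```python
# Minimal interval branch & bound for min x^4 - 2x^2 on [-5.12, 5.12].

def sqr(lo, hi):
    # exact image of x^2 over [lo, hi]
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def f_range(lo, hi):
    s = sqr(lo, hi)                  # x^2
    q = sqr(*s)                      # x^4 (s is nonnegative)
    return (q[0] - 2 * s[1], q[1] - 2 * s[0])

def f(x):
    return x ** 4 - 2 * x ** 2

def branch_and_bound(lo, hi, eps=1e-4):
    work, out = [(lo, hi)], []
    c = f(0.5 * (lo + hi))               # upper bound from a sample point
    while work:
        blo, bhi = work.pop()
        if f_range(blo, bhi)[0] > c:     # lower bound above best value:
            continue                     # box cannot contain a global minimum
        c = min(c, f(0.5 * (blo + bhi))) # update the upper bound
        if bhi - blo > eps:
            m = 0.5 * (blo + bhi)
            work += [(blo, m), (m, bhi)]
        else:
            out.append((blo, bhi))       # small enough: report the box
    return c, out

best, boxes = branch_and_bound(-5.12, 5.12)
print(best, len(boxes))
```

Both global minimizers survive in the output list (pruning only discards boxes whose lower bound exceeds a value actually attained), although the list may also keep boxes that a later, better upper bound would have pruned.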

Box Selection

Which box to choose?

- the oldest one,
- the one with the largest edge, i.e., the one for which max_i x_{Δi} is maximal,
- the one with the smallest lower bound f̲(x).

Division Directions

How to divide the box?

1. Take the widest edge of x, that is, k := arg max_{i=1,...,n} x_{Δi}.
2. (Walster, 1992) Choose a coordinate in which f varies possibly the most, k := arg max_{i=1,...,n} |f_{xi}(x)| · x_{Δi}.
3. (Ratz, 1992) Similar to the previous rule, but uses k := arg max_{i=1,...,n} w(f_{xi}(x) · x_i).

Remarks:

- by Ratschek & Rokne (2009), there is no best strategy for splitting, so combine several of them,
- the splitting strategy influences the overall performance.

Contracting and Pruning

Aim: shrink x to a smaller box (or remove it completely) such that no global minimum is lost.

Simple techniques:

- if 0 ∉ h_i(x) for some i, then remove x,
- if g̲_j(x) > 0 for some j, then remove x,
- if f̲_{xi}(x) > 0 for some i, then fix x_i := x̲_i,
- if f̅_{xi}(x) < 0 for some i, then fix x_i := x̅_i.

Optimality conditions: employ the Fritz John (or the Karush–Kuhn–Tucker) conditions

u₀ ∇f(x) + uᵀ ∇h(x) + vᵀ ∇g(x) = 0, h(x) = 0, v_l g_l(x) = 0 ∀l, ‖(u₀, u, v)‖ = 1,

and solve them by the interval Newton method.

Contracting and Pruning

Inside the feasible region: suppose there are no equality constraints and g_j(x) < 0 for all j.

- (monotonicity test) if 0 ∉ f_{xi}(x) for some i, then remove x,
- apply the interval Newton method to the additional constraint ∇f(x) = 0,
- (nonconvexity test) if the interval Hessian ∇²f(x) contains no positive semidefinite matrix, then remove x.

Contracting and Pruning

Constraint propagation: iteratively reduce the domains of the variables, handling the relations and the domains so that no feasible solution is removed.

Example: Consider the constraint

x + yz = 7, x ∈ [0, 3], y ∈ [3, 5], z ∈ [2, 4].

Eliminate x:

x = 7 − yz ∈ 7 − [3, 5][2, 4] = [−13, 1],

thus the domain for x is [0, 3] ∩ [−13, 1] = [0, 1].

Eliminate y:

y = (7 − x)/z ∈ (7 − [0, 1])/[2, 4] = [1.5, 3.5],

thus the domain for y is [3, 5] ∩ [1.5, 3.5] = [3, 3.5].
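The propagation steps of this example can be replayed in a few lines of Python (the code reproduces the domains derived above and reduces z one step further; tuples (lo, hi), no rounding control):

```python
# Constraint propagation for x + y*z = 7 with the domains from the slide.

def mul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def div(a, b):
    assert b[0] > 0 or b[1] < 0, "division by an interval containing 0"
    ps = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(ps), max(ps))

def meet(a, b):
    # intersection of two intervals
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    assert lo <= hi, "empty intersection: no feasible point in the box"
    return (lo, hi)

x, y, z, seven = (0.0, 3.0), (3.0, 5.0), (2.0, 4.0), (7.0, 7.0)

x = meet(x, sub(seven, mul(y, z)))    # x = 7 - y*z    -> [0, 1]
y = meet(y, div(sub(seven, x), z))    # y = (7 - x)/z  -> [3, 3.5]
z = meet(z, div(sub(seven, x), y))    # z = (7 - x)/y  -> [2, 7/3]
print(x, y, z)
```

Note that each reduced domain feeds into the next elimination; iterating the three updates to a fixed point is exactly the propagation loop described above.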

Feasibility Test

Aim: find a feasible point x*, and update c := min(c, f(x*)).

- If there are no equality constraints, take e.g. x* := x_c.
- If there are k equality constraints, fix n − k variables at x_i := (x_c)_i and solve the resulting system of equations by the interval Newton method.
- If k = 1, fix the variables corresponding to the smallest absolute values in ∇h(x_c).
- In general, for k > 1, transform the matrix ∇h(x_c) to row echelon form using complete pivoting, and fix the components corresponding to the rightmost columns.
- We can also include f(x) ≤ c among the constraints.

Lower Bounds

Aim: given a box x ∈ IRⁿ, determine a lower bound on f(x).

Why? If f̲(x) > c, we can remove x; moreover, the minimum over all boxes gives a lower bound on the optimal value.

Methods:

- interval arithmetic,
- mean value form,
- Lipschitz constant approach,
- αBB algorithm, ...

Lower Bounds: the αBB Algorithm

Special case: bilinear terms. For every y ∈ y ∈ IR and z ∈ z ∈ IR we have

yz ≥ max{y̲z + z̲y − y̲z̲, y̅z + z̅y − y̅z̅}.

αBB algorithm (Androulakis, Maranas & Floudas, 1995): consider an underestimator g(x) ≤ f(x) of the form

g(x) := f(x) + α(x̲ − x)ᵀ(x̅ − x), where α ≥ 0.

We want g(x) to be convex, so that its minimum, a lower bound on f over x, is easy to determine. In order that g(x) be convex, its Hessian

∇²g(x) = ∇²f(x) + 2αIₙ

must be positive semidefinite on x ∈ x. Thus we put α := max{0, −½ λ_min(∇²f(x))}.
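A one-dimensional sketch of this construction: for f(x) = sin(x) on [0, 2π] we have f″(x) ∈ [−1, 1], hence α = max{0, −½·(−1)} = ½, and g(x) = f(x) + α(x̲ − x)(x̅ − x) is a convex underestimator. The code checks underestimation on a grid (a grid stands in for the exact convex minimization, so this is illustrative only):

```python
# alphaBB convex underestimator for f(x) = sin(x) on [0, 2*pi].
import math

x_lo, x_hi = 0.0, 2.0 * math.pi
alpha = 0.5                       # from f''(x) in [-1, 1]: alpha = 1/2

def f(x):
    return math.sin(x)

def g(x):
    # g'' = -sin(x) + 2*alpha >= 0, so g is convex on the box
    return f(x) + alpha * (x_lo - x) * (x_hi - x)

# g underestimates f on the box and matches it at the endpoints,
# because (x_lo - x)(x_hi - x) <= 0 for x in the box.
xs = [x_lo + (x_hi - x_lo) * i / 1000 for i in range(1001)]
assert all(g(x) <= f(x) + 1e-12 for x in xs)
assert abs(g(x_lo) - f(x_lo)) < 1e-12 and abs(g(x_hi) - f(x_hi)) < 1e-12

lb = min(g(x) for x in xs)        # grid proxy for the convex minimum of g
print(lb, "<=", min(f(x) for x in xs))
```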

Illustration of a Convex Underestimator

(Figure: a function f(x) and its convex underestimator g(x) over the box.)

Examples

Example (the COPRIN examples, 2007, precision 10⁻⁶):

tf12 (origin: COCONUT, solutions: 1, computation time: 60 s):

min x₁ + ½x₂ + ⅓x₃
s.t. −x₁ − (i/m)x₂ − (i/m)²x₃ + tan(i/m) ≤ 0, i = 1, ..., m (m = 101).

o32 (origin: COCONUT, solutions: 1, computation time: 2.04 s):

min 37.293239x₁ + 0.8356891x₁x₅ + 5.3578547x₃² − 40792.141
s.t. −0.0022053x₃x₅ + 0.0056858x₂x₅ + 0.0006262x₁x₄ − 6.665593 ≤ 0,
0.0022053x₃x₅ − 0.0056858x₂x₅ − 0.0006262x₁x₄ − 85.334407 ≤ 0,
0.0071317x₂x₅ + 0.0021813x₃² + 0.0029955x₁x₂ − 29.48751 ≤ 0,
−0.0071317x₂x₅ − 0.0021813x₃² − 0.0029955x₁x₂ + 9.48751 ≤ 0,
0.0047026x₃x₅ + 0.0019085x₃x₄ + 0.0012547x₁x₃ − 15.699039 ≤ 0,
−0.0047026x₃x₅ − 0.0019085x₃x₄ − 0.0012547x₁x₃ + 10.699039 ≤ 0.

Rastrigin (origin: Myatt, 2004, solutions: 1 (approx.), time: 2.07 s):

min 10n + Σ_{j=1}^n ((x_j − 1)² − 10 cos(2π(x_j − 1))), where n = 10, x_j ∈ [−5.12, 5.12].

References

- C. A. Floudas and P. M. Pardalos, editors. Encyclopedia of Optimization. 2nd ed. Springer, New York, 2009.
- E. R. Hansen and G. W. Walster. Global Optimization Using Interval Analysis. 2nd ed. Marcel Dekker, New York, 2004.
- R. B. Kearfott. Rigorous Global Search: Continuous Problems, volume 13 of Nonconvex Optimization and Its Applications. Kluwer, Dordrecht, 1996.
- A. Neumaier. Complete search in continuous global optimization and constraint satisfaction. Acta Numerica, 13:271–369, 2004.

Rigorous Global Optimization Software

- GlobSol (by R. Baker Kearfott), written in Fortran 95, open source; conversions from AMPL and GAMS representations exist. http://interval.louisiana.edu/
- ALIAS (by Jean-Pierre Merlet, COPRIN team), a C++ library for system solving, with a Maple interface. http://www-sop.inria.fr/coprin/logiciels/alias/
- COCONUT Environment, open-source C++ classes. http://www.mat.univie.ac.at/~coconut/coconut-environment/
- GLOBAL (by Tibor Csendes), for Matlab/Intlab, free for academic purposes. http://www.inf.u-szeged.hu/~csendes/linkek_en.html
- PROFIL/BIAS (by O. Knüppel et al.), a free C++ class library. http://www.ti3.tu-harburg.de/software/profilenglisch.html

See also the collections of C. A. Floudas (http://titan.princeton.edu/tools/) and A. Neumaier (http://www.mat.univie.ac.at/~neum/glopt.html).