The Ellipsoid Algorithm


John E. Mitchell, Department of Mathematical Sciences, RPI, Troy, NY 12180 USA. 9 February 2018.

Outline

1. Introduction
2. Assumptions
3. Each iteration
4. Complexity analysis
5. Separation and optimization
6. Relevance to integer optimization
7. NP-hard problems

Introduction

The simplex algorithm requires exponentially many iterations in the worst case for all known pivot rules. In contrast, the ellipsoid algorithm requires only a polynomial number of iterations. The ellipsoid algorithm was developed in the 1970s in the Soviet Union, building on earlier Soviet work on nonlinear programming; Khachiyan used it in 1979 to show that linear programming is solvable in polynomial time.

New York Times, November 7, 1979

[Slides 4-6: images of a New York Times article about the new algorithm. Published: November 7, 1979. Copyright The New York Times.]

Find a feasible solution

The algorithm is a high-dimensional version of binary search. It can be stated as an algorithm to determine whether a feasible solution exists to a system of inequalities: does there exist $x \in \mathbb{R}^n$ satisfying $a_i^T x \le c_i$ for $i = 1, \ldots, m$?

We let $P := \{ x \in \mathbb{R}^n : a_i^T x \le c_i \text{ for } i = 1, \ldots, m \}$.


Assumptions

We need to make two assumptions.

Assumption 1: If $P$ is nonempty then it contains points in a ball of radius $R$ centered at the origin.

Assumption 1 can be justified through the use of Cramer's rule, which lets us bound the norm of any extreme point of $P$.

Assumption 2

Assumption 2: If $P$ is nonempty then it contains a ball of radius $r$.

Assumption 2 is like a nondegeneracy assumption. Cramer's rule also lets us place a lower bound on the distance between two extreme points.

Technically, we can always make Assumption 2: the size of the numbers in the problem data can be used to define a parameter $\varepsilon$, and each $c_i$ can then be increased by $\varepsilon$ in such a way that the new system has a solution if and only if the old system has a solution, and the new system contains a ball of radius $r$ if it is nonempty.


Each iteration

At each iteration we have an ellipsoid which contains (part of) $P$, if $P$ is nonempty. We check the center $\bar{x}$ of the ellipsoid: does it satisfy $a_i^T \bar{x} \le c_i$ for $i = 1, \ldots, m$?

If YES: we are done.

Otherwise, we have a violated constraint $a_i^T \bar{x} > c_i$. We define a new ellipsoid that contains the intersection of the old ellipsoid with the halfspace $\{ x \in \mathbb{R}^n : a_i^T x \le a_i^T \bar{x} \}$.

If $n = 1$ this would divide our search space in half. In $\mathbb{R}^n$ we don't get such a good improvement. Nonetheless, the size of the ellipsoid shrinks enough that we get convergence in a number of iterations polynomial in $n$.
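The update of the center and matrix has a simple closed form. Below is a minimal NumPy sketch of the feasibility version, using the standard central-cut update formulas (see, for example, Bertsimas and Tsitsiklis); the function name, the iteration budget, and the small example at the end are our own illustration, not part of the slides.

```python
import numpy as np

def ellipsoid_feasibility(A, c, R, max_iter):
    """Look for x with A @ x <= c, assuming any solution region meets
    the ball of radius R about the origin (Assumption 1). Requires n >= 2.

    The current ellipsoid is E = {x : (x - z)^T inv(D) (x - z) <= 1},
    stored via its center z and positive definite symmetric matrix D.
    """
    n = A.shape[1]
    z = np.zeros(n)              # E_0: ball of radius R centered at origin
    D = (R ** 2) * np.eye(n)
    for _ in range(max_iter):
        violated = np.nonzero(A @ z > c)[0]
        if violated.size == 0:
            return z             # center satisfies every inequality: done
        a = A[violated[0]]       # a violated constraint a^T x <= c_i
        # Central cut: smallest ellipsoid containing the intersection of
        # E with the halfspace {x : a^T x <= a^T z}.
        b = D @ a / np.sqrt(a @ D @ a)
        z = z - b / (n + 1)
        D = (n * n / (n * n - 1.0)) * (D - (2.0 / (n + 1)) * np.outer(b, b))
    return None                  # budget exhausted: declare P empty

# Tiny example: x1 + x2 <= 3, x1 >= 1, x2 >= 1.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
c = np.array([3.0, -1.0, -1.0])
print(ellipsoid_feasibility(A, c, R=10.0, max_iter=500))
```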

Illustration: trying to find a point in P

[Slide 13, four overlays: the ellipsoid $E_k$ with center $x^k$, the cut $a_i^T x = a_i^T x^k$, and the new ellipsoid $E_{k+1}$ with center $x^{k+1}$ containing the half of $E_k$ on the feasible side of the cut.]

Here an ellipsoid is a set of the form $E_{k+1} = \{ x \in \mathbb{R}^n : (x - x^{k+1})^T M^{-1} (x - x^{k+1}) \le 1 \}$, where $M$ is a positive definite, symmetric matrix. Its volume is proportional to $\sqrt{\det(M)}$.

Complexity analysis

Initial ellipsoid: the ball of radius $R$ centered at the origin.

How many NOes are enough? If the ellipsoid is too small to contain a ball of radius $r$, then $P = \emptyset$.

The algorithm updates from an ellipsoid $E$ to an ellipsoid $E'$. It can be shown that the volumes of these ellipsoids are related as
$$\mathrm{Vol}(E') \le e^{-1/(2(n+1))} \, \mathrm{Vol}(E).$$
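For reference (standard, though not derived on the slides), the exact ratio for a central cut is
$$\frac{\mathrm{Vol}(E')}{\mathrm{Vol}(E)} \;=\; \frac{n}{n+1} \left( \frac{n^2}{n^2 - 1} \right)^{(n-1)/2} \;\le\; e^{-1/(n+1)} \, e^{1/(2(n+1))} \;=\; e^{-1/(2(n+1))},$$
where both factors are bounded using $1 + t \le e^t$.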


How many iterations?

How many NOes to go from a ball of radius $R$ to a ball of radius $r$? Let $v$ and $V$ denote the volumes of balls of radius $r$ and $R$, respectively. Let $E_0$ be the initial ellipsoid and let $E_k$ be the ellipsoid after $k$ iterations. We want the smallest value of $k$ so that $\mathrm{Vol}(E_k) \le v$. We also have
$$\mathrm{Vol}(E_k) \le \left[ e^{-1/(2(n+1))} \right]^k V = e^{-k/(2(n+1))} \, V.$$
Thus it suffices to choose $k$ sufficiently large that $e^{-k/(2(n+1))} V \le v$.

How many iterations, part 2

It suffices to choose $k$ sufficiently large that $e^{-k/(2(n+1))} V \le v$. Equivalently, after dividing by $V$ and taking logs of both sides, we require
$$-\frac{k}{2(n+1)} \le \ln\left(\frac{v}{V}\right),$$
which can be restated as requiring
$$k \ge 2(n+1) \ln\left(\frac{V}{v}\right).$$
The ratio $V/v$ can be chosen so that $\ln(V/v)$ is $O(n^3)$, so we have an algorithm that requires $O(n^4)$ iterations.
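As a concrete sanity check of this bound (our illustration, using the fact that the volume of a ball in $\mathbb{R}^n$ scales as radius$^n$, so $\ln(V/v) = n \ln(R/r)$):

```python
import math

def iteration_bound(n, R, r):
    """Iterations sufficient to shrink from volume V to v, using
    k >= 2(n+1) ln(V/v) with ln(V/v) = n ln(R/r)."""
    return math.ceil(2 * (n + 1) * n * math.log(R / r))

# e.g. n = 10 variables, R = 1e6, r = 1e-6: about 6100 iterations.
print(iteration_bound(10, 1e6, 1e-6))
```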

Work per iteration

In exact arithmetic, we would need to work with square roots. It can be shown that it is OK to approximate the square roots and still have the volume of the ellipsoid decrease at almost the same rate.

Finding optimal solutions to linear programs

If the current center is feasible, cut on the objective function: restrict attention to the halfspace of points at least as good as the current center.
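A sketch of the resulting cut selection (ours, for minimizing $c^T x$ over $\{x : Ax \le b\}$; a feasible center $z$ is cut with $c^T x \le c^T z$):

```python
import numpy as np

def choose_cut(A, rhs, obj, z):
    """Cut vector for the optimizing (sliding-objective) ellipsoid method.

    If the center z violates a constraint a_i^T x <= rhs_i, return a_i
    (a feasibility cut). Otherwise z is feasible, so return the objective
    obj, keeping only {x : obj^T x <= obj^T z}, the points at least as
    good as z.
    """
    violated = np.nonzero(A @ z > rhs)[0]
    if violated.size > 0:
        return A[violated[0]]
    return obj
```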


Convex feasibility problems

Definition. Given a convex set $C \subseteq \mathbb{R}^n$ and a point $\bar{x} \in \mathbb{R}^n$, the separation problem is to determine whether $\bar{x} \in C$. If the answer is NO, a separating hyperplane $a^T x = c$ is determined, where $a \in \mathbb{R}^n$, $c \in \mathbb{R}$, $a^T x \le c$ for all $x \in C$, and $a^T \bar{x} > c$.

Equivalence of separation and feasibility

Theorem. Assume the convex set $C$ contains a ball of radius $r$. If the separation problem for $C$ can be solved in polynomial time, then the feasibility problem for $C$ can also be solved in polynomial time.

Proof. We use the ellipsoid algorithm. At each iteration, we test whether the center $x^k$ of the current ellipsoid is in $C$. If it is not, the separation problem returns a valid constraint for $C$, which we can use to update the ellipsoid. The overall complexity is the product of two polynomial functions, which is polynomial.
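As an illustration (ours, not from the slides), here is a separation oracle for the Euclidean unit ball; any such oracle can drive the ellipsoid update sketched earlier:

```python
import numpy as np

def separate_unit_ball(x):
    """Separation oracle for C = {y : ||y||_2 <= 1}.

    Returns None if x is in C; otherwise returns (a, c) with
    a^T y <= c for all y in C and a^T x > c.
    """
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return None
    a = x / norm      # outward unit normal at the nearest boundary point
    return a, 1.0     # a^T y <= 1 on the ball, while a^T x = norm > 1
```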

" i

Equivalence of separation and optimization

Consider the optimization problem
$$\min_{x \in \mathbb{R}^n} \; c^T x \quad \text{subject to} \quad x \in C \subseteq \mathbb{R}^n \qquad \text{(CP)}$$
where $C$ is a convex set and $c \in \mathbb{R}^n$.

Theorem. The problem (CP) can be solved in polynomial time if and only if the separation problem for $C$ can be solved in polynomial time.

This result follows from using the ellipsoid algorithm. We can optimize over $C$ using the ellipsoid algorithm, generating separating hyperplanes either:
- by solving the separation problem to separate the current center $\bar{x}$ of the ellipsoid from $C$, or
- by cutting on the objective function if $\bar{x} \in C$.



Integer optimization as a linear program

Consider the integer optimization problem
$$\min_x \; c^T x \quad \text{subject to} \quad x \in S, \; x \in \mathbb{Z}^n \subseteq \mathbb{R}^n \qquad \text{(IP)}$$
for some subset $S$, and where $\mathbb{Z}$ denotes the set of integers.

Let $C$ denote the convex hull of the set of integer points in $S$, so $C = \mathrm{conv}(S \cap \mathbb{Z}^n)$. (The convex hull of a set $T$ is the smallest convex set containing $T$. A formal definition will be given next week.)

All the extreme points of $C$ are feasible integer points, so we could in principle solve (IP) by solving the linear program
$$\min_x \; c^T x \quad \text{subject to} \quad x \in C \subseteq \mathbb{R}^n. \qquad \text{(LP)}$$
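A tiny illustration (ours): if $S = \{x \in \mathbb{R}^2 : x_1 + x_2 \le 1.5, \; x \ge 0\}$, then $S \cap \mathbb{Z}^2 = \{(0,0), (1,0), (0,1)\}$ and $C = \mathrm{conv}(S \cap \mathbb{Z}^2) = \{x \ge 0 : x_1 + x_2 \le 1\}$; every extreme point of $C$ is one of the three feasible integer points, so minimizing $c^T x$ over $C$ solves the integer problem.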


Solving the linear program

The difficulty with this approach is that it is very hard to get an explicit formulation for $C$. If we can solve the separation problem for $C$ in polynomial time then we can solve (LP) in polynomial time, which means we can solve (IP) in polynomial time. Thus, for NP-complete problems, we cannot expect to solve the separation problem in polynomial time.

Conversely, if we can solve the optimization problem in polynomial time then we can solve the separation problem in polynomial time. For example, we can find a minimum weight perfect matching on a graph $G = (V, E)$ in polynomial time, and indeed the separation problem of determining whether a point $\bar{x}$ is in the convex hull of the set of perfect matchings can be solved in $O(|V|^4)$ time.


NP-hard problems

A problem is called NP-hard if it is at least as hard as an NP-complete problem, in the sense that the NP-complete problem can be polynomially reduced to the NP-hard problem. An NP-hard problem need not be a feasibility problem. In particular, optimization versions of NP-complete integer programming feasibility problems are NP-hard.