Computational Geometry Lecture Linear Programming


Institute for Theoretical Informatics, Faculty of Informatics
Tamara Mchedlidze, Darren Strash
09.11.2015

Profit optimization

You are the boss of a company that produces two products P1 and P2 from three raw materials R1, R2 and R3. Suppose you produce x1 items of product P1 and x2 items of product P2, and that each item of P1 and P2 yields a profit of 300€ and 500€, respectively. Then the total profit is:

G(x1, x2) = 300·x1 + 500·x2

Assume that the amount of raw material needed per item is:

P1: 4R1 + R2
P2: 11R1 + R2 + R3

and that your warehouse holds 880 units of R1, 150 of R2 and 60 of R3. This yields the constraints:

R1: 4·x1 + 11·x2 ≤ 880
R2: x1 + x2 ≤ 150
R3: x2 ≤ 60

Which choice of (x1, x2) maximizes your profit?
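For an LP this small, the optimum can be checked by brute force: the maximum of a bounded LP is attained at a vertex of the feasible region, so it suffices to intersect every pair of constraint boundary lines and keep the best feasible intersection point. A minimal Python sketch (not part of the lecture; the names are ad hoc):

```python
from itertools import combinations

# Constraints in the form a1*x1 + a2*x2 <= b, including x1 >= 0, x2 >= 0.
constraints = [
    (4.0, 11.0, 880.0),   # R1
    (1.0, 1.0, 150.0),    # R2
    (0.0, 1.0, 60.0),     # R3
    (-1.0, 0.0, 0.0),     # x1 >= 0
    (0.0, -1.0, 0.0),     # x2 >= 0
]

def profit(x1, x2):
    return 300.0 * x1 + 500.0 * x2

def feasible(x1, x2, eps=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + eps for a1, a2, b in constraints)

best = None
# The optimum of a bounded LP lies at a vertex, i.e. at the
# intersection of two constraint boundary lines.
for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:          # parallel boundaries: no vertex
        continue
    x1 = (b * c2 - a2 * d) / det  # Cramer's rule
    x2 = (a1 * d - b * c1) / det
    if feasible(x1, x2) and (best is None or profit(x1, x2) > profit(*best)):
        best = (x1, x2)

print(best, profit(*best))  # (110.0, 40.0) 53000.0
```

This confirms the solution shown on the next slide: produce 110 items of P1 and 40 of P2 for a profit of 53.000€.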

Solution

[Figure: feasible region in the (x1, x2)-plane with isocost lines orthogonal to the vector (300, 500); isocost values 15.000€, 30.000€, 45.000€, 53.000€]

Linear constraints:
R1: 4·x1 + 11·x2 ≤ 880
R2: x1 + x2 ≤ 150
R3: x2 ≤ 60
x1 ≥ 0, x2 ≥ 0

Linear objective function:
G(x1, x2) = 300·x1 + 500·x2 = (300, 500) · (x1, x2)

The maximum is attained at G(110, 40) = 53.000€.

In matrix form, the constraints read Ax ≤ b, x ≥ 0, and we maximize c^T x, where c is the normal vector of the isocost lines. We seek the maximum value of the objective function under the constraints: max{c^T x | Ax ≤ b, x ≥ 0}.

Linear programming

Definition: Given a set of linear constraints H and a linear objective function c in R^d, a linear program (LP) is formulated as follows:

maximize    c_1 x_1 + c_2 x_2 + ... + c_d x_d
subject to  a_{1,1} x_1 + ... + a_{1,d} x_d ≤ b_1
            a_{2,1} x_1 + ... + a_{2,d} x_d ≤ b_2
            ...
            a_{n,1} x_1 + ... + a_{n,d} x_d ≤ b_n

H is a set of half-spaces in R^d. We search for a point x ∈ ⋂_{h∈H} h that maximizes c^T x, i.e. max{c^T x | Ax ≤ b, x ≥ 0}.

Linear programming is a central method in operations research.

Algorithms for LPs

There are many algorithms to solve LPs:
Simplex algorithm [Dantzig, 1947]
Ellipsoid method [Khachiyan, 1979]
Interior-point method [Karmarkar, 1984]

They work well in practice, especially for large values of n (number of constraints) and d (number of variables).

Today: the special case d = 2.

Possibilities for the solution space:
⋂H = ∅: the LP is infeasible
⋂H is unbounded in the direction c
the feasible region ⋂H is bounded, but the solution is not unique (an edge of the region is orthogonal to c)
a unique solution (a vertex of the feasible region)

First approach

Idea: Compute the feasible region ⋂H and search for the vertex p that maximizes c^T p.

The half-planes are convex, so let's try a simple divide-and-conquer algorithm:

IntersectHalfplanes(H):
    if |H| = 1 then
        C ← H
    else
        (H_1, H_2) ← SplitInHalves(H)
        C_1 ← IntersectHalfplanes(H_1)
        C_2 ← IntersectHalfplanes(H_2)
        C ← IntersectConvexRegions(C_1, C_2)
    return C

Intersect convex regions

IntersectConvexRegions(C_1, C_2) can be implemented using a sweep-line method:
consider the left and right boundaries of C_1 and C_2
move the sweep line ℓ from top to bottom and maintain the edges it currently crosses (at most 4)
The vertices of C_1 ∪ C_2 define the events. We process every event in O(1) time, depending on the type of the edges incident to the event vertex.

Theorem 1: The intersection of two convex polygonal regions in the plane with n_1 + n_2 = n vertices can be computed in O(n) time.
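To see what such an intersection routine has to produce, here is a deliberately simpler substitute: instead of the O(n) sweep-line merge, the sketch below clips a large bounding square against one half-plane at a time (Sutherland–Hodgman clipping, O(n) per half-plane, hence O(n²) overall rather than O(n log n)). The function names and the bound M are assumptions for illustration:

```python
def clip(poly, a, b, c, eps=1e-9):
    """Clip a convex polygon (CCW vertex list) against half-plane a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a * p[0] + b * p[1] - c      # signed violation at p
        fq = a * q[0] + b * q[1] - c      # signed violation at q
        if fp <= eps:
            out.append(p)                 # p is inside: keep it
        if (fp < -eps and fq > eps) or (fp > eps and fq < -eps):
            t = fp / (fp - fq)            # edge p-q crosses the boundary line
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def intersect_halfplanes(halfplanes, M=1e6):
    """Intersect half-planes (a, b, c), i.e. a*x + b*y <= c, inside a big box."""
    poly = [(-M, -M), (M, -M), (M, M), (-M, M)]  # bounding square
    for (a, b, c) in halfplanes:
        poly = clip(poly, a, b, c)
        if not poly:
            return []                     # intersection is empty: infeasible
    return poly
```

For example, clipping against x ≤ 1, x ≥ 0, y ≤ 1, y ≥ 0 returns the unit square.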

Running time of IntersectHalfplanes(H)

Task: What is the running time of IntersectHalfplanes(H)?

Recurrence:
T(n) = O(1)            if n = 1
T(n) = O(n) + 2T(n/2)  if n > 1

By the Master theorem, T(n) ∈ O(n log n).

The feasible region ⋂H can therefore be found in O(n log n) time. Since ⋂H has O(n) vertices, the vertex p that maximizes c^T p can then be found in O(n log n) total time.

Bounded LPs

Idea: Instead of computing the whole feasible region and then searching for the optimal vertex, do this incrementally.

Invariant: The current best solution is a unique vertex of the current feasible polygon.

How to deal with unbounded feasible regions? Define two half-planes for a big enough value M:

m_1 = { x ≤ M  if c_x > 0;  x ≥ -M  otherwise }
m_2 = { y ≤ M  if c_y > 0;  y ≥ -M  otherwise }

When the optimal point is not unique, select the lexicographically smallest one!

Consider an LP (H, c) with H = {h_1, ..., h_n} and c = (c_x, c_y). We denote the first i constraints by H_i = {m_1, m_2, h_1, ..., h_i}, and the feasible polygon defined by them by C_i = m_1 ∩ m_2 ∩ h_1 ∩ ... ∩ h_i.

Properties

Each region C_i has a single optimal vertex v_i, and it holds that C_0 ⊇ C_1 ⊇ ... ⊇ C_n = C.

How does the optimal vertex v_{i-1} change when the half-plane h_i is added?

Lemma 1: For 1 ≤ i ≤ n and the bounding line ℓ_i of h_i it holds that:
(i) if v_{i-1} ∈ h_i, then v_i = v_{i-1};
(ii) otherwise, either C_i = ∅ or v_i ∈ ℓ_i.

[Figure: adding h_5 leaves the optimum unchanged (v_4 = v_5); adding h_6 cuts off v_5, and the new optimum v_6 lies on ℓ_6]

One-dimensional LP

In case (ii) of Lemma 1, we search for the best point on the segment ℓ_i ∩ C_{i-1}:
parametrize ℓ_i as y = ax + b
define the new objective function f_c^i(x) = c^T (x, ax + b)
for j ≤ i-1, let σ_x(ℓ_j, ℓ_i) denote the x-coordinate of ℓ_j ∩ ℓ_i

This gives the following one-dimensional LP:

maximize    f_c^i(x) = c_x·x + c_y·(ax + b)
subject to  x ≤ σ_x(ℓ_j, ℓ_i)  if ℓ_i ∩ h_j is bounded to the right
            x ≥ σ_x(ℓ_j, ℓ_i)  if ℓ_i ∩ h_j is bounded to the left

Lemma 2: A one-dimensional LP can be solved in linear time. In particular, in case (ii) one can compute the new vertex v_i, or decide that C_i = ∅, in O(i) time.
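Lemma 2's linear-time bound is easy to see in code: each one-dimensional constraint a·x ≤ b either caps x from above (a > 0), caps it from below (a < 0), or is trivially satisfiable or unsatisfiable (a = 0). One scan over the constraints suffices. A sketch (the names are mine, not from the lecture):

```python
import math

def solve_1d_lp(c, constraints):
    """Maximize c*x subject to a*x <= b for each (a, b) in constraints.
    One linear scan: O(n) time (Lemma 2).
    Returns the optimal x, None if infeasible, or math.inf if unbounded."""
    lo, hi = -math.inf, math.inf
    for a, b in constraints:
        if a > 0:
            hi = min(hi, b / a)       # x bounded from the right
        elif a < 0:
            lo = max(lo, b / a)       # x bounded from the left
        elif b < 0:
            return None               # 0*x <= b with b < 0: infeasible
    if lo > hi:
        return None                   # empty interval
    if c > 0:
        return hi                     # math.inf means the LP is unbounded
    if c < 0:
        return lo
    return max(lo, min(hi, 0.0))      # c == 0: any feasible point will do
```

For instance, maximizing x subject to x ≤ 5 and -x ≤ 2 returns 5, while the contradictory pair x ≤ -1, x ≥ 1 is reported as infeasible.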

Incremental Algorithm

2dBoundedLP(H, c, m_1, m_2):
    C_0 ← m_1 ∩ m_2
    v_0 ← the unique vertex of C_0
    for i ← 1 to n do
        if v_{i-1} ∈ h_i then
            v_i ← v_{i-1}                           (O(1) time)
        else
            v_i ← 1dBoundedLP(σ(H_{i-1}), f_c^i)    (O(i) time)
            if v_i = nil then
                return infeasible
    return v_n

Worst-case running time: T(n) = Σ_{i=1}^{n} O(i) = O(n²)

Lemma 3: Algorithm 2dBoundedLP needs Θ(n²) time in the worst case to solve an LP with n constraints and 2 variables.

What else can we do?

Observation: It is not the half-planes H that force the high running time, but the order in which we consider them.

[Figure: constraints h_1, ..., h_6 added in an unlucky order force a new vertex v_2, v_3, ..., v_6 at every single step]

How to find (quickly) a good ordering? Randomly!

Randomized incremental algorithm

2dRandomizedBoundedLP(H, c, m_1, m_2):
    C_0 ← m_1 ∩ m_2
    v_0 ← the unique vertex of C_0
    H ← RandomPermutation(H)
    for i ← 1 to n do
        if v_{i-1} ∈ h_i then
            v_i ← v_{i-1}
        else
            v_i ← 1dBoundedLP(σ(H_{i-1}), f_c^i)
            if v_i = nil then
                return infeasible
    return v_n
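Putting the pieces together, here is a compact Python sketch of the randomized incremental algorithm. Half-planes are triples (a, b, c) meaning a·x + b·y ≤ c, and the artificial constraints m_1, m_2 come from a box parameter M. The sketch omits the lexicographic tie-breaking rule and assumes the objective is never exactly parallel to a constraint boundary, so it is an illustration under those assumptions, not the full algorithm:

```python
import math
import random

def maximize_1d(c, cons):
    """Maximize c*t subject to a*t <= b for (a, b) in cons; None if infeasible."""
    lo, hi = -math.inf, math.inf
    for a, b in cons:
        if a > 1e-12:
            hi = min(hi, b / a)
        elif a < -1e-12:
            lo = max(lo, b / a)
        elif b < -1e-9:
            return None
    if lo > hi:
        return None
    return hi if c >= 0 else lo

def lp2d(halfplanes, cx, cy, M=1e7):
    """Maximize cx*x + cy*y s.t. a*x + b*y <= c for (a, b, c) in halfplanes,
    restricted to the box |x|, |y| <= M (the artificial constraints m1, m2)."""
    m1 = (1.0, 0.0, M) if cx > 0 else (-1.0, 0.0, M)
    m2 = (0.0, 1.0, M) if cy > 0 else (0.0, -1.0, M)
    seen = [m1, m2]
    v = (m1[0] * M, m2[1] * M)          # unique optimal vertex of m1 ∩ m2
    H = list(halfplanes)
    random.shuffle(H)                   # the random insertion order
    for a, b, c in H:
        if a * v[0] + b * v[1] <= c + 1e-9:
            seen.append((a, b, c))      # Lemma 1 (i): old vertex stays optimal
            continue
        # Lemma 1 (ii): the new optimum lies on the line a*x + b*y = c.
        if abs(b) >= abs(a):
            p0, d = (0.0, c / b), (1.0, -a / b)
        else:
            p0, d = (c / a, 0.0), (-b / a, 1.0)
        ct = cx * d[0] + cy * d[1]      # objective along the line
        cons = [(aj * d[0] + bj * d[1], cj - aj * p0[0] - bj * p0[1])
                for aj, bj, cj in seen]
        t = maximize_1d(ct, cons)
        if t is None:
            return None                 # the LP is infeasible
        v = (p0[0] + t * d[0], p0[1] + t * d[1])
        seen.append((a, b, c))
    return v
```

Running it on the profit-optimization example from the first slide returns (110, 40) regardless of the shuffled insertion order, since the optimal vertex there is unique.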

Random permutation

RandomPermutation(A):
    Input: array A[1...n]
    Output: array A, rearranged into a random permutation
    for k ← n downto 2 do
        r ← Random(k)        (random number between 1 and k)
        exchange A[r] and A[k]

Observation: The running time of 2dRandomizedBoundedLP depends on the random permutation computed by RandomPermutation. In the following we compute the expected running time.

Theorem 2: A two-dimensional LP with n constraints can be solved in O(n) randomized expected time.
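RandomPermutation is the classic Fisher–Yates shuffle; in 0-based Python it reads (a sketch mirroring the pseudocode above):

```python
import random

def random_permutation(a):
    """Rearrange list a in place into a uniformly random permutation.
    0-based version of the pseudocode: k runs from n-1 down to 1."""
    for k in range(len(a) - 1, 0, -1):
        r = random.randint(0, k)   # uniform index in {0, ..., k}
        a[r], a[k] = a[k], a[r]    # exchange A[r] and A[k]
    return a
```

Each of the n! orderings arises with probability 1/n!, because the element placed at position k is chosen uniformly among the k+1 remaining candidates.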

Unbounded LPs

Until now: artificial constraints m_1 and m_2 to bound C.
Next: recognize and deal with an unbounded LP.

Definition: An LP (H, c) is called unbounded if there exists a ray ρ = {p + λd | λ > 0} in C = ⋂H such that the value of the objective function f_c becomes arbitrarily large along ρ. It must then hold that:
⟨d, c⟩ > 0
⟨d, η(h)⟩ ≥ 0 for all h ∈ H,
where η(h) is the normal vector of h oriented towards the feasible side of h.

Characterization

Lemma 4: An LP (H, c) is unbounded if and only if there is a vector d ∈ R² such that
⟨d, c⟩ > 0,
⟨d, η(h)⟩ ≥ 0 for all h ∈ H, and
the LP (H', c) with H' = {h ∈ H | ⟨d, η(h)⟩ = 0} is feasible.

Test whether (H, c) is unbounded with a one-dimensional LP:

Step 1: Rotate the coordinate system so that c = (0, 1), and normalize every vector d with ⟨d, c⟩ > 0 as d = (d_x, 1). For a normal vector η(h) = (η_x, η_y) it should then hold that ⟨d, η(h)⟩ = d_x η_x + η_y ≥ 0. Let H̄ = {d_x η_x + η_y ≥ 0 | h ∈ H}, and check whether this one-dimensional LP H̄ is feasible.
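Step 1 is itself a one-dimensional feasibility problem in d_x and can be solved by a single scan, exactly as in Lemma 2. A sketch, assuming the coordinate system has already been rotated so that c = (0, 1) and that each η(h) = (nx, ny) is the inward normal of h:

```python
import math

def unbounded_direction(normals):
    """Given inward normals (nx, ny) of the constraints, with the objective
    already rotated to c = (0, 1), look for d = (dx, 1) satisfying
    dx*nx + ny >= 0 for every constraint (Step 1 of the test).
    Returns a feasible dx, or None if the 1D system H-bar is infeasible."""
    lo, hi = -math.inf, math.inf
    for nx, ny in normals:
        if nx > 0:
            lo = max(lo, -ny / nx)   # constraint reads dx >= -ny/nx
        elif nx < 0:
            hi = min(hi, -ny / nx)   # constraint reads dx <= -ny/nx
        elif ny < 0:
            return None              # 0*dx + ny >= 0 fails outright
    if lo > hi:
        return None                  # empty interval: (H, c) is bounded
    return lo if lo > -math.inf else min(hi, 0.0)
```

For example, the quadrant x ≥ 0, y ≥ 0 (inward normals (1, 0) and (0, 1)) admits d_x = 0, i.e. the LP is unbounded straight up along c, while a constraint y ≤ 5 (inward normal (0, -1)) immediately rules out any unbounded direction.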

Test for unboundedness

Step 2: If Step 1 returns a feasible solution d_x*, compute H' = {h ∈ H | d_x* η_x(h) + η_y(h) = 0}. The normals of H' are orthogonal to d = (d_x*, 1), so the lines bounding the half-planes of H' are parallel to d. Intersecting these bounding lines with the x-axis again gives a one-dimensional LP.

If both steps yield a feasible solution, the LP (H, c) is unbounded and we can construct the ray ρ.
If the LP H' in Step 2 is infeasible, then so is the original LP (H, c).
If the LP H̄ of Step 1 is infeasible, then by Lemma 4 the LP (H, c) is bounded.

Certificates of boundedness

Observation: When the LP H̄ = {d_x η_x + η_y ≥ 0 | h ∈ H} of Step 1 is infeasible, we can use this information further!

The 1d-LP H̄ is infeasible if and only if the interval [d_x^left, d_x^right] = ∅. Let h_1 and h_2 be the half-planes corresponding to d_x^left and d_x^right. There is no vector d that satisfies both h_1 and h_2, so the LP ({h_1, h_2}, c) is already bounded: h_1 and h_2 are certificates of boundedness. Use h_1 and h_2 in 2dRandomizedBoundedLP in place of m_1 and m_2.

Algorithm

2dRandomizedLP(H, c):
    search for a vector d with ⟨d, c⟩ > 0 and ⟨d, η(h)⟩ ≥ 0 for all h ∈ H
    if d exists then
        H' ← {h ∈ H | ⟨d, η(h)⟩ = 0}
        if H' feasible then
            return (ray ρ, unbounded)
        else
            return infeasible
    else
        (h_1, h_2) ← certificates for the boundedness of (H, c)
        H ← H \ {h_1, h_2}
        return 2dRandomizedBoundedLP(H, c, h_1, h_2)

Theorem 3: A two-dimensional LP with n constraints can be solved in O(n) randomized expected time.

Discussion

Can the two-dimensional algorithm be generalized to more dimensions? Yes! In the same way that the two-dimensional LP is solved incrementally by reduction to a one-dimensional LP, a d-dimensional LP can be solved by a randomized incremental algorithm with a reduction to a (d-1)-dimensional LP. The expected running time is then O(c^d · d! · n) for a constant c. The algorithm is therefore useful only for small values of d.

The simple randomized incremental algorithm for two and more dimensions given in this lecture is due to Seidel (1991).