Optimization methods NOPT048


1 Optimization methods NOPT048 Jirka Fink fink/ Department of Theoretical Computer Science and Mathematical Logic Faculty of Mathematics and Physics Charles University in Prague Summer semester 2017/18 Last change on May 22, 2018 License: Creative Commons BY-NC-SA 4.0 Jirka Fink Optimization methods 1

2 Jirka Fink: Optimization methods Plan of the lecture Linear and integer optimization Convex sets and Minkowski-Weyl theorem Simplex methods Duality of linear programming Ellipsoid method Unimodularity Minimal weight maximal matching Matroid Cut and bound method Jirka Fink Optimization methods 2

3 Jirka Fink: Optimization methods General information Homepage fink/ Consultations Individual schedule Examination Tutorial conditions Tests Theoretical homeworks Practical homeworks Pass the exam Jirka Fink Optimization methods 3

4 Jirka Fink: Optimization methods Literature A. Schrijver: Theory of linear and integer programming, John Wiley, 1986 W. J. Cook, W. H. Cunningham, W. R. Pulleyblank, A. Schrijver: Combinatorial Optimization, John Wiley, 1997 J. Matoušek, B. Gärtner: Understanding and using linear programming, Springer J. Matoušek: Introduction to Discrete Geometry. ITI Series, MFF UK, 2003 Jirka Fink Optimization methods 4

5 Outline 1 Linear programming 2 Linear, affine and convex sets 3 Convex polyhedron 4 Simplex method 5 Duality of linear programming 6 Ellipsoid method 7 Matching Jirka Fink Optimization methods 5

6 Example of linear programming: Optimized diet
Express the following problem using linear programming: find the cheapest vegetable salad made of carrots, white cabbage and cucumbers that contains the required amounts of vitamins A and C and of dietary fiber.
[Table: content of vitamin A (mg/kg), vitamin C (mg/kg) and dietary fiber (g/kg) and the price (EUR/kg) of carrot, white cabbage and cucumber, together with the required amount per meal]
Formulation using linear programming (x_1 carrot, x_2 white cabbage, x_3 cucumber)
Minimize 0.75x_1 + …x_2 + …x_3 (cost)
subject to 35x_1 + …x_2 + …x_3 ≥ … (vitamin A)
60x_1 + …x_2 + …x_3 ≥ 15 (vitamin C)
30x_1 + …x_2 + …x_3 ≥ 4 (dietary fiber)
x_1, x_2, x_3 ≥ 0
Jirka Fink Optimization methods 6

7 Matrix notation of the linear programming problem
Formulation using linear programming
Minimize 0.75x_1 + …x_2 + …x_3
subject to 35x_1 + …x_2 + …x_3 ≥ …
60x_1 + …x_2 + …x_3 ≥ 15
30x_1 + …x_2 + …x_3 ≥ 4
x_1, x_2, x_3 ≥ 0
Matrix notation
Minimize c^T x subject to Ax ≥ b and x ≥ 0, where x = (x_1, x_2, x_3)^T, c = (0.75, …, …)^T, the rows of A are the nutrient contents (35, …, …), (60, …, …), (30, …, …) and b = (…, 15, 4)^T.
Jirka Fink Optimization methods 7
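The matrix form can be checked numerically. The sketch below assumes SciPy is available; since most of the table's coefficients did not survive the transcription, the numbers here (except 0.75, 35, 60, 30, 15 and 4) are hypothetical stand-ins, so only the structure min c^T x subject to Ax ≥ b, x ≥ 0 matters.

```python
from scipy.optimize import linprog

c = [0.75, 0.5, 0.3]        # price per kg (0.5 and 0.3 are made up)
A = [[35, 5, 5],            # vitamin A per kg (second and third entries made up)
     [60, 300, 10],         # vitamin C per kg
     [30, 20, 10]]          # dietary fiber per kg
b = [5, 15, 4]              # required amounts per meal (first entry made up)

# linprog handles Ax <= b, so multiply the >= constraints by -1.
res = linprog(c, A_ub=[[-a for a in row] for row in A],
              b_ub=[-bi for bi in b], bounds=[(0, None)] * 3)
print(res.x, res.fun)       # cheapest mix and its cost
```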

8 Notation: Vector and matrix Scalar A scalar is a real number. Scalars are written as a, b, c, etc. Vector A vector is an n-tuple of real numbers. Vectors are written as c, x, y, etc. Usually, vectors are column matrices of type n × 1. Matrix A matrix of type m × n is a rectangular array of m rows and n columns of real numbers. Matrices are written as A, B, C, etc. Special vectors 0 and 1 are vectors of zeros and ones, respectively. Transpose The transpose of a matrix A is the matrix A^T created by reflecting A over its main diagonal. The transpose of a column vector x is the row vector x^T. Jirka Fink Optimization methods 8

9 Notation: Matrix product Elements of a vector and a matrix The i-th element of a vector x is denoted by x_i. The (i, j)-th element of a matrix A is denoted by A_{i,j}. The i-th row of a matrix A is denoted by A_{i,*}. The j-th column of a matrix A is denoted by A_{*,j}. Dot product of vectors The dot product (also called inner product or scalar product) of vectors x, y ∈ R^n is the scalar x^T y = Σ_{i=1}^n x_i y_i. Product of a matrix and a vector The product Ax of a matrix A ∈ R^{m×n} and a vector x ∈ R^n is the vector y ∈ R^m such that y_i = A_{i,*} x for all i = 1, ..., m. Product of two matrices The product AB of a matrix A ∈ R^{m×n} and a matrix B ∈ R^{n×k} is the matrix C ∈ R^{m×k} such that C_{*,j} = A B_{*,j} for all j = 1, ..., k. Jirka Fink Optimization methods 9
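These conventions map directly onto NumPy (assumed available here); a minimal sketch:

```python
import numpy as np

# Dot product x^T y = sum_i x_i y_i.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
dot = x @ y                         # 1*4 + 2*5 + 3*6 = 32

# Product of an m x n matrix A and a vector x in R^n: (Ax)_i = A_{i,*} x.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
Ax = A @ x                          # [1+0+6, 0+2+3] = [7, 5]

# Product of A (m x n) and B (n x k): column j of AB is A B_{*,j}.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
AB = A @ B
```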

10 Notation: System of linear equations and inequalities Equality and inequality of two vectors For vectors x, y ∈ R^n we denote x = y if x_i = y_i for every i = 1, ..., n, and x ≤ y if x_i ≤ y_i for every i = 1, ..., n. System of linear equations Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the formula Ax = b means a system of m linear equations where x is a vector of n real variables. System of linear inequalities Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the formula Ax ≤ b means a system of m linear inequalities where x is a vector of n real variables. Example: System of linear inequalities in two different notations 2x_1 + x_2 + x_3 ≤ … and 2x_1 + 5x_2 + 5x_3 ≤ …, written in matrix form as (2 1 1; 2 5 5)(x_1, x_2, x_3)^T ≤ (…, …)^T Jirka Fink Optimization methods 10

11 Optimization Mathematical optimization Mathematical optimization is the selection of a best element (with regard to some criterion) from some set of available alternatives. Examples Minimize x² + y² where (x, y) ∈ R² Maximal matching in a graph Minimal spanning tree Shortest path between two given vertices Optimization problem Given a set of solutions M and an objective function f : M → R, the optimization problem is to find a solution x ∈ M with the maximal (or minimal) objective value f(x) among all solutions of M. Duality between minimization and maximization If min_{x∈M} f(x) exists, then max_{x∈M} −f(x) also exists and min_{x∈M} f(x) = −max_{x∈M} −f(x). Jirka Fink Optimization methods 11

12 Linear Programming Linear programming problem A linear program is the problem of maximizing (or minimizing) a given linear function over the set of all vectors that satisfy a given system of linear equations and inequalities.
Equation form: min c^T x subject to Ax = b, x ≥ 0
Canonical form: max c^T x subject to Ax ≤ b
where c ∈ R^n, b ∈ R^m, A ∈ R^{m×n} and x ∈ R^n.
Conversion from the equation form to the canonical form: max −c^T x subject to Ax ≤ b, −Ax ≤ −b, −x ≤ 0
Conversion from the canonical form to the equation form: min −c^T x' + c^T x'' subject to Ax' − Ax'' + Ix''' = b, x', x'', x''' ≥ 0
Jirka Fink Optimization methods 12
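The canonical-to-equation conversion can be verified numerically. A sketch assuming SciPy is available, borrowing the small LP solved graphically later in the lecture (maximize x_1 + x_2, optimum 5 at (3, 2)):

```python
import numpy as np
from scipy.optimize import linprog

# Canonical form: max c^T x subject to Ax <= b (x free); linprog minimizes.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 6.0], [4.0, -1.0], [-1.0, -1.0]])
b = np.array([15.0, 10.0, -1.0])
res_c = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)

# Equation form: min -c^T x' + c^T x'' subject to
# A x' - A x'' + I x''' = b with x', x'', x''' >= 0 (x''' are slacks).
m, n = A.shape
A_eq = np.hstack([A, -A, np.eye(m)])
c_eq = np.concatenate([-c, c, np.zeros(m)])
res_e = linprog(c_eq, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n + m))
# Both formulations have the same optimal value (-5 for the minimization).
```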

13 Terminology Basic terminology Number of variables: n Number of constraints: m Solution: an arbitrary vector x of R^n Objective function: a function to be minimized or maximized, e.g. max c^T x Feasible solution: a solution satisfying all constraints, e.g. Ax ≤ b Optimal solution: a feasible solution maximizing c^T x Infeasible problem: a problem having no feasible solution Unbounded problem: a problem having feasible solutions with arbitrarily large values of the given objective function Polyhedron: a set of points x ∈ R^n satisfying Ax ≤ b for some A ∈ R^{m×n} and b ∈ R^m Polytope: a bounded polyhedron Jirka Fink Optimization methods 13

14 Example of linear programming: Network flow Network flow problem Given a directed graph (V, E) with capacities c ∈ R^E, a source s ∈ V and a sink t ∈ V, find the maximal flow from s to t satisfying the flow conservation and capacity constraints.
Formulation using linear programming
Variables: flow x_e for every edge e ∈ E
Capacity constraints: 0 ≤ x ≤ c
Flow conservation: Σ_{uv∈E} x_uv = Σ_{vw∈E} x_vw for every v ∈ V \ {s, t}
Objective function: maximize Σ_{sw∈E} x_sw − Σ_{us∈E} x_us
Matrix notation
Add an auxiliary edge ts with a sufficiently large capacity c_ts
Objective function: max x_ts
Flow conservation: Ax = 0 where A is the incidence matrix
Capacity constraints: x ≤ c and x ≥ 0
Jirka Fink Optimization methods 14
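The matrix formulation can be sketched on a small hypothetical digraph, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Edges of a small digraph; the last edge t->s is the auxiliary edge
# whose flow value equals the total s-t flow.
V = ['s', 'a', 'b', 't']
E = [('s', 'a'), ('s', 'b'), ('a', 't'), ('a', 'b'), ('b', 't'), ('t', 's')]
cap = [2, 2, 1, 1, 2, 100]        # c_ts is "sufficiently large"

# Incidence matrix: +1 where the edge leaves the vertex, -1 where it enters.
A = np.zeros((len(V), len(E)))
for j, (u, w) in enumerate(E):
    A[V.index(u), j] = 1
    A[V.index(w), j] = -1

c = np.zeros(len(E))
c[-1] = -1                        # max x_ts  ==  min -x_ts
res = linprog(c, A_eq=A, b_eq=np.zeros(len(V)),
              bounds=[(0, ci) for ci in cap])
print(-res.fun)                   # maximal flow value (3 on this instance)
```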

15 Graphical method: Set of feasible solutions Example Draw the set of all feasible solutions (x_1, x_2) satisfying the following conditions. x_1 + 6x_2 ≤ 15, 4x_1 − x_2 ≤ 10, x_1 + x_2 ≥ 1, x_1, x_2 ≥ 0 Solution [Figure: the feasible region bounded by the lines x_1 + 6x_2 = 15, 4x_1 − x_2 = 10, x_1 + x_2 = 1 and the coordinate axes x_1 ≥ 0, x_2 ≥ 0] Jirka Fink Optimization methods 15

16 Graphical method: Optimal solution Example Find the optimal solution of the following problem. Maximize x_1 + x_2 subject to x_1 + 6x_2 ≤ 15, 4x_1 − x_2 ≤ 10, x_1 + x_2 ≥ 1, x_1, x_2 ≥ 0 Solution [Figure: level lines c^T x = 0, 1, 2, 5 of the objective; the optimum c^T x = 5 is attained at the vertex (3, 2)] Jirka Fink Optimization methods 16

17 Graphical method: Multiple optimal solutions Example Find all optimal solutions of the following problem. Maximize (1/6)x_1 + x_2 subject to x_1 + 6x_2 ≤ …, 4x_1 − x_2 ≤ 10, x_1 + x_2 ≥ 1, x_1, x_2 ≥ 0 Solution [Figure: the objective vector (1/6, 1) is normal to the constraint x_1 + 6x_2 ≤ …, so the whole face where this constraint is tight is optimal, with c^T x = 10/3] Jirka Fink Optimization methods 17

18 Graphical method: Unbounded problem Example Show that the following problem is unbounded. Maximize x_1 + x_2 subject to x_1 + x_2 ≥ 1, x_1, x_2 ≥ 0 Solution [Figure: the feasible region is unbounded in the direction (1, 1), along which the objective grows arbitrarily] Jirka Fink Optimization methods 18

19 Graphical method: Infeasible problem Example Show that the following problem has no feasible solution. Maximize x_1 + x_2 subject to x_1 + x_2 ≤ −2, x_1, x_2 ≥ 0 Solution [Figure: the half-plane x_1 + x_2 ≤ −2 does not intersect the non-negative quadrant x_1 ≥ 0, x_2 ≥ 0] Jirka Fink Optimization methods 19

20 Related problems Integer linear programming The integer linear programming problem is the optimization problem to find x ∈ Z^n which maximizes c^T x and satisfies Ax ≤ b, where A ∈ R^{m×n} and b ∈ R^m. Mixed integer linear programming Some variables are integer and the others are real. Binary linear programming Every variable is either 0 or 1. Complexity A linear programming problem is efficiently solvable, both in theory and in practice. The classical algorithm for linear programming is the simplex method, which is fast in practice, but it is not known whether it always runs in polynomial time. Polynomial-time algorithms are the ellipsoid and interior point methods. No strongly polynomial-time algorithm for linear programming is known. Integer linear programming is NP-hard. Jirka Fink Optimization methods 20

21 Example of integer linear programming: Vertex cover Vertex cover problem Given an undirected graph (V, E), find the smallest set of vertices U ⊆ V covering every edge of E; that is, U ∩ e ≠ ∅ for every e ∈ E. Formulation using integer linear programming Variables: cover x_v ∈ {0, 1} for every vertex v ∈ V Covering: x_u + x_v ≥ 1 for every edge uv ∈ E Objective function: minimize Σ_{v∈V} x_v Matrix notation Variables: cover x ∈ {0, 1}^V (i.e. 0 ≤ x ≤ 1 and x ∈ Z^V) Covering: A^T x ≥ 1 where A is the incidence matrix Objective function: minimize 1^T x Jirka Fink Optimization methods 21
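Since every variable is 0 or 1, a tiny instance of this binary program can be solved by checking the covering constraints over all binary vectors; a sketch on a hypothetical 4-vertex graph:

```python
from itertools import product

# Brute force over x in {0,1}^V: minimize 1^T x subject to
# x_u + x_v >= 1 for every edge uv.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle with one chord

best = None
for x in product([0, 1], repeat=len(V)):
    if all(x[u] + x[v] >= 1 for u, v in E):    # x is a vertex cover
        if best is None or sum(x) < sum(best):
            best = x
print(best, sum(best))   # a minimum cover has size 2 here
```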

22 Relation between optimal integer and relaxed solutions A non-empty polyhedron may contain no integer solution. An integer feasible solution may not be obtainable by rounding a relaxed solution. [Figure: a polytope whose integral optimum lies far from the relaxed optimum in the direction of the objective c] Jirka Fink Optimization methods 22

23 Example: Ice cream production planning Problem description An ice cream manufacturer needs to plan the production of ice cream for the next year The estimated demand of ice cream for month i ∈ {1, ..., n} is d_i (in tons) The price for storing ice cream is a per ton and month Changing the production by 1 ton from month i−1 to month i costs b Produced ice cream cannot be stored longer than one month The total cost has to be minimized Jirka Fink Optimization methods 23

24 Example: Ice cream production planning Solution
1 Variable x_i determines the amount of ice cream produced in month i ∈ {0, ..., n}
2 Variable s_i determines the amount of ice cream stored from month i−1 to month i
3 The stored quantity is computed by s_i = s_{i−1} + x_i − d_i for every i ∈ {1, ..., n}
4 Durability is ensured by s_i ≤ d_i for all i ∈ {1, ..., n}
5 Non-negativity of the production and the storage: x, s ≥ 0
6 The objective function min b Σ_{i=1}^n |x_i − x_{i−1}| + a Σ_{i=1}^n s_i is non-linear
7 We introduce variables y_i for i ∈ {1, ..., n} to avoid the absolute value
8 Linear programming problem formulation Minimize b Σ_{i=1}^n y_i + a Σ_{i=1}^n s_i subject to s_{i−1} − s_i + x_i = d_i for i ∈ {1, ..., n}, s_i ≤ d_i for i ∈ {1, ..., n}, x_i − x_{i−1} − y_i ≤ 0 for i ∈ {1, ..., n}, −x_i + x_{i−1} − y_i ≤ 0 for i ∈ {1, ..., n}, x, s, y ≥ 0
9 We can bound the initial and final amounts of ice cream s_0 and s_n
10 and also bound the initial production x_0
Jirka Fink Optimization methods 24
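The LP in step 8 can be assembled mechanically. A sketch with hypothetical data (n = 2 months, demands 1 and 5, a = 1, b = 2), assuming SciPy is available and fixing x_0 = s_0 = 0:

```python
import numpy as np
from scipy.optimize import linprog

n, d, a, b = 2, [1.0, 5.0], 1.0, 2.0

# Variable vector z = (x_1..x_n, s_1..s_n, y_1..y_n).
c = np.concatenate([np.zeros(n), a * np.ones(n), b * np.ones(n)])

A_eq = np.zeros((n, 3 * n)); b_eq = np.array(d)
A_ub = np.zeros((2 * n, 3 * n)); b_ub = np.zeros(2 * n)
for i in range(n):                       # i = 0 encodes month 1
    A_eq[i, i] = 1                       # + x_i
    A_eq[i, n + i] = -1                  # - s_i
    if i > 0:
        A_eq[i, n + i - 1] = 1           # + s_{i-1}
    # |x_i - x_{i-1}| <= y_i as two inequalities (x_0 = 0 implicitly)
    A_ub[2 * i, i] = 1; A_ub[2 * i + 1, i] = -1
    if i > 0:
        A_ub[2 * i, i - 1] = -1; A_ub[2 * i + 1, i - 1] = 1
    A_ub[2 * i, 2 * n + i] = -1; A_ub[2 * i + 1, 2 * n + i] = -1

bounds = [(0, None)] * n + [(0, di) for di in d] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)   # minimal total cost
```

On this instance the optimum stores one ton ahead of the demand spike (x = (2, 4), s_1 = 1) for a total cost of 9.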

25 Finding shortest paths from a vertex s in an oriented graph Shortest path problem Given an oriented graph (V, E) with edge lengths c ∈ Z^E and a starting vertex s, find the length of a shortest path from s to every vertex.
Linear programming problem Maximize Σ_{u∈V} x_u subject to x_v − x_u ≤ c_uv for every edge uv, and x_s = 0
Proof (an optimal solution x of the LP gives the distance x_u from s to u for every u ∈ V)
1 Let y_u be the length of a shortest path from s to u
2 It holds that y ≥ x: let P be the set of edges on a shortest path from s to z; then y_z = Σ_{uv∈P} c_uv ≥ Σ_{uv∈P} (x_v − x_u) = x_z − x_s = x_z
3 It holds that y = x: for the sake of contradiction assume that y ≠ x; then y ≥ x and y ≠ x, so Σ_{u∈V} y_u > Σ_{u∈V} x_u; but y is a feasible solution and x is an optimal solution
Jirka Fink Optimization methods 25
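A sketch of this LP on a small digraph (hypothetical edge lengths; SciPy assumed). Note that the variables must be left free, and maximization becomes minimization of the negated objective:

```python
from scipy.optimize import linprog

# Edges (u, v, c_uv); vertex 0 is the source s.
edges = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (2, 3, 1), (1, 3, 5)]
n = 4

# max sum_u x_u  subject to  x_v - x_u <= c_uv  and  x_s = 0.
c = [-1.0] * n                                  # linprog minimizes
A_ub, b_ub = [], []
for u, v, w in edges:
    row = [0.0] * n
    row[v], row[u] = 1.0, -1.0
    A_ub.append(row); b_ub.append(float(w))
A_eq, b_eq = [[1.0] + [0.0] * (n - 1)], [0.0]   # x_s = 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n)        # variables are free
print(res.x)   # distances from s: [0, 1, 3, 4]
```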

26 Outline 1 Linear programming 2 Linear, affine and convex sets 3 Convex polyhedron 4 Simplex method 5 Duality of linear programming 6 Ellipsoid method 7 Matching Jirka Fink Optimization methods 26

27 Linear space Definition: Linear (vector) space A set (V, +, ·) is called a linear (vector) space over a field T if + : V × V → V, i.e. V is closed under addition · : T × V → V, i.e. V is closed under multiplication by scalars from T (V, +) is an Abelian group For every x ∈ V it holds that 1 · x = x where 1 ∈ T For every a, b ∈ T and every x ∈ V it holds that (ab) · x = a · (b · x) For every a, b ∈ T and every x ∈ V it holds that (a + b) · x = a · x + b · x For every a ∈ T and every x, y ∈ V it holds that a · (x + y) = a · x + a · y Observation If V is a linear space and L ⊆ V, then L is a linear space if and only if 0 ∈ L, x + y ∈ L for every x, y ∈ L, and αx ∈ L for every x ∈ L and α ∈ T. Jirka Fink Optimization methods 27

28 Linear and affine spaces in R^n Observation A non-empty set V ⊆ R^n is a linear space if and only if αx + βy ∈ V for all α, β ∈ R and x, y ∈ V. Definition If V ⊆ R^n is a linear space and a ∈ R^n is a vector, then V + a is called an affine space, where V + a = {x + a; x ∈ V}. Basic observations If L ⊆ R^n is an affine space, then L + x is an affine space for every x ∈ R^n. If L ⊆ R^n is an affine space, then L − x is a linear space for every x ∈ L. 1 If L ⊆ R^n is an affine space, then L − x = L − y for every x, y ∈ L. 2 An affine space L ⊆ R^n is linear if and only if L contains the origin 0. 3 System of linear equations The set of all solutions of Ax = 0 is a linear space, and every linear space is the set of all solutions of Ax = 0 for some A. 4 The set of all solutions of Ax = b is an affine space, and every affine space is the set of all solutions of Ax = b for some A and b, assuming Ax = b is consistent. 5 Jirka Fink Optimization methods 28

29 1 By definition, L = V + a for some linear space V and some vector a ∈ R^n. Observe that L − x = V + (a − x) and we prove that V + (a − x) = V, which implies that L − x is a linear space. There exists y ∈ V such that x = y + a. Hence, a − x = a − (y + a) = −y ∈ V. Since V is closed under addition, it follows that V + (a − x) ⊆ V. Similarly, V − (a − x) ⊆ V, which implies that V ⊆ V + (a − x). Hence, V = V + (a − x) and the statement follows. 2 We proved that L = V + a for some linear space V ⊆ R^n and some vector a ∈ R^n, and L − x = V + (a − x) = V for every x ∈ L. So, L − x = V = L − y. 3 Every linear space must contain the origin by definition. For the opposite implication, we set x = 0 and apply the previous statement. 4 If V is a linear space, then we can obtain the rows of A from a basis of the orthogonal complement of V. 5 If L is an affine space, then L = V + a for some linear space V and some vector a, and there exists a matrix A such that V = {x; Ax = 0}. Hence, V + a = {x + a; Ax = 0} = {y; Ay − Aa = 0} = {y; Ay = b} where we substitute y = x + a and set b = Aa. If L = {x; Ax = b} is non-empty, then let y be an arbitrary point of L. Furthermore, L − y = {x − y; Ax = b} = {z; Az + Ay = b} = {z; Az = 0} is a linear space since Ay = b. Jirka Fink Optimization methods 28

30 Convex set Observation (Exercise) A set S ⊆ R^n is an affine space if and only if S contains the whole line through every two points of S. Definition A set S ⊆ R^n is convex if S contains the whole segment between every two points of S. Example [Figure: a convex set containing the segment between points u and v, and a non-convex set in which the segment between points a and b leaves the set] Jirka Fink Optimization methods 29

31 Linear, affine and convex hulls Observation The intersection of linear spaces is also a linear space. 1 The non-empty intersection of affine spaces is an affine space. 2 The intersection of convex sets is also a convex set. 3 Definition Let S ⊆ R^n be a non-empty set. The linear hull span(S) of S is the intersection of all linear spaces containing S. The affine hull aff(S) of S is the intersection of all affine spaces containing S. The convex hull conv(S) of S is the intersection of all convex sets containing S. Observation Let S ⊆ R^n be a non-empty set. A set S is linear if and only if S = span(S). 4 A set S is affine if and only if S = aff(S). 5 A set S is convex if and only if S = conv(S). 6 span(S) = aff(S ∪ {0}) Jirka Fink Optimization methods 30

32 1 Use the definition and logic. 2 Let L_i be an affine space for every i in an index set I, let L = ∩_{i∈I} L_i and a ∈ L. We proved that L − a = ∩_{i∈I} (L_i − a) is a linear space, which implies that L is an affine space. 3 Use the definition and logic. 4 Similar to the convex version. 5 Similar to the convex version. 6 We proved that conv(S) is convex, so if S = conv(S), then S is convex. In order to prove that S = conv(S) if S is convex, we observe that conv(S) ⊆ S since conv(S) = ∩_{M⊇S, M convex} M and S is included in this intersection. Similarly, conv(S) ⊇ S since every M in the intersection contains S. Jirka Fink Optimization methods 30

33 Linear, affine and convex combinations Definition Let v_1, ..., v_k be vectors of R^n where k is a positive integer. The sum Σ_{i=1}^k α_i v_i is called a linear combination if α_1, ..., α_k ∈ R. The sum Σ_{i=1}^k α_i v_i is called an affine combination if α_1, ..., α_k ∈ R and Σ_{i=1}^k α_i = 1. The sum Σ_{i=1}^k α_i v_i is called a convex combination if α_1, ..., α_k ≥ 0 and Σ_{i=1}^k α_i = 1. Lemma Let S ⊆ R^n be a non-empty set. The set of all linear combinations of S is a linear space. 1 The set of all affine combinations of S is an affine space. 2 The set of all convex combinations of S is a convex set. 3 Lemma A linear space S contains all linear combinations of S. 4 An affine space S contains all affine combinations of S. 5 A convex set S contains all convex combinations of S. 6 Jirka Fink Optimization methods 31

34 1 We have to verify that the set of all linear combinations is closed under addition and under multiplication by scalars. In order to verify the closure under multiplication, let Σ_{i=1}^k α_i v_i be a linear combination of S and c ∈ R be a scalar. Then, c Σ_{i=1}^k α_i v_i = Σ_{i=1}^k (cα_i) v_i is a linear combination of S. Similarly, the set of all linear combinations is closed under addition, and it contains the origin. 2 Similar to the convex version: show that the set contains the whole line through every pair of its points. 3 Let Σ_{i=1}^k α_i u_i and Σ_{j=1}^l β_j v_j be two convex combinations of S. In order to prove that the set of all convex combinations of S contains the line segment between Σ_{i=1}^k α_i u_i and Σ_{j=1}^l β_j v_j, let us consider γ_1, γ_2 ≥ 0 such that γ_1 + γ_2 = 1. Then, γ_1 Σ_{i=1}^k α_i u_i + γ_2 Σ_{j=1}^l β_j v_j = Σ_{i=1}^k (γ_1 α_i) u_i + Σ_{j=1}^l (γ_2 β_j) v_j is a convex combination of S since (γ_1 α_i), (γ_2 β_j) ≥ 0 and Σ_{i=1}^k (γ_1 α_i) + Σ_{j=1}^l (γ_2 β_j) = 1. 4 Similar to the convex version. 5 Let Σ_{i=1}^k α_i v_i be an affine combination of S. Since S − v_k is a linear space, the linear combination Σ_{i=1}^k α_i (v_i − v_k) of S − v_k belongs to S − v_k. Hence, v_k + Σ_{i=1}^k α_i (v_i − v_k) = Σ_{i=1}^k α_i v_i belongs to S, using Σ_{i=1}^k α_i = 1. 6 We prove by induction on k that S contains every convex combination Σ_{i=1}^k α_i v_i of S. The statement holds for k ≤ 2 by the definition of a convex set. Let Σ_{i=1}^k α_i v_i be a convex combination of k vectors of S and assume that α_k < 1; otherwise α_1 = ⋯ = α_{k−1} = 0, so Σ_{i=1}^k α_i v_i = v_k ∈ S. Hence, Σ_{i=1}^k α_i v_i = (1 − α_k) Σ_{i=1}^{k−1} (α_i / (1 − α_k)) v_i + α_k v_k = (1 − α_k) y + α_k v_k, where we observe that y := Σ_{i=1}^{k−1} (α_i / (1 − α_k)) v_i is a convex combination of k − 1 vectors of S, which by induction belongs to S. Furthermore, (1 − α_k) y + α_k v_k is a convex combination of the two points y and v_k of S, which belongs to S by the definition of a convex set. Jirka Fink Optimization methods 31

36 Linear, affine and convex combinations Theorem Let S R n be a non-empty set. The linear hull of a set S is the set of all linear combinations of S. 1 The affine hull of a set S is the set of all affine combinations of S. 2 The convex hull of a set S is the set of all convex combinations of S. 3 Jirka Fink Optimization methods 32

37 1 Similar to the convex version. 2 Similar to the convex version. 3 Let T be the set of all convex combinations of S. First, we prove that conv(S) ⊆ T. The definition states that conv(S) = ∩_{M⊇S, M convex} M, and we proved that T is a convex set containing S, so T is included in this intersection, which implies that conv(S) is a subset of T. In order to prove conv(S) ⊇ T, we again consider the intersection conv(S) = ∩_{M⊇S, M convex} M. We proved that a convex set M contains all convex combinations of M, which implies that if M ⊇ S, then M also contains all convex combinations of S. So, in this intersection every M contains T, which implies that conv(S) ⊇ T. Jirka Fink Optimization methods 32

38 Independence and base Definition A set of vectors S ⊆ R^n is linearly independent if no vector of S is a linear combination of the other vectors of S. A set of vectors S ⊆ R^n is affinely independent if no vector of S is an affine combination of the other vectors of S. Observation (Exercise) Vectors v_1, ..., v_k ∈ R^n are linearly dependent if and only if there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i v_i = 0. Vectors v_1, ..., v_k ∈ R^n are affinely dependent if and only if there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i v_i = 0 and Σ_{i=1}^k α_i = 0. Observation Vectors v_0, ..., v_k ∈ R^n are affinely independent if and only if the vectors v_1 − v_0, ..., v_k − v_0 are linearly independent. 1 Vectors v_1, ..., v_k ∈ R^n are linearly independent if and only if the vectors 0, v_1, ..., v_k are affinely independent. 2 Jirka Fink Optimization methods 33
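The first observation gives a practical test, assuming NumPy is available: v_0, ..., v_k are affinely independent exactly when the matrix of differences v_i − v_0 has rank k.

```python
import numpy as np

# v_0, ..., v_k are affinely independent iff v_1 - v_0, ..., v_k - v_0
# are linearly independent, i.e. the difference matrix has rank k.
def affinely_independent(points):
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]
    return np.linalg.matrix_rank(diffs) == len(pts) - 1

# Three corners of a triangle in R^2 are affinely independent ...
print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # True
# ... but three collinear points are not.
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # False
```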

39 1 If the vectors v_1 − v_0, ..., v_k − v_0 are linearly dependent, then there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i (v_i − v_0) = 0. In this case, 0 = Σ_{i=1}^k α_i (v_i − v_0) = Σ_{i=1}^k α_i v_i − (Σ_{i=1}^k α_i) v_0 = Σ_{i=0}^k α_i v_i is a non-trivial affine combination with Σ_{i=0}^k α_i = 0, where α_0 = −Σ_{i=1}^k α_i. If v_0, ..., v_k ∈ R^n are affinely dependent, then there exists a non-trivial combination α_0, ..., α_k ∈ R such that Σ_{i=0}^k α_i v_i = 0 and Σ_{i=0}^k α_i = 0. In this case, 0 = Σ_{i=0}^k α_i v_i = α_0 v_0 + Σ_{i=1}^k α_i v_i = Σ_{i=1}^k α_i (v_i − v_0) is a non-trivial linear combination of the vectors v_1 − v_0, ..., v_k − v_0, using α_0 = −Σ_{i=1}^k α_i. 2 Use the previous observation with v_0 = 0. Jirka Fink Optimization methods 33

40 Basis Definition Let B ⊆ R^n and S ⊆ R^n. B is a base of a linear space S if B is linearly independent and span(B) = S. B is a base of an affine space S if B is affinely independent and aff(B) = S. Observation All linear bases of a linear space have the same cardinality. All affine bases of an affine space have the same cardinality. 1 Observation Let S be a linear space and B ⊆ S \ {0}. Then, B is a linear base of S if and only if B ∪ {0} is an affine base of S. Definition The dimension of a linear space is the cardinality of its linear base. The dimension of an affine space is the cardinality of its affine base minus one. The dimension dim(S) of a set S ⊆ R^n is the dimension of the affine hull of S. Jirka Fink Optimization methods 34

41 1 For the sake of contradiction, let a_1, ..., a_k and b_1, ..., b_l be two bases of an affine space L = V + x, where V is a linear space and l > k. Then, a_1 − x, ..., a_k − x and b_1 − x, ..., b_l − x are two linearly independent sets of vectors of V. Hence, there exists i such that a_1 − x, ..., a_k − x, b_i − x are linearly independent, so a_1, ..., a_k, b_i are affinely independent. Therefore, b_i cannot be obtained as an affine combination of a_1, ..., a_k, and b_i ∉ aff(a_1, ..., a_k), which contradicts the assumption that a_1, ..., a_k is a basis of L. Jirka Fink Optimization methods 34

42 Carathéodory Theorem (Carathéodory) Let S ⊆ R^n. Every point of conv(S) is a convex combination of affinely independent points of S. 1 Corollary Let S ⊆ R^n be a set of dimension d. Then, every point of conv(S) is a convex combination of at most d + 1 points of S. Jirka Fink Optimization methods 35
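The proof below is constructive and can be turned into a reduction procedure; a sketch assuming NumPy (the helper caratheodory and its tolerances are ours, not the lecture's):

```python
import numpy as np

# While the points carrying x are affinely dependent, find a dependency
# (beta with sum beta_i v_i = 0 and sum beta_i = 0) and shift the
# weights along it until one weight vanishes, then drop that point.
def caratheodory(points, alphas):
    pts = [np.asarray(p, float) for p in points]
    a = list(map(float, alphas))
    while True:
        M = np.vstack([np.array(pts).T, np.ones(len(pts))])
        if np.linalg.matrix_rank(M) == len(pts):    # affinely independent
            return pts, a
        beta = np.linalg.svd(M)[2][-1]              # a null vector of M
        if beta.max() <= 0:
            beta = -beta
        t = min(a[i] / beta[i] for i in range(len(pts)) if beta[i] > 1e-9)
        a = [ai - t * bi for ai, bi in zip(a, beta)]
        j = int(np.argmin(a))                       # some weight is now ~0
        pts.pop(j); a.pop(j)

# The centre of a square needs at most 3 of its 4 corners.
pts, a = caratheodory([[0, 0], [1, 0], [1, 1], [0, 1]], [0.25] * 4)
```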

43 1 Let x conv(s). Let x = k i=1 α ix i be a convex combination of points of S with the smallest k. If x 1,..., x k are affinely dependent, then there exists a combination 0 = β i x i such that β i = 0 and β 0. Since this combination is non-trivial, there exists j such that β j > 0 and α j β j x = i j γ i x i i j γ i = 1 γ i 0 for all i j which contradicts the minimality of k. is minimal. Let γ i = α i α j β i. Observe that β j Jirka Fink Optimization methods 35

44 Outline 1 Linear programming 2 Linear, affine and convex sets 3 Convex polyhedron 4 Simplex method 5 Duality of linear programming 6 Ellipsoid method 7 Matching Jirka Fink Optimization methods 36

45 System of linear equations and inequalities Definition A hyperplane is a set {x ∈ R^n; a^T x = b} where a ∈ R^n \ {0} and b ∈ R. A half-space is a set {x ∈ R^n; a^T x ≤ b} where a ∈ R^n \ {0} and b ∈ R. A polyhedron is an intersection of finitely many half-spaces. A polytope is a bounded polyhedron. Observation For every a ∈ R^n and b ∈ R, the set of all x ∈ R^n satisfying a^T x ≤ b is convex. Corollary Every polyhedron {x; Ax ≤ b} is convex. Jirka Fink Optimization methods 37

46 Mathematical analysis Definition A set S ⊆ R^n is closed if S contains the limit of every converging sequence of points of S. A set S ⊆ R^n is bounded if there exists b ∈ R such that ‖x‖ < b for every x ∈ S. A set S ⊆ R^n is compact if every sequence of points of S contains a subsequence converging to a point of S. Theorem A set S ⊆ R^n is compact if and only if S is closed and bounded. Theorem If f : S → R is a continuous function on a compact set S ⊆ R^n, then S contains a point x maximizing f over S; that is, f(x) ≥ f(y) for every y ∈ S. Definition The infimum of a set S ⊆ R is inf(S) = max {b ∈ R; b ≤ x for all x ∈ S}. The supremum of a set S ⊆ R is sup(S) = min {b ∈ R; b ≥ x for all x ∈ S}. inf(∅) = ∞ and sup(∅) = −∞; inf(S) = −∞ if S has no lower bound Jirka Fink Optimization methods 38

47 Hyperplane separation theorem Theorem (strict version) Let C, D ⊆ R^n be non-empty, closed, convex and disjoint sets and let C be bounded. Then, there exists a hyperplane a^T x = b which strictly separates C and D; that is, C ⊆ {x; a^T x < b} and D ⊆ {x; a^T x > b}. Example [Figure: a bounded convex set C in the half-space a^T x < b and a convex set D in the half-space a^T x > b] Jirka Fink Optimization methods 39

48 Hyperplane separation theorem Theorem (strict version) Let C, D ⊆ R^n be non-empty, closed, convex and disjoint sets and let C be bounded. Then, there exists a hyperplane a^T x = b which strictly separates C and D; that is, C ⊆ {x; a^T x < b} and D ⊆ {x; a^T x > b}. Proof (overview)
1 Find c ∈ C and d ∈ D with minimal distance ‖d − c‖.
1 Let m = inf {‖d − c‖; c ∈ C, d ∈ D}.
2 For every n ∈ N there exist c^n ∈ C and d^n ∈ D such that ‖d^n − c^n‖ ≤ m + 1/n.
3 Since C is compact, there exists a subsequence {c^{k_n}}_{n=1}^∞ converging to c ∈ C.
4 There exists z ∈ R such that for every n ∈ N the distance ‖d^n − c‖ is at most z. 1
5 Since the set D ∩ {x ∈ R^n; ‖x − c‖ ≤ z} is compact, the sequence {d^{k_n}}_{n=1}^∞ has a subsequence {d^{l_n}}_{n=1}^∞ converging to d ∈ D.
6 Observe that the distance ‖d − c‖ is m. 2
2 The required hyperplane is a^T x = b where a = d − c and b = (a^T c + a^T d)/2.
1 We prove that a^T c' ≤ a^T c < b < a^T d ≤ a^T d' for every c' ∈ C and d' ∈ D. 3
2 Since C is convex, y = c + α(c' − c) ∈ C for every 0 ≤ α ≤ 1.
3 From the minimality of the distance ‖d − c‖ it follows that ‖d − y‖² ≥ ‖d − c‖².
4 Using elementary operations, observe that 0 ≤ α²‖c' − c‖² + 2α(a^T c − a^T c'), 4
5 which holds for arbitrarily small α > 0; it follows that a^T c' ≤ a^T c.
Jirka Fink Optimization methods 40

49 1 ‖d^n − c‖ ≤ ‖d^n − c^n‖ + ‖c^n − c‖ ≤ (m + 1) + max {‖c' − c''‖; c', c'' ∈ C} = z 2 ‖d − c‖ ≤ ‖d − d^{l_n}‖ + ‖d^{l_n} − c^{l_n}‖ + ‖c^{l_n} − c‖ → m 3 The inner two inequalities are obvious. We only prove the first inequality since the last one is analogous. 4 ‖d − y‖² ≥ ‖d − c‖² ⇔ (d − c − α(c' − c))^T (d − c − α(c' − c)) ≥ (d − c)^T (d − c) ⇔ α² (c' − c)^T (c' − c) − 2α (d − c)^T (c' − c) ≥ 0 ⇔ 0 ≤ α² ‖c' − c‖² + 2α (a^T c − a^T c') Jirka Fink Optimization methods 40

50 Faces of a polyhedron Definition Let P be a polyhedron. A hyperplane α^T x = β is called a supporting hyperplane of P if the inequality α^T x ≤ β holds for every x ∈ P and the hyperplane α^T x = β has a non-empty intersection with P. The set of points in the intersection P ∩ {x; α^T x = β} is called a face of P. By convention, the empty set and P are also faces, and the other faces are proper faces. 1 Definition Let P be a d-dimensional polyhedron. A 0-dimensional face of P is called a vertex of P. A 1-dimensional face of P is called an edge of P. A (d−1)-dimensional face of P is called a facet of P. Jirka Fink Optimization methods 41

51 1 Observe that every face of a polyhedron is also a polyhedron. Jirka Fink Optimization methods 41

52 Minimal defining system of a polyhedron Definition P = {x ∈ R^n; A'x = b', A''x ≤ b''} is a minimal defining system of a polyhedron P if no condition can be removed and no inequality can be replaced by an equality without changing the polyhedron P. Observation Every polyhedron has a minimal defining system. Lemma Let P = {x ∈ R^n; A'x = b', A''x ≤ b''} be a minimal defining system of a polyhedron P. Let P' = {x ∈ P; A''_{i,*} x = b''_i} for some row i of A''x ≤ b''. Then dim(P') < dim(P). 1 Corollary Let P = {x; Ax ≤ b} be of dimension d. Then for every row i, either P ∩ {x; A_{i,*} x = b_i} = P, or P ∩ {x; A_{i,*} x = b_i} = ∅, or P ∩ {x; A_{i,*} x = b_i} is a proper face of dimension at most d − 1. Jirka Fink Optimization methods 42

53 1 There exists x P \ P. Since aff(p ) { x; A i, x = b } i, it follows that x / aff(p ). Hence, dim(p ) + 1 = dim(p {x}) dim(p). Jirka Fink Optimization methods 42

54 A point inside a polyhedron Theorem Let P be a non-empty polyhedron defined by a minimal system {x ∈ R^n; A'x = b', A''x ≤ b''}. Then, 1 there exists a point z ∈ P such that A''z < b'', and 2 dim(P) = n − rank(A'), and 3 z does not belong to any proper face of P. Proof
1 There exists a point z ∈ P such that A''z < b''.
1 For every row i of A''x ≤ b'' there exists z^i ∈ P such that A''_{i,*} z^i < b''_i.
2 Let z = (1/m) Σ_{i=1}^m z^i be the center of gravity.
3 Since z is a convex combination of points of P, the point z belongs to P and A''z < b''.
2 dim(P) = n − rank(A')
1 Let L be the affine space defined by A'x = b'.
2 There exists ε > 0 such that P contains the whole ball B = {x ∈ L; ‖x − z‖ ≤ ε}.
3 Vectors of a base of the linear space L − z can be scaled so that they belong to B − z.
4 dim(L) ≥ dim(P) ≥ dim(B) = dim(L) = n − rank(A').
3 The point z does not belong to any proper face of P.
1 The point z cannot belong to any proper face of P because a supporting hyperplane of such a face would split the ball B.
Jirka Fink Optimization methods 43

55 A bijection between faces and inequalities Theorem Let P = {x ∈ R^n; A'x = b', A''x ≤ b''} be a minimal defining system of a polyhedron P. Then, there exists a bijection between the facets of P and the inequalities of A''x ≤ b''. Proof 1 Let R_i = {x; A''_{i,*} x = b''_i} and F_i = P ∩ R_i. 2 From minimality it follows that R_i is a supporting hyperplane, and therefore F_i is a face. 3 There exists a point y^i ∈ F_i satisfying A''_{j,*} y^i < b''_j for all j ≠ i. 1 4 So dim(F_i) = dim(P) − 1 and F_i is a facet. 5 Furthermore, y^i ∉ F_j for all j ≠ i, so F_i ≠ F_j for j ≠ i. 6 For contradiction, let F be another facet. 7 There exists an index i such that F ⊆ F_i. 2 8 F is a proper face of F_i and so its dimension is at most dim(P) − 2, contradicting the assumption that F is a facet. Jirka Fink Optimization methods 44

56 1 From minimality it follows that there exists x satisfying all conditions of P except A''_{i,*} x ≤ b''_i. Let z be the point from the previous theorem. A point y^i can be obtained as a convex combination of x and z. 2 Otherwise, for every i there would exist y^i ∈ F with A''_{i,*} y^i < b''_i, and the point (1/m) Σ_{i=1}^m y^i ∈ F would satisfy all inequalities strictly, contradicting the assumption that F is a proper face. Jirka Fink Optimization methods 44

57 Fully dimensional polyhedron Definition A polyhedron P ⊆ R^n is full-dimensional if dim(P) = n. Corollary If P is a full-dimensional polyhedron, then P has exactly one minimal defining system, up to multiplying conditions by constants. 1 Jirka Fink Optimization methods 45

58 1 An affine space of dimension n − 1 is determined by a unique condition (up to scaling). Jirka Fink Optimization methods 45

59 Minkowski-Weyl Theorem (Minkowski-Weyl) A set S ⊆ R^n is a polytope if and only if there exists a finite set V ⊆ R^n such that S = conv(V). Illustration [Figure: a pentagon described both as {x; Ax ≤ b} with five inequalities A_{1,*}x ≤ b_1, ..., A_{5,*}x ≤ b_5 and as conv({v_1, ..., v_5})] Jirka Fink Optimization methods 46

60 Minkowski-Weyl Theorem (Minkowski-Weyl) A set S ⊆ R^n is a polytope if and only if there exists a finite set V ⊆ R^n such that S = conv(V). Proof of the implication ⇒ (main steps), by induction on dim(S) For dim(S) = 0 the set S has a single point and the statement holds. Assume that dim(S) > 0.
1 Let S = {x ∈ R^n; A'x = b', A''x ≤ b''} be a minimal defining system.
2 Let S_i = {x ∈ S; A''_{i,*} x = b''_i} where i is a row of A''x ≤ b''.
3 Since dim(S_i) < dim(S), there exists a finite set V_i ⊆ R^n such that S_i = conv(V_i).
4 Let V = ∪_i V_i. We prove that conv(V) = S. The inclusion conv(V) ⊆ S follows from V_i ⊆ S_i ⊆ S and the convexity of S. Let x ∈ S and let L be a line containing x. S ∩ L is a line segment with endpoints u and v. There exist rows i and j such that A''_{i,*} u = b''_i and A''_{j,*} v = b''_j. Since u ∈ S_i and v ∈ S_j, the points u and v are convex combinations of V_i and V_j, respectively. Since x is also a convex combination of u and v, we have x ∈ conv(V).
Jirka Fink Optimization methods 47

61 Minkowski-Weyl Theorem (Minkowski-Weyl) A set S ⊆ Rⁿ is a polytope if and only if there exists a finite set V ⊆ Rⁿ such that S = conv(V). Lemma A condition α^T v ≤ β is satisfied by all points v ∈ V if and only if the condition is satisfied by all points v ∈ conv(V). Proof of the implication ⇐ (main steps) 1 Let Q = { (α, β); α ∈ Rⁿ, β ∈ R, −1 ≤ α ≤ 1, −1 ≤ β ≤ 1, α^T v ≤ β for all v ∈ V }. 1 2 Since Q is a polytope, there exists a finite set W ⊆ R^{n+1} s.t. Q = conv(W). 2 3 Let Y = { x ∈ Rⁿ; α^T x ≤ β for all (α, β) ∈ W } and we prove that conv(V) = Y. ⊆: From V ⊆ Y it follows that conv(V) ⊆ Y. 3 ⊇: We prove that x ∉ conv(V) ⇒ x ∉ Y. There exist α ∈ Rⁿ, β ∈ R s.t. α^T x > β and α^T v ≤ β for all v ∈ V. 4 Assume that −1 ≤ α ≤ 1 and −1 ≤ β ≤ 1. 5 Observe that (α, β) ∈ Q and x fails the condition (α, β). Hence, x fails at least one condition of W. 6 Jirka Fink Optimization methods 48

62 1 Observe that α^T v ≤ β means the same as (v^T, −1)(α, β)^T ≤ 0. Therefore, Q is described by |V| + 2n + 2 inequalities. Furthermore, the conditions −1 ≤ α ≤ 1 and −1 ≤ β ≤ 1 imply that Q is bounded. 2 Here we use the implication of the Minkowski-Weyl theorem which we already proved. 3 Every point of V satisfies all conditions of Q since Q contains only conditions satisfied by all points of V. Since W ⊆ conv(W) = Q, it follows that every point of V satisfies all conditions of W. Hence, V ⊆ Y. Since Y is convex, the inclusion conv(V) ⊆ Y follows. 4 Apply the hyperplane separation theorem on the sets conv(V) and {x}. 5 Scale the vector (α, β) so that it fits into this box. 6 Use the lemma. Jirka Fink Optimization methods 48

63 Faces Theorem Let P be a polyhedron and V its set of vertices. Then, x is a vertex of P if and only if x ∉ conv(P \ {x}). Furthermore, if P is bounded, then P = conv(V). Proof (only for bounded polyhedra) Let V_0 be an (inclusion-)minimal set such that P = conv(V_0). Let V_e = {x ∈ P; x ∉ conv(P \ {x})}. We prove that V = V_e = V_0. 1 Jirka Fink Optimization methods 49

64 V ⊆ V_e: Let z ∈ V be a vertex. By definition, there exists a supporting hyperplane c^T x = t such that P ∩ { x; c^T x = t } = {z}. Since c^T x < t for all x ∈ P \ {z}, it follows that z ∈ V_e. V_e ⊆ V_0: Let z ∈ V_e. Since conv(P \ {z}) ≠ P, it follows that z ∈ V_0. V_0 ⊆ V: Let z ∈ V_0 and D = conv(V_0 \ {z}). From the Minkowski-Weyl theorem it follows that V_0 is finite and therefore D is compact. By the separation theorem, there exists a hyperplane c^T x = r separating {z} and D, that is, c^T x < r < c^T z for all x ∈ D. Let t = c^T z. Hence, A = { x; c^T x = t } is a supporting hyperplane of P. We prove that A ∩ P = {z}. For contradiction, let z′ ∈ P ∩ A be different from z. Then, there exists a convex combination z′ = α_1 x_1 + ... + α_k x_k + α_0 z of points of V_0. From z′ ≠ z it follows that α_0 < 1 and α_i > 0 for some i ≥ 1. Since c^T z = t, c^T x_i < t and c^T x_j ≤ t for all j, it holds that c^T z′ < t, which contradicts the assumption that z′ ∈ A. Jirka Fink Optimization methods 49

65 Faces Theorem (A face of a face is a face) Let F be a face of a polyhedron P and let E ⊆ F. Then, E is a face of F if and only if E is a face of P. Observation (Exercise) The intersection of two faces of a polyhedron P is a face of P. Observation (Exercise) A non-empty set F ⊆ Rⁿ is a face of a polyhedron P = {x ∈ Rⁿ; Ax ≤ b} if and only if F is the set of all optimal solutions of a linear programming problem min { c^T x; Ax ≤ b } for some vector c ∈ Rⁿ. Jirka Fink Optimization methods 50

66 Outline 1 Linear programming 2 Linear, affine and convex sets 3 Convex polyhedron 4 Simplex method 5 Duality of linear programming 6 Ellipsoid method 7 Matching Jirka Fink Optimization methods 51

67 Notation Notation used in the Simplex method Linear programming problem in the equation form is the problem to find x ∈ Rⁿ which maximizes c^T x and satisfies Ax = b and x ≥ 0, where A ∈ R^{m×n} and b ∈ R^m. We assume that the rows of A are linearly independent. For a subset B ⊆ {1,..., n}, let A_B be the matrix consisting of the columns of A whose indices belong to B. Similarly for vectors, x_B denotes the coordinates of x whose indices belong to B. The set N = {1,..., n} \ B denotes the remaining columns. Example Consider B = {2, 4}. Then, N = {1, 3, 5}, A_B consists of the second and fourth columns of A, and A_N of the remaining ones. Note that Ax = A_B x_B + A_N x_N. For x^T = (3, 4, 6, 2, 7) we have x_B^T = (4, 2) and x_N^T = (3, 6, 7). Jirka Fink Optimization methods 52

68 Basic feasible solutions Definitions Consider the equation form Ax = b and x ≥ 0 with n variables and rank(A) = m rows. A set of columns B is a basis if A_B is a regular matrix. The basic solution x corresponding to a basis B is given by x_N = 0 and x_B = A_B⁻¹ b. A basic solution satisfying x ≥ 0 is called a basic feasible solution. x_B are called basic variables and x_N are called non-basic variables. 1 Lemma A feasible solution x is basic if and only if the columns of the matrix A_K are linearly independent, where K = {j ∈ {1,..., n}; x_j > 0}. Observation Basic feasible solutions are exactly the vertices of the polyhedron P = {x; Ax = b, x ≥ 0}. 2 3 Jirka Fink Optimization methods 53

69 1 Remember that non-basic variables are always equal to zero. 2 If x is a basic feasible solution and B is the corresponding basis, then x N = 0 and so K B which implies that columns of A K are also linearly independent. If columns of A K are linearly independent, then we can extend K into B by adding columns of A so that columns of A B are linearly independent which implies that B is a basis of x. 3 Note that basic variables can also be zero. In this case, the basis B corresponding to a basic solution x may not be unique since there may be many ways to extend K into a basis B. This is called degeneracy. Jirka Fink Optimization methods 53
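The definition can be checked numerically: pick a basis B, solve A_B x_B = b, and set x_N = 0. A small NumPy sketch on the equation system of the worked example that follows (indices are 0-based here, unlike the 1-based slides):

```python
import numpy as np

# Equation form Ax = b of the worked example (n = 5 variables, m = 3 rows):
# -x1 + x2 + x3 = 1,  x1 + x4 = 3,  x2 + x5 = 2.
A = np.array([[-1., 1., 1., 0., 0.],
              [ 1., 0., 0., 1., 0.],
              [ 0., 1., 0., 0., 1.]])
b = np.array([1., 3., 2.])

B = [2, 3, 4]                            # basis {3, 4, 5} in 0-based indices
x = np.zeros(A.shape[1])
x[B] = np.linalg.solve(A[:, B], b)       # x_B = A_B^{-1} b, x_N stays 0

print(x)                                 # the basic solution
print("feasible" if (x >= 0).all() else "infeasible")
```

Here A_B is the identity, so the basic solution is x = (0, 0, 1, 3, 2) and it is feasible.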

70 Example: Initial simplex tableau Canonical form Maximize x1 + x2 subject to −x1 + x2 ≤ 1, x1 ≤ 3, x2 ≤ 2, x1, x2 ≥ 0 Equation form Maximize x1 + x2 subject to −x1 + x2 + x3 = 1, x1 + x4 = 3, x2 + x5 = 2, x1, x2, x3, x4, x5 ≥ 0 Simplex tableau x3 = 1 + x1 − x2, x4 = 3 − x1, x5 = 2 − x2, z = x1 + x2 Jirka Fink Optimization methods 54

71 Example: Initial simplex tableau Simplex tableau x3 = 1 + x1 − x2, x4 = 3 − x1, x5 = 2 − x2, z = x1 + x2 Initial basic feasible solution B = {3, 4, 5}, N = {1, 2}, x = (0, 0, 1, 3, 2) Pivot Two edges from the vertex (0, 0, 1, 3, 2): 1 (t, 0, 1 + t, 3 − t, 2) when x1 is increased by t 2 (0, r, 1 − r, 3, 2 − r) when x2 is increased by r These edges give feasible solutions for: 1 t ≤ 3 since x3 = 1 + t ≥ 0, x4 = 3 − t ≥ 0 and x5 = 2 ≥ 0 2 r ≤ 1 since x3 = 1 − r ≥ 0, x4 = 3 ≥ 0 and x5 = 2 − r ≥ 0 In both cases, the objective function is increasing. We choose x2 as a pivot. Jirka Fink Optimization methods 55

72 Example: Pivot step Simplex tableau x3 = 1 + x1 − x2, x4 = 3 − x1, x5 = 2 − x2, z = x1 + x2 Basis Original basis B = {3, 4, 5}. x2 enters the basis (by our choice). (0, r, 1 − r, 3, 2 − r) is feasible for r ≤ 1 since x3 = 1 − r ≥ 0. Therefore, x3 leaves the basis. New basis B = {2, 4, 5} New simplex tableau x2 = 1 + x1 − x3, x4 = 3 − x1, x5 = 1 − x1 + x3, z = 1 + 2x1 − x3 Jirka Fink Optimization methods 56

73 Example: Next step Simplex tableau x2 = 1 + x1 − x3, x4 = 3 − x1, x5 = 1 − x1 + x3, z = 1 + 2x1 − x3 Next pivot Basis B = {2, 4, 5} with the basic feasible solution (0, 1, 0, 3, 1). This vertex has two incident edges but only one increases the objective function. The edge increasing the objective function is (t, 1 + t, 0, 3 − t, 1 − t). Feasible solutions satisfy x2 = 1 + t ≥ 0, x4 = 3 − t ≥ 0 and x5 = 1 − t ≥ 0. Therefore, x1 enters the basis and x5 leaves the basis. New simplex tableau x1 = 1 + x3 − x5, x2 = 2 − x5, x4 = 2 − x3 + x5, z = 3 + x3 − 2x5 Jirka Fink Optimization methods 57

74 Example: Last step Simplex tableau x1 = 1 + x3 − x5, x2 = 2 − x5, x4 = 2 − x3 + x5, z = 3 + x3 − 2x5 Next pivot Basis B = {1, 2, 4} with the basic feasible solution (1, 2, 0, 2, 0). This vertex has two incident edges but only one increases the objective function. The edge increasing the objective function is (1 + t, 2, t, 2 − t, 0). Feasible solutions satisfy x1 = 1 + t ≥ 0, x2 = 2 ≥ 0 and x4 = 2 − t ≥ 0. Therefore, x3 enters the basis and x4 leaves the basis. New simplex tableau x1 = 3 − x4, x2 = 2 − x5, x3 = 2 − x4 + x5, z = 5 − x4 − x5 Jirka Fink Optimization methods 58

75 Example: Optimal solution Simplex tableau x1 = 3 − x4, x2 = 2 − x5, x3 = 2 − x4 + x5, z = 5 − x4 − x5 No other pivot Basis B = {1, 2, 3} with the basic feasible solution (3, 2, 2, 0, 0). This vertex has two incident edges but neither increases the objective function. We have an optimal solution. Why is this an optimal solution? Consider an arbitrary feasible solution ỹ. The value of the objective function is z̃ = 5 − ỹ4 − ỹ5. Since ỹ4, ỹ5 ≥ 0, the objective value is z̃ = 5 − ỹ4 − ỹ5 ≤ 5 = z. Jirka Fink Optimization methods 59
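The hand computation can be cross-checked with an off-the-shelf solver; a sketch with `scipy.optimize.linprog` (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# The example: maximize x1 + x2 subject to
# -x1 + x2 <= 1, x1 <= 3, x2 <= 2, x1, x2 >= 0.
res = linprog(c=[-1, -1],
              A_ub=[[-1, 1], [1, 0], [0, 1]],
              b_ub=[1, 3, 2],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal vertex (3, 2) with value 5
```

The solver reports the vertex (3, 2) and objective value 5, matching the final tableau.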

76 Simplex tableau in general Definition A simplex tableau determined by a feasible basis B is a system of m + 1 linear equations in variables x1,..., xn, and z that has the same set of solutions as the system Ax = b, z = c^T x, and in matrix notation looks as follows: x_B = p + Qx_N, z = z0 + r^T x_N where x_B is the vector of basic variables, x_N is the vector of non-basic variables, p ∈ R^m, r ∈ R^{n−m}, Q is an m × (n − m) matrix, and z0 ∈ R. Observation For each basis B there exists exactly one simplex tableau, and it is given by Q = −A_B⁻¹ A_N, p = A_B⁻¹ b, 1 z0 = c_B^T A_B⁻¹ b, r = c_N − (c_B^T A_B⁻¹ A_N)^T 2 Jirka Fink Optimization methods 60

77 1 Since the matrix A_B is regular, we can multiply the equation A_B x_B + A_N x_N = b by A_B⁻¹ to obtain x_B = A_B⁻¹ b − A_B⁻¹ A_N x_N, so Q = −A_B⁻¹ A_N and p = A_B⁻¹ b. 2 The objective function is c_B^T x_B + c_N^T x_N = c_B^T (A_B⁻¹ b − A_B⁻¹ A_N x_N) + c_N^T x_N = c_B^T A_B⁻¹ b + (c_N^T − c_B^T A_B⁻¹ A_N) x_N, so z0 = c_B^T A_B⁻¹ b and r = c_N − (c_B^T A_B⁻¹ A_N)^T. Jirka Fink Optimization methods 60
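The formulas Q = −A_B⁻¹A_N, p = A_B⁻¹b, z0 = c_B^T A_B⁻¹ b and r = c_N − (c_B^T A_B⁻¹ A_N)^T can be evaluated directly; a NumPy sketch for the worked example with the initial basis B = {3, 4, 5}:

```python
import numpy as np

# Worked example in equation form: maximize x1 + x2 subject to
# -x1 + x2 + x3 = 1,  x1 + x4 = 3,  x2 + x5 = 2,  x >= 0.
A = np.array([[-1., 1., 1., 0., 0.],
              [ 1., 0., 0., 1., 0.],
              [ 0., 1., 0., 0., 1.]])
b = np.array([1., 3., 2.])
c = np.array([1., 1., 0., 0., 0.])

B, N = [2, 3, 4], [0, 1]            # basis {3, 4, 5} in 0-based indices
AB_inv = np.linalg.inv(A[:, B])

Q = -AB_inv @ A[:, N]               # Q  = -A_B^{-1} A_N
p = AB_inv @ b                      # p  =  A_B^{-1} b
z0 = c[B] @ AB_inv @ b              # z0 =  c_B^T A_B^{-1} b
r = c[N] - c[B] @ AB_inv @ A[:, N]  # r  =  c_N - (c_B^T A_B^{-1} A_N)^T

print(p, z0, r)  # reproduces the initial tableau: p = [1 3 2], z0 = 0, r = [1 1]
```

These values spell out exactly the initial tableau x3 = 1 + x1 − x2, x4 = 3 − x1, x5 = 2 − x2, z = x1 + x2.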

78 Properties of a simplex tableau Simplex tableau in general x_B = p + Qx_N, z = z0 + r^T x_N Observation A basis B is feasible if and only if p ≥ 0. Observation The solution corresponding to a basis B is optimal if r ≤ 0. 1 Observation If a linear programming problem in the equation form is feasible and bounded, then it has an optimal basic solution. Jirka Fink Optimization methods 61

79 1 The opposite implication may not hold for a degenerate optimal basis. Jirka Fink Optimization methods 61

80 Pivot step Simplex tableau in general x_B = p + Qx_N, z = z0 + r^T x_N Find a pivot If r ≤ 0, then we have an optimal solution. Otherwise, choose an arbitrary entering variable x_v such that r_v > 0. If Q_⋆,v ≥ 0, then the corresponding edge is unbounded and the problem is also unbounded. 1 Otherwise, find a leaving variable x_u which limits the increment of the entering variable most strictly, i.e. Q_u,v < 0 and p_u / (−Q_u,v) is minimal. Update the simplex tableau Gaussian elimination: Express x_v from the row x_u = p_u + Q_u,⋆ x_N and substitute x_v using the obtained formula. Jirka Fink Optimization methods 62

81 1 Consider the following edge: x_v = t, the remaining nonbasic variables are 0, and x_B = p + Q_⋆,v t. All solutions on this edge are feasible for t ≥ 0 since x_B = p + Q_⋆,v t ≥ 0. The objective value is c^T x = z0 + r^T x_N = z0 + r_v t → ∞ as t → ∞, so the objective function is unbounded. Jirka Fink Optimization methods 62
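The ratio test for the leaving variable can be sketched as a small function (the name `choose_leaving` is illustrative); on the initial tableau of the worked example it reproduces the choices made earlier:

```python
import numpy as np

def choose_leaving(p, Q, v):
    """Ratio test on a tableau x_B = p + Q x_N: for entering column v,
    return the row index of the leaving variable, or None if the edge
    (and hence the problem) is unbounded."""
    col = Q[:, v]
    if (col >= 0).all():
        return None                       # x_v can grow forever
    rows = np.where(col < 0)[0]
    ratios = p[rows] / (-col[rows])       # how far x_v may increase per row
    return int(rows[np.argmin(ratios)])

# Initial tableau of the worked example (basis B = {3, 4, 5}):
# x3 = 1 + x1 - x2, x4 = 3 - x1, x5 = 2 - x2.
p = np.array([1., 3., 2.])
Q = np.array([[1., -1.], [-1., 0.], [0., -1.]])
print(choose_leaving(p, Q, 1))  # entering x2: row 0, i.e. x3 leaves
print(choose_leaving(p, Q, 0))  # entering x1: row 1, i.e. x4 leaves
```

Entering x2 is limited by the row of x3 (ratio 1 versus 2), exactly as in the example.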

82 Pivot rules Largest coefficient Choose an improving variable with the largest coefficient. Largest increase Choose an improving variable that leads to the largest absolute improvement in z. Steepest edge Choose an improving variable whose entering into the basis moves the current basic feasible solution in a direction closest to the direction of the vector c, i.e. maximize c^T (x_new − x_old) / ‖x_new − x_old‖. Bland's rule Choose an improving variable with the smallest index, and if there are several possibilities for the leaving variable, also take the one with the smallest index. Random edge Select the entering variable uniformly at random among all improving variables. Jirka Fink Optimization methods 63

83 Initial feasible basis Equation form Maximize c^T x such that Ax = b and x ≥ 0. Auxiliary linear program Multiply every row j with b_j < 0 by −1. 1 Introduce new variables y ∈ R^m and solve the auxiliary linear program: Maximize −1^T y such that Ax + Iy = b and x ≥ 0, y ≥ 0. The initial basis contains the variables y and the initial tableau is y = b − Ax, z = −1^T b + (1^T A)x Whenever a variable of y becomes nonbasic, it can be removed from the tableau. When all variables of y are removed, express the original objective function c^T x using nonbasic variables and solve the problem. Observation The original linear program has a feasible solution if and only if an optimal solution of the auxiliary linear program satisfies y = 0. Jirka Fink Optimization methods 64

84 1 Now, assume that b ≥ 0. Jirka Fink Optimization methods 64
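The phase-I idea can be sketched end-to-end with a generic LP solver: build the auxiliary program and check whether its optimum is 0. The helper `phase_one_feasible` is an illustration, not the lecture's code; it uses `scipy.optimize.linprog` in place of a hand-rolled simplex:

```python
import numpy as np
from scipy.optimize import linprog

def phase_one_feasible(A, b):
    """Phase I: minimize 1^T y subject to Ax + Iy = b, x >= 0, y >= 0
    (after flipping rows with b_j < 0). The original system Ax = b, x >= 0
    is feasible iff the optimum of this auxiliary program is 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    sign = np.where(b < 0, -1.0, 1.0)      # multiply rows with b_j < 0 by -1
    A, b = A * sign[:, None], b * sign
    A_eq = np.hstack([A, np.eye(m)])       # the columns of y form the initial basis
    cost = np.concatenate([np.zeros(n), np.ones(m)])
    res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))
    return res.status == 0 and abs(res.fun) < 1e-9

print(phase_one_feasible([[1, 1]], [2]))             # x1 + x2 = 2: feasible
print(phase_one_feasible([[1, 1], [1, 1]], [1, 2]))  # inconsistent: infeasible
```

Maximizing −1^T y, as on the slide, is the same as minimizing 1^T y, which matches the solver's convention.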

85 Complexity Degeneracy Different bases may correspond to the same solution. 1 The simplex method may loop forever between these bases. Bland's or lexicographic rules prevent visiting the same basis twice. The number of visited vertices The total number of vertices is finite since the number of bases is finite. The objective value of visited vertices is increasing, so every vertex is visited at most once. 2 The number of visited vertices may be exponential, e.g. on the Klee-Minty cube. 3 Practical linear programming problems in equation form with m equations typically need between 2m and 3m pivot steps to solve. Open problem Is there a pivot rule which guarantees a polynomial number of steps? Jirka Fink Optimization methods 65

86 1 For example, the apex of the 3-dimensional k-sided pyramid belongs to k facets, so there are (k choose 3) bases determining the apex. 2 Under degeneracy, the simplex method may stay in the same vertex; once the vertex is left, it is not visited again. 3 The Klee-Minty cube is a deformed n-dimensional cube with 2n facets and 2^n vertices. Dantzig's original pivot rule (largest coefficient) visits all vertices of this cube. Jirka Fink Optimization methods 65

87 Outline 1 Linear programming 2 Linear, affine and convex sets 3 Convex polyhedron 4 Simplex method 5 Duality of linear programming 6 Ellipsoid method 7 Matching Jirka Fink Optimization methods 66

88 Duality of linear programming: Example Find an upper bound for the following problem Maximize 2x1 + 3x2 subject to 4x1 + 8x2 ≤ 12, 2x1 + x2 ≤ 3, 3x1 + 2x2 ≤ 4, x1, x2 ≥ 0 Simple estimates 2x1 + 3x2 ≤ 4x1 + 8x2 ≤ 12 1 2x1 + 3x2 ≤ (1/2)(4x1 + 8x2) ≤ 6 2 2x1 + 3x2 = (1/3)(4x1 + 8x2 + 2x1 + x2) ≤ 5 3 What is the best combination of conditions? Every non-negative linear combination of inequalities which gives an inequality d1 x1 + d2 x2 ≤ h with d1 ≥ 2 and d2 ≥ 3 provides the upper bound 2x1 + 3x2 ≤ d1 x1 + d2 x2 ≤ h. Jirka Fink Optimization methods 67

89 1 The first condition 2 A half of the first condition 3 A third of the sum of the first and the second conditions Jirka Fink Optimization methods 67

90 Duality of linear programming: Example Consider a non-negative combination y of inequalities Maximize 2x1 + 3x2 subject to 4x1 + 8x2 ≤ 12 / y1, 2x1 + x2 ≤ 3 / y2, 3x1 + 2x2 ≤ 4 / y3, x1, x2 ≥ 0 Observations Every feasible solution x and non-negative combination y satisfies (4y1 + 2y2 + 3y3)x1 + (8y1 + y2 + 2y3)x2 ≤ 12y1 + 3y2 + 4y3. If 4y1 + 2y2 + 3y3 ≥ 2 and 8y1 + y2 + 2y3 ≥ 3, then 12y1 + 3y2 + 4y3 is an upper bound for the objective function. Dual program 1 Minimize 12y1 + 3y2 + 4y3 subject to 4y1 + 2y2 + 3y3 ≥ 2, 8y1 + y2 + 2y3 ≥ 3, y1, y2, y3 ≥ 0 Jirka Fink Optimization methods 68

91 1 The primal optimal solution is x^T = (1/2, 5/4) and the dual optimal solution is y^T = (5/16, 0, 1/4), both with the same objective value 19/4. Jirka Fink Optimization methods 68
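The primal and dual of this example can be solved side by side to watch the duality theorem in action; a `scipy.optimize.linprog` sketch (the dual constraints A^T y ≥ c are rewritten as −A^T y ≤ −c for the solver, and the primal objective is negated because the solver minimizes):

```python
from scipy.optimize import linprog

# Primal: maximize 2x1 + 3x2 subject to the three inequalities, x >= 0.
primal = linprog(c=[-2, -3],
                 A_ub=[[4, 8], [2, 1], [3, 2]], b_ub=[12, 3, 4],
                 bounds=[(0, None)] * 2)

# Dual: minimize 12y1 + 3y2 + 4y3 subject to A^T y >= c, y >= 0.
dual = linprog(c=[12, 3, 4],
               A_ub=[[-4, -2, -3], [-8, -1, -2]], b_ub=[-2, -3],
               bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)  # the two optimal values coincide (19/4)
```

Both runs report the common optimal value 19/4 = 4.75, as the duality theorem promises.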

92 Duality of linear programming: General Primal linear program Maximize c^T x subject to Ax ≤ b and x ≥ 0 Dual linear program Minimize b^T y subject to A^T y ≥ c and y ≥ 0 Weak duality theorem Every primal feasible solution x and dual feasible solution y satisfy c^T x ≤ b^T y. Corollary If one program is unbounded, then the other one is infeasible. Duality theorem Exactly one of the following possibilities occurs: 1 Neither primal nor dual has a feasible solution 2 Primal is unbounded and dual is infeasible 3 Primal is infeasible and dual is unbounded 4 There are feasible solutions x and y such that c^T x = b^T y Jirka Fink Optimization methods 69

93 Dualization Every linear programming problem has its dual, e.g. Maximize c^T x subject to Ax ≤ b and x ≥ 0 (primal program) Minimize b^T y subject to A^T y ≥ c and y ≥ 0 (dual program) A dual of the dual problem is the (original) primal problem: Minimize b^T y subject to A^T y ≥ c and y ≥ 0 (dual program) −Maximize −b^T y subject to −A^T y ≤ −c and y ≥ 0 (equivalent formulation) −Minimize −c^T x subject to −Ax ≥ −b and x ≥ 0 (dual of the dual program) Maximize c^T x subject to Ax ≤ b and x ≥ 0 (simplified formulation: the original primal program) Jirka Fink Optimization methods 70

94 Dualization: General rules Maximizing program ↔ Minimizing program: variables x1,..., xn ↔ y1,..., ym; matrix A ↔ A^T; right-hand side b ↔ c; objective function max c^T x ↔ min b^T y. Constraints: i-th constraint has ≤ ↔ y_i ≥ 0; i-th constraint has ≥ ↔ y_i ≤ 0; i-th constraint has = ↔ y_i ∈ R; x_j ≥ 0 ↔ j-th constraint has ≥; x_j ≤ 0 ↔ j-th constraint has ≤; x_j ∈ R ↔ j-th constraint has =. Jirka Fink Optimization methods 71

95 Linear programming: Feasibility versus optimality Feasibility versus optimality Finding a feasible solution of a linear program is computationally as difficult as finding an optimal solution. Using duality The optimal solutions of the linear programs Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0 Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0 are exactly the feasible solutions satisfying Ax ≤ b, A^T y ≥ c, c^T x ≥ b^T y, x, y ≥ 0. Jirka Fink Optimization methods 72
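This combined system can be handed to a solver with a zero objective, so optimality is obtained purely as feasibility; a sketch on the duality example above (matrix data as in that example):

```python
import numpy as np
from scipy.optimize import linprog

# Data of the earlier duality example: max c^T x s.t. Ax <= b, x >= 0.
A = np.array([[4., 8.], [2., 1.], [3., 2.]])
b = np.array([12., 3., 4.])
c = np.array([2., 3.])
m, n = A.shape

# Stack all conditions as M (x, y)^T <= q:
#   Ax <= b,   -A^T y <= -c,   b^T y - c^T x <= 0.
M = np.vstack([np.hstack([A, np.zeros((m, m))]),
               np.hstack([np.zeros((n, n)), -A.T]),
               np.concatenate([-c, b])[None, :]])
q = np.concatenate([b, -c, [0.]])

# Zero objective: we only ask the solver for a feasible point.
res = linprog(np.zeros(n + m), A_ub=M, b_ub=q, bounds=[(0, None)] * (n + m))
x, y = res.x[:n], res.x[n:]
print(c @ x, b @ y)  # both equal the common optimal value 4.75
```

Any feasible point of this system satisfies c^T x ≥ b^T y by construction and c^T x ≤ b^T y by weak duality, so equality holds and x, y are optimal for their respective programs.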


More information

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

Discrete Geometry. Problem 1. Austin Mohr. April 26, 2012

Discrete Geometry. Problem 1. Austin Mohr. April 26, 2012 Discrete Geometry Austin Mohr April 26, 2012 Problem 1 Theorem 1 (Linear Programming Duality). Suppose x, y, b, c R n and A R n n, Ax b, x 0, A T y c, and y 0. If x maximizes c T x and y minimizes b T

More information

1 Maximal Lattice-free Convex Sets

1 Maximal Lattice-free Convex Sets 47-831: Advanced Integer Programming Lecturer: Amitabh Basu Lecture 3 Date: 03/23/2010 In this lecture, we explore the connections between lattices of R n and convex sets in R n. The structures will prove

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

Math 5593 Linear Programming Week 1

Math 5593 Linear Programming Week 1 University of Colorado Denver, Fall 2013, Prof. Engau 1 Problem-Solving in Operations Research 2 Brief History of Linear Programming 3 Review of Basic Linear Algebra Linear Programming - The Story About

More information

Week 2. The Simplex method was developed by Dantzig in the late 40-ties.

Week 2. The Simplex method was developed by Dantzig in the late 40-ties. 1 The Simplex method Week 2 The Simplex method was developed by Dantzig in the late 40-ties. 1.1 The standard form The simplex method is a general description algorithm that solves any LPproblem instance.

More information

4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b

4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b 4.5 Simplex method LP in standard form: min z = c T x s.t. Ax = b x 0 George Dantzig (1914-2005) Examine a sequence of basic feasible solutions with non increasing objective function values until an optimal

More information

BBM402-Lecture 20: LP Duality

BBM402-Lecture 20: LP Duality BBM402-Lecture 20: LP Duality Lecturer: Lale Özkahya Resources for the presentation: https://courses.engr.illinois.edu/cs473/fa2016/lectures.html An easy LP? which is compact form for max cx subject to

More information

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method...

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method... Contents Introduction to Linear Programming Problem. 2. General Linear Programming problems.............. 2.2 Formulation of LP problems.................... 8.3 Compact form and Standard form of a general

More information

Summer School: Semidefinite Optimization

Summer School: Semidefinite Optimization Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory

More information

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can

More information

The Triangle Closure is a Polyhedron

The Triangle Closure is a Polyhedron The Triangle Closure is a Polyhedron Amitabh Basu Robert Hildebrand Matthias Köppe November 7, 21 Abstract Recently, cutting planes derived from maximal lattice-free convex sets have been studied intensively

More information

Lift-and-Project Inequalities

Lift-and-Project Inequalities Lift-and-Project Inequalities Q. Louveaux Abstract The lift-and-project technique is a systematic way to generate valid inequalities for a mixed binary program. The technique is interesting both on the

More information

LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP

LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP 1 / 23 Repetition the simplex algorithm: sequence of pivots starting

More information

Integer Hulls of Rational Polyhedra. Rekha R. Thomas

Integer Hulls of Rational Polyhedra. Rekha R. Thomas Integer Hulls of Rational Polyhedra Rekha R. Thomas Department of Mathematics, University of Washington, Seattle, WA 98195 E-mail address: thomas@math.washington.edu The author was supported in part by

More information

Lecture 9 Tuesday, 4/20/10. Linear Programming

Lecture 9 Tuesday, 4/20/10. Linear Programming UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2010 Lecture 9 Tuesday, 4/20/10 Linear Programming 1 Overview Motivation & Basics Standard & Slack Forms Formulating

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:

More information

Lecture 8. Strong Duality Results. September 22, 2008

Lecture 8. Strong Duality Results. September 22, 2008 Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation

More information

The Triangle Closure is a Polyhedron

The Triangle Closure is a Polyhedron The Triangle Closure is a Polyhedron Amitabh Basu Robert Hildebrand Matthias Köppe January 8, 23 Abstract Recently, cutting planes derived from maximal lattice-free convex sets have been studied intensively

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Homework Assignment 4 Solutions

Homework Assignment 4 Solutions MTAT.03.86: Advanced Methods in Algorithms Homework Assignment 4 Solutions University of Tartu 1 Probabilistic algorithm Let S = {x 1, x,, x n } be a set of binary variables of size n 1, x i {0, 1}. Consider

More information

Combinatorial Optimization

Combinatorial Optimization Combinatorial Optimization Lecture notes, WS 2010/11, TU Munich Prof. Dr. Raymond Hemmecke Version of February 9, 2011 Contents 1 The knapsack problem 1 1.1 Complete enumeration..................................

More information

Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point

Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point Gérard Cornuéjols 1 and Yanjun Li 2 1 Tepper School of Business, Carnegie Mellon

More information

Numerical Optimization

Numerical Optimization Linear Programming Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on min x s.t. Transportation Problem ij c ijx ij 3 j=1 x ij a i, i = 1, 2 2 i=1 x ij

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

Lattice closures of polyhedra

Lattice closures of polyhedra Lattice closures of polyhedra Sanjeeb Dash IBM Research sanjeebd@us.ibm.com Oktay Günlük IBM Research gunluk@us.ibm.com October 27, 2016 Diego A. Morán R. Universidad Adolfo Ibañez diego.moran@uai.cl Abstract

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

More information

Linear Programming. Jie Wang. University of Massachusetts Lowell Department of Computer Science. J. Wang (UMass Lowell) Linear Programming 1 / 47

Linear Programming. Jie Wang. University of Massachusetts Lowell Department of Computer Science. J. Wang (UMass Lowell) Linear Programming 1 / 47 Linear Programming Jie Wang University of Massachusetts Lowell Department of Computer Science J. Wang (UMass Lowell) Linear Programming 1 / 47 Linear function: f (x 1, x 2,..., x n ) = n a i x i, i=1 where

More information

Discrete Optimization 2010 Lecture 7 Introduction to Integer Programming

Discrete Optimization 2010 Lecture 7 Introduction to Integer Programming Discrete Optimization 2010 Lecture 7 Introduction to Integer Programming Marc Uetz University of Twente m.uetz@utwente.nl Lecture 8: sheet 1 / 32 Marc Uetz Discrete Optimization Outline 1 Intro: The Matching

More information

Linear Programming Inverse Projection Theory Chapter 3

Linear Programming Inverse Projection Theory Chapter 3 1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

Chapter 1 Linear Programming. Paragraph 5 Duality

Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

1 The linear algebra of linear programs (March 15 and 22, 2015)

1 The linear algebra of linear programs (March 15 and 22, 2015) 1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

Lecture #21. c T x Ax b. maximize subject to

Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG631,Linear and integer optimization with applications The simplex method: degeneracy; unbounded solutions; starting solutions; infeasibility; alternative optimal solutions Ann-Brith Strömberg

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

Lecture 6 - Convex Sets

Lecture 6 - Convex Sets Lecture 6 - Convex Sets Definition A set C R n is called convex if for any x, y C and λ [0, 1], the point λx + (1 λ)y belongs to C. The above definition is equivalent to saying that for any x, y C, the

More information

On the Rational Polytopes with Chvátal Rank 1

On the Rational Polytopes with Chvátal Rank 1 On the Rational Polytopes with Chvátal Rank 1 Gérard Cornuéjols Dabeen Lee Yanjun Li December 2016, revised May 2018 Abstract We study the following problem: given a rational polytope with Chvátal rank

More information

Outline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model

Outline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 and Mixed Integer Programg Marco Chiarandini 1. Resource Constrained Project Model 2. Mathematical Programg 2 Outline Outline 1. Resource Constrained

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

Computational Integer Programming Universidad de los Andes. Lecture 1. Dr. Ted Ralphs

Computational Integer Programming Universidad de los Andes. Lecture 1. Dr. Ted Ralphs Computational Integer Programming Universidad de los Andes Lecture 1 Dr. Ted Ralphs MIP Lecture 1 1 Quick Introduction Bio Course web site Course structure http://coral.ie.lehigh.edu/ ted/teaching/mip

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities Linear Programming Murti V Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities murtis@umnedu September 4, 2012 Linear Programming 1 The standard Linear Programming (SLP) problem:

More information