Optimization methods NOPT048


1 Optimization methods NOPT048
Jirka Fink fink/
Department of Theoretical Computer Science and Mathematical Logic
Faculty of Mathematics and Physics, Charles University in Prague
Summer semester 2016/17
Last change on May 11, 2017
License: Creative Commons BY-NC-SA 4.0
Jirka Fink Optimization methods 1

2 Jirka Fink: Optimization methods
Plan of the lecture
Linear and integer optimization
Convex sets and Minkowski-Weyl theorem
Simplex methods
Duality of linear programming
Ellipsoid method
Unimodularity
Minimal weight maximal matching
Matroid
Cut and bound method

3 Jirka Fink: Optimization methods
General information
Homepage: fink/
Consultations: individual schedule
Examination:
Tutorial conditions: tests, theoretical homework, practical homework
Pass the exam

4 Jirka Fink: Optimization methods
Literature
A. Schrijver: Theory of Linear and Integer Programming, John Wiley, 1986
W. J. Cook, W. H. Cunningham, W. R. Pulleyblank, A. Schrijver: Combinatorial Optimization, John Wiley, 1997
J. Matoušek, B. Gärtner: Understanding and Using Linear Programming, Springer
J. Matoušek: Introduction to Discrete Geometry. ITI Series, MFF UK, 2003

5 Outline
1 Linear programming
2 Linear, affine and convex sets
3 Simplex method
4 Duality of linear programming
5 Integer linear programming
6 Vertex Cover
7 Matching

6 Example of linear programming: Optimized diet
Express the following problem using linear programming: find the cheapest vegetable salad made of carrots, white cabbage and cucumbers that contains a required amount of vitamins A and C and of dietary fiber.
(Table: vitamin A [mg/kg], vitamin C [mg/kg], dietary fiber [g/kg] and price [EUR/kg] for each food, together with the required amount per meal; most numeric entries were lost in transcription.)
Formulation using linear programming (x_1 = carrot, x_2 = white cabbage, x_3 = cucumber, amounts in kg):
Minimize 0.75x_1 + …x_2 + …x_3 (cost)
subject to 35x_1 + …x_2 + …x_3 ≥ … (vitamin A)
60x_1 + …x_2 + …x_3 ≥ 15 (vitamin C)
30x_1 + …x_2 + …x_3 ≥ 4 (dietary fiber)
x_1, x_2, x_3 ≥ 0
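The same model can be written down programmatically. The sketch below builds the diet LP data in matrix form and checks a candidate salad for feasibility and cost; all nutrient and price numbers are made-up placeholders (the original table did not survive transcription), so only the structure matches the slide.

```python
# Diet LP data in matrix form: minimize p^T x subject to N x >= r, x >= 0.
# All numbers below are hypothetical placeholders, not the lecture's values.
price = [0.75, 0.50, 0.15]          # EUR/kg for carrot, cabbage, cucumber
N = [                                # rows: vitamin A, vitamin C, fiber
    [35.0, 0.5, 0.5],                # mg/kg
    [60.0, 300.0, 10.0],             # mg/kg
    [30.0, 20.0, 10.0],              # g/kg
]
required = [0.5, 15.0, 4.0]          # required amounts per meal

def is_feasible(x, N, required):
    """Check N x >= required and x >= 0 componentwise."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(row[j] * x[j] for j in range(len(x))) >= r
               for row, r in zip(N, required))

def cost(x, price):
    """Objective value p^T x."""
    return sum(p * xi for p, xi in zip(price, x))

x = [0.1, 0.05, 0.1]                 # a candidate salad (kg of each food)
print(is_feasible(x, N, required), cost(x, price))
```

An LP solver would search for the cheapest feasible x; here we only verify one candidate against the constraints, which is the core of the model.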

7 Matrix notation of the linear programming problem
Formulation using linear programming:
Minimize 0.75x_1 + …x_2 + …x_3
subject to 35x_1 + …x_2 + …x_3 ≥ …
60x_1 + …x_2 + …x_3 ≥ 15
30x_1 + …x_2 + …x_3 ≥ 4
x_1, x_2, x_3 ≥ 0
Matrix notation:
Minimize c^T x subject to Ax ≥ b and x ≥ 0, where c is the vector of prices, the rows of A contain the nutrient contents and b the required amounts (the numeric entries were lost in transcription).

8 Notation: Vector and matrix
Matrix: A matrix of type m × n is a rectangular array of m rows and n columns of real numbers. Matrices are written as A, B, C, etc.
Vector: A vector is an n-tuple of real numbers. Vectors are written as c, x, y, etc. Usually, vectors are column matrices of type n × 1.
Scalar: A scalar is a real number. Scalars are written as a, b, c, etc.
Special vectors: 0 and 1 are vectors of zeros and ones, respectively.
Transpose: The transpose of a matrix A is the matrix A^T created by reflecting A over its main diagonal. The transpose of a column vector x is the row vector x^T.

9 Notation: Matrix product
Elements of a vector and a matrix: The i-th element of a vector x is denoted by x_i. The (i, j)-th element of a matrix A is denoted by A_{i,j}. The i-th row of a matrix A is denoted by A_{i,*}. The j-th column of a matrix A is denoted by A_{*,j}.
Dot product of vectors: The dot product (also called inner product or scalar product) of vectors x, y ∈ R^n is the scalar x^T y = Σ_{i=1}^n x_i y_i.
Product of a matrix and a vector: The product Ax of a matrix A ∈ R^{m×n} and a vector x ∈ R^n is the vector y ∈ R^m such that y_i = A_{i,*} x for all i = 1, ..., m.
Product of two matrices: The product AB of a matrix A ∈ R^{m×n} and a matrix B ∈ R^{n×k} is the matrix C ∈ R^{m×k} such that C_{*,j} = A B_{*,j} for all j = 1, ..., k.
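The row-wise and column-wise definitions above can be checked directly in numpy; a small self-contained demo (the matrices are my own, not from the lecture):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # A is of type 2 x 3
x = np.array([1.0, 0.0, 2.0])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 1.0]])             # B is of type 3 x 2

# Product of a matrix and a vector: y_i = A_{i,*} x
y = np.array([A[i, :] @ x for i in range(A.shape[0])])
assert np.allclose(y, A @ x)

# Product of two matrices: C_{*,j} = A B_{*,j}
C = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
assert np.allclose(C, A @ B)
print(y, C)
```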

10 Notation: System of linear equations and inequalities
Equality and inequality of two vectors: For vectors x, y ∈ R^n we write x = y if x_i = y_i for every i = 1, ..., n, and x ≤ y if x_i ≤ y_i for every i = 1, ..., n.
System of linear equations: Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the formula Ax = b means a system of m linear equations where x is a vector of n real variables.
System of linear inequalities: Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the formula Ax ≤ b means a system of m linear inequalities where x is a vector of n real variables.
Example: a system of linear inequalities such as 2x_1 + x_2 + x_3 ≤ …, 2x_1 + 5x_2 + 5x_3 ≤ … can equivalently be written in the matrix notation Ax ≤ b (the numeric entries of the example were lost in transcription).

11 Optimization
Mathematical optimization is the selection of a best element (with regard to some criterion) from some set of available alternatives.
Examples:
Minimize x^2 + y^2 where (x, y) ∈ R^2
Maximal matching in a graph
Minimal spanning tree
Shortest path between two given vertices
Optimization problem: Given a set of solutions M and an objective function f : M → R, the optimization problem is to find a solution x ∈ M with the maximal (or minimal) objective value f(x) among all solutions of M.
Duality between minimization and maximization: If min_{x∈M} f(x) exists, then max_{x∈M} −f(x) also exists and min_{x∈M} f(x) = −max_{x∈M} −f(x).

12 Linear Programming
Linear programming problem: A linear program is the problem of maximizing (or minimizing) a given linear function over the set of all vectors that satisfy a given system of linear equations and inequalities.
Equation form: min c^T x subject to Ax = b, x ≥ 0
Canonical form: max c^T x subject to Ax ≤ b
where c ∈ R^n, b ∈ R^m, A ∈ R^{m×n} and x ∈ R^n.
Conversion from the equation form to the canonical form:
max −c^T x subject to Ax ≤ b, −Ax ≤ −b, −x ≤ 0
Conversion from the canonical form to the equation form (substituting x = x' − x'' and adding slack variables x'''):
min −c^T x' + c^T x'' subject to Ax' − Ax'' + I x''' = b, x', x'', x''' ≥ 0
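The canonical-to-equation conversion can be sketched as a small numpy routine. The function name and the example data are my own; the routine assumes the canonical program max c^T x, Ax ≤ b with free x, and returns the data of the equivalent equation form:

```python
import numpy as np

def canonical_to_equation_form(A, b, c):
    """Convert max c^T x s.t. Ax <= b (x free) into
    min ceq^T z s.t. Aeq z = b, z >= 0, with z = (x', x'', s),
    where x = x' - x'' and s are slack variables."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    Aeq = np.hstack([A, -A, np.eye(m)])
    ceq = np.concatenate([-c, c, np.zeros(m)])   # max c^T x == min -c^T x
    return Aeq, np.asarray(b, dtype=float), ceq

# Sanity check: a feasible x of the canonical form maps to a feasible z
# with the negated objective value.
A = [[1.0, 1.0], [-1.0, 2.0]]
b = [4.0, 2.0]
c = [3.0, 1.0]
Aeq, beq, ceq = canonical_to_equation_form(A, b, c)
x = np.array([1.0, -1.0])                        # satisfies Ax <= b
z = np.concatenate([np.maximum(x, 0), np.maximum(-x, 0),
                    beq - np.asarray(A) @ x])    # x', x'', slacks
assert np.all(z >= 0) and np.allclose(Aeq @ z, beq)
assert np.isclose(ceq @ z, -(np.asarray(c) @ x))
print("ok")
```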

13 Terminology
Basic terminology:
Number of variables: n
Number of constraints: m
Solution: an arbitrary vector x of R^n
Objective function: e.g. max c^T x
Feasible solution: a solution satisfying all constraints, e.g. Ax ≤ b
Optimal solution: a feasible solution maximizing c^T x
Infeasible problem: a problem having no feasible solution
Unbounded problem: a problem having feasible solutions with arbitrarily large values of the given objective function
Polyhedron: a set of points x ∈ R^n satisfying Ax ≤ b for some A ∈ R^{m×n} and b ∈ R^m
Polytope: a bounded polyhedron

14 Example of linear programming: Network flow
Network flow problem: Given a directed graph (V, E) with capacities c ∈ R^E, a source s ∈ V and a sink t ∈ V, find a maximum flow from s to t satisfying the flow conservation and capacity constraints.
Formulation using linear programming:
Variables: flow f_e for every edge e ∈ E
Capacity constraints: 0 ≤ f ≤ c
Flow conservation: Σ_{uv∈E} f_uv = Σ_{vw∈E} f_vw for every v ∈ V \ {s, t}
Objective function: maximize Σ_{sw∈E} f_sw − Σ_{us∈E} f_us
Matrix notation:
Add an auxiliary edge ts with a sufficiently large capacity c_ts
Objective function: max x_ts
Flow conservation: Ax = 0 where A is the incidence matrix
Capacity constraints: x ≤ c and x ≥ 0
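A quick numeric illustration of the matrix form on my own toy network (sign convention: the incidence matrix has +1 where an edge leaves a vertex and −1 where it enters): once the auxiliary edge ts is added, a valid flow satisfies Ax = 0 at every vertex, including s and t.

```python
import numpy as np

# Toy network: vertices s, a, t; edges s->a, a->t, s->t, plus auxiliary t->s.
vertices = ["s", "a", "t"]
edges = [("s", "a"), ("a", "t"), ("s", "t"), ("t", "s")]  # last is auxiliary
cap = np.array([1.0, 1.0, 1.0, 100.0])   # auxiliary capacity chosen large

# Incidence matrix: A[v][e] = +1 if e leaves v, -1 if e enters v.
A = np.zeros((len(vertices), len(edges)))
for j, (u, v) in enumerate(edges):
    A[vertices.index(u), j] = 1.0
    A[vertices.index(v), j] = -1.0

# A maximum flow of value 2: one unit via a, one unit directly,
# with 2 units returned along the auxiliary edge t->s.
x = np.array([1.0, 1.0, 1.0, 2.0])
assert np.allclose(A @ x, 0)             # conservation at every vertex
assert np.all(x >= 0) and np.all(x <= cap)
print("flow value:", x[-1])
```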

15 Graphical method: Set of feasible solutions
Example: Draw the set of all feasible solutions (x_1, x_2) satisfying the following conditions.
x_1 + 6x_2 ≤ 15
4x_1 − x_2 ≤ 10
−x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0
Solution: (figure: the polygon bounded by the lines x_1 = 0, x_2 = 0, −x_1 + x_2 = 1, x_1 + 6x_2 = 15 and 4x_1 − x_2 = 10, containing the origin (0, 0))

16 Graphical method: Optimal solution
Example: Find the optimal solution of the following problem.
Maximize x_1 + x_2
subject to x_1 + 6x_2 ≤ 15
4x_1 − x_2 ≤ 10
−x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0
Solution: (figure: level lines c^T x = 0, 1, 2, 5 of the objective; the optimum c^T x = 5 is attained at the vertex (3, 2))
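For two variables the graphical method can be mimicked by brute force: enumerate the intersection points of pairs of constraint lines, keep the feasible ones (these include all vertices of the polygon), and take the best. A small pure-Python sketch using the constraints as reconstructed above:

```python
from itertools import combinations

# Constraints in the form a1*x1 + a2*x2 <= b (including x1 >= 0, x2 >= 0).
constraints = [
    (1.0, 6.0, 15.0),    # x1 + 6*x2 <= 15
    (4.0, -1.0, 10.0),   # 4*x1 - x2 <= 10
    (-1.0, 1.0, 1.0),    # -x1 + x2 <= 1
    (-1.0, 0.0, 0.0),    # x1 >= 0
    (0.0, -1.0, 0.0),    # x2 >= 0
]

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)  # Cramer's rule

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: p[0] + p[1])   # objective x1 + x2
print(best, best[0] + best[1])
```

Since an optimum of a bounded LP is attained at a vertex (shown later in the lecture), scanning the vertices suffices; here the best vertex is (3, 2) with value 5, matching the slide.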

17 Graphical method: Multiple optimal solutions
Example: Find all optimal solutions of the following problem.
Maximize x_2
subject to x_1 + 6x_2 ≤ 15
4x_1 − x_2 ≤ 10
−x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0
Solution: (figure: a level line of the objective through the set of optimal solutions, which forms a face of the feasible region; the numeric labels of the figure were garbled in transcription)

18 Graphical method: Unbounded problem
Example: Show that the following problem is unbounded.
Maximize x_1 + x_2
subject to −x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0
Solution: (figure: the feasible region is unbounded in the direction (1, 1), so the solutions (t, t) for t → ∞ have arbitrarily large objective value)

19 Graphical method: Infeasible problem
Example: Show that the following problem has no feasible solution.
Maximize x_1 + x_2
subject to x_1 + x_2 ≤ −2
x_1, x_2 ≥ 0
Solution: (figure: the half-plane x_1 + x_2 ≤ −2 does not intersect the non-negative quadrant x_1, x_2 ≥ 0)

20 Related problems
Integer linear programming: the optimization problem to find x ∈ Z^n which maximizes c^T x and satisfies Ax ≤ b, where A ∈ R^{m×n} and b ∈ R^m.
Mixed integer linear programming: some variables are integer and others are real.
Binary linear programming: every variable is either 0 or 1.
Complexity:
A linear programming problem is efficiently solvable, both in theory and in practice.
The classical algorithm for linear programming is the simplex method, which is fast in practice, but it is not known whether it always runs in polynomial time.
Polynomial-time algorithms: the ellipsoid and the interior point methods.
No strongly polynomial-time algorithm for linear programming is known.
Integer linear programming is NP-hard.

21 Example of integer linear programming: Vertex cover
Vertex cover problem: Given an undirected graph (V, E), find the smallest set of vertices U ⊆ V covering every edge of E; that is, U ∩ e ≠ ∅ for every e ∈ E.
Formulation using integer linear programming:
Variables: cover x_v ∈ {0, 1} for every vertex v ∈ V
Covering: x_u + x_v ≥ 1 for every edge uv ∈ E
Objective function: minimize Σ_{v∈V} x_v
Matrix notation:
Variables: cover x ∈ {0, 1}^V (i.e. 0 ≤ x ≤ 1 and x ∈ Z^V)
Covering: A^T x ≥ 1 where A is the incidence matrix
Objective function: minimize 1^T x
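Since integer programming is NP-hard, small instances can be solved by brute force. The sketch below (my own illustration, not from the lecture) checks every 0/1 assignment against the covering constraints x_u + x_v ≥ 1 and keeps a minimum one:

```python
from itertools import product

# A small graph: a triangle a-b-c with a pendant vertex d attached to c.
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

def is_cover(x):
    """Check the ILP covering constraints x_u + x_v >= 1 for every edge uv."""
    return all(x[u] + x[v] >= 1 for u, v in E)

# Enumerate all 2^|V| binary assignments and pick a feasible one
# minimizing the objective 1^T x.
best = min((x for x in (dict(zip(V, bits))
                        for bits in product((0, 1), repeat=len(V)))
            if is_cover(x)),
           key=lambda x: sum(x.values()))
print(sum(best.values()), sorted(v for v in best if best[v]))
```

The triangle alone already forces two vertices into any cover, so the minimum here has size 2; an LP relaxation of the same instance would assign 1/2 to each triangle vertex, which motivates the integrality discussion on the next slide.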

22 Relation between optimal integer and relaxed solutions
A non-empty polyhedron may contain no integer solution.
An integer feasible solution may not be obtainable by rounding a relaxed (fractional) optimal solution.
(figure: a polyhedron whose relaxed optimum in the direction c lies far from the integral optimum)

23 Example: Ice cream production planning
Problem description:
An ice cream manufacturer needs to plan the production of ice cream for the next year.
The estimated demand of ice cream for month i ∈ {1, ..., n} is d_i (in tons).
Storage facilities for 1 ton of ice cream cost a per month.
Changing the production by 1 ton from month i−1 to month i costs b.
Produced ice cream cannot be stored longer than one month.
The total cost has to be minimized.

24 Example: Ice cream production planning
Solution:
Variable x_i determines the amount of ice cream produced in month i ∈ {0, ..., n}.
Variable s_i determines the amount of ice cream stored from month i−1 to month i.
The stored quantity is computed by s_i = s_{i−1} + x_i − d_i for every i ∈ {1, ..., n}.
Durability is ensured by s_i ≤ d_i for all i ∈ {1, ..., n}.
Non-negativity of the production and the storage: x, s ≥ 0.
The objective function min b Σ_{i=1}^n |x_i − x_{i−1}| + a Σ_{i=1}^n s_i is non-linear.
Let y_i ≥ 0 and z_i ≥ 0 be the increment and the decrement of production, resp., so that x_i − x_{i−1} = y_i − z_i.
Linear programming problem formulation:
Minimize b Σ_{i=1}^n (y_i + z_i) + a Σ_{i=1}^n s_i
subject to s_{i−1} − s_i + x_i = d_i for i ∈ {1, ..., n}
x_i − x_{i−1} − y_i + z_i = 0 for i ∈ {1, ..., n}
s_i ≤ d_i for i ∈ {1, ..., n}
x, s, y, z ≥ 0
We can bound the initial and final amounts of ice cream s_0 and s_n and also bound the production x_0.
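The absolute-value linearization can be checked numerically: for any plan, splitting each change x_i − x_{i−1} into y_i − z_i with y_i, z_i ≥ 0 and at most one of them positive gives y_i + z_i = |x_i − x_{i−1}|, which is exactly what the minimization drives the LP towards. A small self-contained check with my own toy numbers:

```python
# Toy production plan (tons per month); x[0] is the initial production level.
x = [5.0, 7.0, 4.0, 4.0, 9.0]
b = 2.0   # cost per ton of production change

# Optimal split of each change into increment y_i and decrement z_i:
# at most one of them is positive, so y_i + z_i = |x_i - x_{i-1}|.
y = [max(x[i] - x[i - 1], 0.0) for i in range(1, len(x))]
z = [max(x[i - 1] - x[i], 0.0) for i in range(1, len(x))]

linear_cost = b * sum(yi + zi for yi, zi in zip(y, z))
abs_cost = b * sum(abs(x[i] - x[i - 1]) for i in range(1, len(x)))
assert linear_cost == abs_cost
print(linear_cost)
```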

25 Finding shortest paths from a vertex s in an oriented graph
Linear programming problem:
Maximize Σ_{u∈V} x_u
subject to x_v − x_u ≤ c_uv for every edge uv
x_s = 0
Proof (the optimal solution x gives the distance x_u from s to u for every u ∈ V):
1 Let y_u be the length of the shortest path from s to u.
2 It holds that y ≥ x: Let P be the set of edges on a shortest path from s to z. Then y_z = Σ_{uv∈P} c_uv ≥ Σ_{uv∈P} (x_v − x_u) = x_z − x_s = x_z.
3 It holds that y = x: For the sake of contradiction assume that y ≠ x. Then y ≥ x implies Σ_{u∈V} y_u > Σ_{u∈V} x_u. But y is a feasible solution (since y_v ≤ y_u + c_uv for every edge uv and y_s = 0) and x is an optimal solution, a contradiction.
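The claim can be tested on a toy graph: Bellman–Ford-style distances satisfy every constraint x_v − x_u ≤ c_uv together with x_s = 0, and by the argument above they are the optimum of the LP. A small sketch (the graph and its edge lengths are my own):

```python
# Toy directed graph with non-negative edge lengths c_uv.
edges = {("s", "a"): 2.0, ("a", "b"): 3.0, ("s", "b"): 7.0}
V = ["s", "a", "b"]

# Bellman-Ford relaxation to compute shortest distances from s.
INF = float("inf")
dist = {v: INF for v in V}
dist["s"] = 0.0
for _ in range(len(V) - 1):
    for (u, v), c in edges.items():
        if dist[u] + c < dist[v]:
            dist[v] = dist[u] + c

# The distances form a feasible (in fact optimal) solution of the LP:
assert dist["s"] == 0.0
assert all(dist[v] - dist[u] <= c for (u, v), c in edges.items())
print(dist)
```

Here the direct edge s→b of length 7 is beaten by the path via a, so the LP optimum assigns x_b = 5, not 7.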

26 Outline
1 Linear programming
2 Linear, affine and convex sets
3 Simplex method
4 Duality of linear programming
5 Integer linear programming
6 Vertex Cover
7 Matching

27 Linear space
Definition: Linear (vector) space
A set (V, +, ·) is called a linear (vector) space over a field T if
+ : V × V → V, i.e. V is closed under addition
· : T × V → V, i.e. V is closed under multiplication by scalars from T
(V, +) is an Abelian group
For every x ∈ V it holds that 1 · x = x where 1 ∈ T
For every a, b ∈ T and every x ∈ V it holds that (ab) · x = a · (b · x)
For every a, b ∈ T and every x ∈ V it holds that (a + b) · x = a · x + b · x
For every a ∈ T and every x, y ∈ V it holds that a · (x + y) = a · x + a · y
Observation: If V is a linear space and L ⊆ V, then L is a linear space if and only if 0 ∈ L, x + y ∈ L for every x, y ∈ L, and αx ∈ L for every x ∈ L and α ∈ T.

28 Linear and affine spaces in R^n
Observation: A non-empty set V ⊆ R^n is a linear space if and only if αx + βy ∈ V for all α, β ∈ R and x, y ∈ V.
Definition: If V ⊆ R^n is a linear space and a ∈ R^n is a vector, then V + a is called an affine space, where V + a = {x + a; x ∈ V}.
Basic observations:
If L ⊆ R^n is an affine space, then L + x is an affine space for every x ∈ R^n.
If L ⊆ R^n is an affine space, then L − x is a linear space for every x ∈ L. 1
If L ⊆ R^n is an affine space, then L − x = L − y for every x, y ∈ L. 2
An affine space L ⊆ R^n is linear if and only if L contains the origin 0. 3
System of linear equations:
The set of all solutions of Ax = 0 is a linear space, and every linear space is the set of all solutions of Ax = 0 for some A. 4
The set of all solutions of Ax = b is an affine space, and every affine space is the set of all solutions of Ax = b for some A and b, assuming Ax = b is consistent. 5

29
1 By definition, L = V + a for some linear space V and some vector a ∈ R^n. Observe that L − x = V + (a − x) and we prove that V + (a − x) = V, which implies that L − x is a linear space. There exists y ∈ V such that x = y + a. Hence, a − x = −y ∈ V. Since V is closed under addition, it follows that V + (a − x) ⊆ V. Similarly, V − (a − x) ⊆ V, which implies that V ⊆ V + (a − x). Hence, V = V + (a − x) and the statement follows.
2 We proved that L = V + a for some linear space V ⊆ R^n and some vector a ∈ R^n, and that L − x = V + (a − x) = V for every x ∈ L. So, L − x = V = L − y.
3 Every linear space must contain the origin by definition. For the opposite implication, we set x = 0 and apply the previous statement.
4 If V is a linear space, then we can obtain the rows of A from a basis of the orthogonal complement of V.
5 If L is an affine space, then L = V + a for some linear space V and some vector a, and there exists a matrix A such that V = {x; Ax = 0}. Hence, V + a = {x + a; Ax = 0} = {y; Ay − Aa = 0} = {y; Ay = b}, where we substitute y = x + a and set b = Aa. If L = {x; Ax = b} is non-empty, then let y be an arbitrary element of L. Furthermore, L − y = {x − y; Ax = b} = {z; Ay + Az = b} = {z; Az = 0} is a linear space since Ay = b.

30 Convex set
Observation: A set S ⊆ R^n is an affine space if and only if S contains the whole line through every two points of S.
Definition: A set S ⊆ R^n is convex if S contains the whole segment between every two points of S.
Example: (figure: a convex and a non-convex set, with segment endpoints labelled a, b, u, v)

31 Linear, affine and convex hulls
Observation:
The intersection of linear spaces is also a linear space. 1
A non-empty intersection of affine spaces is an affine space. 2
The intersection of convex sets is also a convex set. 3
Definition: Let S ⊆ R^n be a non-empty set.
The linear hull span(S) of S is the intersection of all linear spaces containing S.
The affine hull aff(S) of S is the intersection of all affine spaces containing S.
The convex hull conv(S) of S is the intersection of all convex sets containing S.
Observation: Let S ⊆ R^n be a non-empty set.
A set S is linear if and only if S = span(S). 4
A set S is affine if and only if S = aff(S). 5
A set S is convex if and only if S = conv(S). 6
span(S) = aff(S ∪ {0})

32
1 Use the definition and logic.
2 Let L_i be an affine space for every i in an index set I, let L = ∩_{i∈I} L_i and a ∈ L. We proved that L − a = ∩_{i∈I} (L_i − a) is a linear space, which implies that L is an affine space.
3 Use the definition and logic.
4 Similar to the convex version.
5 Similar to the convex version.
6 We proved that conv(S) is convex, so if S = conv(S), then S is convex. In order to prove that S = conv(S) if S is convex, we observe that conv(S) ⊆ S since conv(S) = ∩_{M⊇S, M convex} M and S is included in this intersection. Similarly, conv(S) ⊇ S since every M in the intersection contains S.

33 Linear, affine and convex combinations
Definition: Let v_1, ..., v_k be vectors of R^n where k is a positive integer.
The sum Σ_{i=1}^k α_i v_i is called a linear combination if α_1, ..., α_k ∈ R.
The sum Σ_{i=1}^k α_i v_i is called an affine combination if α_1, ..., α_k ∈ R and Σ_{i=1}^k α_i = 1.
The sum Σ_{i=1}^k α_i v_i is called a convex combination if α_1, ..., α_k ≥ 0 and Σ_{i=1}^k α_i = 1.
Lemma: Let S ⊆ R^n be a non-empty set.
The set of all linear combinations of S is a linear space. 1
The set of all affine combinations of S is an affine space. 2
The set of all convex combinations of S is a convex set. 3
Lemma:
A linear space S contains all linear combinations of S. 4
An affine space S contains all affine combinations of S. 5
A convex set S contains all convex combinations of S. 6

34
1 We have to verify that the set of all linear combinations of S is closed under addition and under multiplication by scalars. In order to verify the closure under multiplication, let Σ_{i=1}^k α_i v_i be a linear combination of S and c ∈ R be a scalar. Then, c Σ_{i=1}^k α_i v_i = Σ_{i=1}^k (cα_i) v_i is a linear combination of S. Similarly, the set of all linear combinations is closed under addition, and it contains the origin.
2 Similar to the convex version: show that the set of all affine combinations contains the whole line through every pair of its points.
3 Let Σ_{i=1}^k α_i u_i and Σ_{j=1}^l β_j v_j be two convex combinations of S. In order to prove that the set of all convex combinations of S contains the segment between them, consider γ_1, γ_2 ≥ 0 such that γ_1 + γ_2 = 1. Then, γ_1 Σ_{i=1}^k α_i u_i + γ_2 Σ_{j=1}^l β_j v_j = Σ_{i=1}^k (γ_1 α_i) u_i + Σ_{j=1}^l (γ_2 β_j) v_j is a convex combination of S, since (γ_1 α_i), (γ_2 β_j) ≥ 0 and Σ_{i=1}^k (γ_1 α_i) + Σ_{j=1}^l (γ_2 β_j) = 1.
4 Similar to the convex version.
5 Let Σ_{i=1}^k α_i v_i be an affine combination of S. Since S − v_k is a linear space, the linear combination Σ_{i=1}^k α_i (v_i − v_k) of S − v_k belongs to S − v_k. Hence, v_k + Σ_{i=1}^k α_i (v_i − v_k) = Σ_{i=1}^k α_i v_i belongs to S, using Σ_{i=1}^k α_i = 1.
6 We prove by induction on k that S contains every convex combination Σ_{i=1}^k α_i v_i of S. The statement holds for k ≤ 2 by the definition of a convex set. Let Σ_{i=1}^k α_i v_i be a convex combination of k vectors of S, and assume that α_k < 1; otherwise α_1 = ... = α_{k−1} = 0, so Σ_{i=1}^k α_i v_i = v_k ∈ S. Hence, Σ_{i=1}^k α_i v_i = (1 − α_k) Σ_{i=1}^{k−1} (α_i / (1 − α_k)) v_i + α_k v_k = (1 − α_k) y + α_k v_k, where we observe

35 that y := Σ_{i=1}^{k−1} (α_i / (1 − α_k)) v_i is a convex combination of k − 1 vectors of S, which by induction belongs to S. Furthermore, (1 − α_k) y + α_k v_k is a convex combination of two points of S, which belongs to S by the definition of a convex set.

36 Linear, affine and convex combinations
Theorem: Let S ⊆ R^n be a non-empty set.
The linear hull of S is the set of all linear combinations of S. 1
The affine hull of S is the set of all affine combinations of S. 2
The convex hull of S is the set of all convex combinations of S. 3

37
1 Similar to the convex version.
2 Similar to the convex version.
3 Let T be the set of all convex combinations of S. First, we prove that conv(S) ⊆ T. The definition states that conv(S) = ∩_{M⊇S, M convex} M, and we proved that T is a convex set containing S, so T is included in this intersection, which implies that conv(S) is a subset of T. In order to prove conv(S) ⊇ T, we again consider the intersection conv(S) = ∩_{M⊇S, M convex} M. We proved that a convex set M contains all convex combinations of M, which implies that if M ⊇ S, then M also contains all convex combinations of S. So, every M in this intersection contains T, which implies that conv(S) ⊇ T.

38 Independence and base
Definition:
A set of vectors S ⊆ R^n is linearly independent if no vector of S is a linear combination of other vectors of S.
A set of vectors S ⊆ R^n is affinely independent if no vector of S is an affine combination of other vectors of S.
Observation (homework):
Vectors v_1, ..., v_k ∈ R^n are linearly dependent if and only if there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i v_i = 0.
Vectors v_1, ..., v_k ∈ R^n are affinely dependent if and only if there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i v_i = 0 and Σ_{i=1}^k α_i = 0.
Observation:
Vectors v_0, ..., v_k ∈ R^n are affinely independent if and only if the vectors v_1 − v_0, ..., v_k − v_0 are linearly independent. 1
Vectors v_1, ..., v_k ∈ R^n are linearly independent if and only if the vectors 0, v_1, ..., v_k are affinely independent. 2
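The first observation gives a practical test: translate all points by v_0 and check linear independence of the differences, e.g. via the matrix rank. A short numpy illustration (the example points are my own):

```python
import numpy as np

def affinely_independent(points):
    """Test affine independence of v_0, ..., v_k via the rank of
    the matrix whose rows are v_1 - v_0, ..., v_k - v_0."""
    P = np.asarray(points, dtype=float)
    D = P[1:] - P[0]                     # translate by v_0
    return np.linalg.matrix_rank(D) == len(D)

print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # a triangle
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # collinear points
```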

39
1 If the vectors v_1 − v_0, ..., v_k − v_0 are linearly dependent, then there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i (v_i − v_0) = 0. In this case, 0 = Σ_{i=1}^k α_i (v_i − v_0) = Σ_{i=1}^k α_i v_i − v_0 Σ_{i=1}^k α_i = Σ_{i=0}^k α_i v_i is a non-trivial affine dependency with Σ_{i=0}^k α_i = 0, where α_0 = −Σ_{i=1}^k α_i. If v_0, ..., v_k ∈ R^n are affinely dependent, then there exists a non-trivial combination α_0, ..., α_k ∈ R such that Σ_{i=0}^k α_i v_i = 0 and Σ_{i=0}^k α_i = 0. In this case, 0 = Σ_{i=0}^k α_i v_i = α_0 v_0 + Σ_{i=1}^k α_i v_i = Σ_{i=1}^k α_i (v_i − v_0) is a non-trivial linear combination of the vectors v_1 − v_0, ..., v_k − v_0, using α_0 = −Σ_{i=1}^k α_i.
2 Use the previous observation with v_0 = 0.

40 Basis
Definition: Let B ⊆ R^n and S ⊆ R^n.
B is a base of a linear space S if B is linearly independent and span(B) = S.
B is a base of an affine space S if B is affinely independent and aff(B) = S.
Observation:
All linear bases of a linear space have the same cardinality.
All affine bases of an affine space have the same cardinality. 1
Observation: Let S be a linear space and B ⊆ S \ {0}. Then, B is a linear base of S if and only if B ∪ {0} is an affine base of S.
Definition:
The dimension of a linear space is the cardinality of its linear base.
The dimension of an affine space is the cardinality of its affine base minus one.
The dimension dim(S) of a set S ⊆ R^n is the dimension of the affine hull of S.

41
1 For the sake of contradiction, let a_1, ..., a_k and b_1, ..., b_l be two bases of an affine space L = V + x, where V is a linear space and l > k. Then, a_1 − x, ..., a_k − x and b_1 − x, ..., b_l − x are two linearly independent sets of vectors of V. Hence, there exists i such that a_1 − x, ..., a_k − x, b_i − x are linearly independent, so a_1, ..., a_k, b_i are affinely independent. Therefore, b_i cannot be obtained as an affine combination of a_1, ..., a_k, and b_i ∉ aff(a_1, ..., a_k), which contradicts the assumption that a_1, ..., a_k is a base of L.

42 Carathéodory
Theorem (Carathéodory): Let S ⊆ R^n. Every point of conv(S) is a convex combination of affinely independent points of S. 1
Corollary: Let S ⊆ R^n be a set of dimension d. Then, every point of conv(S) is a convex combination of at most d + 1 points of S.

43 1 Let x conv(s). Let x = k i=1 α ix i be a convex combination of points of S with the smallest k. If x 1,..., x k are affinely dependent, then there exists a combination 0 = β i x i such that β i = 0 and β 0. Since this combination is non-trivial, there exists j such that β j > 0 and α j β j x = i j γ i x i i j γ i = 1 γ i 0 for all i j which contradicts the minimality of k. is minimal. Let γ i = α i α j β i. Observe that β j Jirka Fink Optimization methods 35

44 Outline
1 Linear programming
2 Linear, affine and convex sets
3 Simplex method
4 Duality of linear programming
5 Integer linear programming
6 Vertex Cover
7 Matching

45 Notation
Notation used in the simplex method:
Equation form: maximize c^T x such that Ax = b and x ≥ 0, where A ∈ R^{m×n} and b ∈ R^m. We assume that the rows of A are linearly independent.
For a subset B ⊆ {1, ..., n}, let A_B be the matrix consisting of the columns of A whose indices belong to B. Similarly for vectors, x_B denotes the coordinates of x whose indices belong to B.
The set N = {1, ..., n} \ B denotes the remaining columns.
Example: Consider B = {2, 4}. Then N = {1, 3, 5}; for x^T = (3, 4, 6, 2, 7) we get x_B^T = (4, 2) and x_N^T = (3, 6, 7), and A_B consists of columns 2 and 4 of A, while A_N consists of columns 1, 3 and 5 (the numeric entries of the example matrix were lost in transcription).
Note that Ax = A_B x_B + A_N x_N.
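The identity Ax = A_B x_B + A_N x_N is easy to verify in numpy. The matrix below is my own (the slide's entries did not survive transcription); note that the slide's indices are 1-based while the code is 0-based:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0, 3.0, 1.0],
              [1.0, 4.0, 9.0, 0.0, 5.0]])   # my own 2 x 5 example matrix
x = np.array([3.0, 4.0, 6.0, 2.0, 7.0])     # the vector from the slide

B = [1, 3]        # the slide's B = {2, 4}, shifted to 0-based indices
N = [0, 2, 4]     # the slide's N = {1, 3, 5}

# Column selection gives A_B and A_N; the products recombine to Ax.
assert np.allclose(A @ x, A[:, B] @ x[B] + A[:, N] @ x[N])
print(x[B], x[N])
```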

46 Geometrical interpretation of basic solutions
1 For a system Ax = b with n variables and n linearly independent conditions, there exists the inverse matrix A^{−1} and the only solution of Ax = b is x = A^{−1} b.
2 Consider a system Ax ≤ b with n = rank(A) variables and m ≥ n conditions, and select n linearly independent rows, giving a subsystem A'x ≤ b'. Then, the system A'x = b' has the unique solution x* = A'^{−1} b'. Moreover, if Ax* ≤ b, then x* is a vertex of the polyhedron Ax ≤ b. 1
3 Consider the equation form Ax = b and x ≥ 0, and let N index n − m of the conditions x ≥ 0. If the conditions of the system Ax = b and x_N = 0 are linearly independent, then b = Ax = A_B x_B + A_N x_N = A_B x_B, so x* = (x*_B, x*_N) = (A_B^{−1} b, 0), where B = {1, ..., n} \ N. Moreover, if x*_B ≥ 0, then x* is a vertex of the polyhedron given by Ax = b and x ≥ 0.
4 Consider the equation form again. If we choose m linearly independent columns B of A, then the conditions Ax = b and x_N = 0 are linearly independent.

47
1 This solution will be called a basic solution. Vertices of a polyhedron will be formally defined later, so for now we rely on geometric intuition.

48 Basic feasible solutions
Definitions: Consider the equation form Ax = b and x ≥ 0 with n variables and rank(A) = m rows.
A set B ⊆ {1, ..., n} of m linearly independent columns of A is called a basis. 1
The basic solution x corresponding to a basis B is given by x_N = 0 and x_B = A_B^{−1} b.
A basic solution satisfying x ≥ 0 is called a basic feasible solution.
x_B are called the basic variables and x_N the non-basic variables. 2
Observation: A feasible solution x of Ax = b and x ≥ 0 is basic if and only if the columns of A_K are linearly independent, where K = {j ∈ {1, ..., n}; x_j > 0}. 3 4
Observation: A linear program in the equation form has at most (n choose m) basic solutions. 5
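The observation suggests an inefficient but instructive way to enumerate basic feasible solutions: try every m-element column subset B, skip the singular ones, and keep the solutions with x_B ≥ 0. Below this is done for the small system that also appears in the simplex example later (−x_1 + x_2 + x_3 = 1, x_1 + x_4 = 3, x_2 + x_5 = 2):

```python
from itertools import combinations
import numpy as np

A = np.array([[-1.0, 1.0, 1.0, 0.0, 0.0],
              [ 1.0, 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([1.0, 3.0, 2.0])
m, n = A.shape

bfs = []
for B in combinations(range(n), m):
    AB = A[:, B]
    if abs(np.linalg.det(AB)) < 1e-9:
        continue                      # columns dependent: not a basis
    x = np.zeros(n)
    x[list(B)] = np.linalg.solve(AB, b)   # x_B = A_B^{-1} b, x_N = 0
    if np.all(x >= -1e-9):            # basic AND feasible
        bfs.append(tuple(np.round(x, 9)))

print(len(bfs), sorted(set(bfs)))
```

Of the C(5, 3) = 10 subsets, two are singular, and five of the remaining basic solutions are feasible; these correspond to the five vertices of the polygon in the simplex example.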

49
1 Observe that B ⊆ {1, ..., n} is a basis if and only if A_B is a regular matrix.
2 Remember that non-basic variables are always equal to zero.
3 If x is a basic feasible solution and B is the corresponding basis, then x_N = 0 and so K ⊆ B, which implies that the columns of A_K are also linearly independent. If the columns of A_K are linearly independent, then we can extend K into B by adding columns of A so that the columns of A_B are linearly independent, which implies that B is a basis for x.
4 Note that basic variables can also be zero. In this case, the basis B corresponding to a basic solution x may not be unique, since there may be many ways to extend K into a basis B. This is called degeneracy.
5 There are (n choose m) subsets B ⊆ {1, ..., n}, and for some of these subsets A_B may not be regular.

50 Optimal basic feasible solutions
Theorem: If the linear program max c^T x subject to Ax = b and x ≥ 0 has a feasible solution and the objective function is bounded from above on the set of all feasible solutions, then there exists an optimal solution. Moreover, if an optimal solution exists, then there is a basic feasible solution which is optimal. 1
Lemma: If the objective function of a linear program in the equation form is bounded from above, then for every feasible solution x there exists a basic feasible solution x' with the same or larger value of the objective function, i.e. c^T x' ≥ c^T x. 2

51
1 If the problem is bounded, one may try to find the optimal solution by inspecting all basic feasible solutions. However, this is not an efficient algorithm, since the number of bases grows exponentially.
2 Let x' be a feasible solution with c^T x' ≥ c^T x and the smallest possible size of the set K = {j ∈ {1, ..., n}; x'_j > 0}, and let N = {1, ..., n} \ K. If the columns of A_K are linearly independent, then x' is a basic solution and we are done. Otherwise, there exists a non-zero vector v_K such that A_K v_K = 0; let v_N = 0. WLOG c^T v ≥ 0, since we can replace v by −v. Consider the line x(t) = x' + tv for t ∈ R. For every t ∈ R: Ax(t) = b and (x(t))_N = 0. For every t ≥ 0: c^T x(t) ≥ c^T x'.
If c^T v > 0 and v ≥ 0, then the points x(t) are feasible for every t ≥ 0 and the objective function c^T x(t) = c^T x' + t c^T v tends to infinity, which contradicts the assumptions.
If v_j < 0 for some j ∈ K, then consider j ∈ K with v_j < 0 and x'_j / (−v_j) minimal, and let t* = x'_j / (−v_j). Since x(t*) ≥ 0 and (x(t*))_j = 0, the solution x(t*) is feasible with a smaller number of positive components than x', which is a contradiction.
The remaining case is c^T v = 0 and v ≥ 0. Since v_K is non-zero, there exists j ∈ K with v_j > 0. Replace v by −v and apply the previous case.

52 Convex polyhedra
Definition:
A hyperplane is a set {x ∈ R^n; a^T x = b} where a ∈ R^n \ {0} and b ∈ R.
A half-space is a set {x ∈ R^n; a^T x ≤ b} where a ∈ R^n \ {0} and b ∈ R.
A polyhedron is an intersection of finitely many half-spaces.
A polytope is a bounded polyhedron.
Observation: For every a ∈ R^n and b ∈ R, the set of all x ∈ R^n satisfying a^T x ≤ b is convex.
Corollary: Every polyhedron Ax ≤ b is convex.
Examples:
n-dimensional hypercube: {x ∈ R^n; 0 ≤ x ≤ 1}
n-dimensional crosspolytope: {x ∈ R^n; Σ_{i=1}^n |x_i| ≤ 1} 1
n-dimensional simplex: {x ∈ R^{n+1}; x ≥ 0, 1^T x = 1} 2

53
1 Formally, Σ_{i=1}^n |x_i| ≤ 1 is not a linear inequality. However, it can be replaced by the 2^n linear inequalities d^T x ≤ 1 for all d ∈ {−1, 1}^n.
2 The n-dimensional simplex is the convex hull of n + 1 affinely independent points.

54 Faces of a polyhedron
Definition: Let P be a polyhedron. The hyperplane α^T x = β is called a supporting hyperplane of P if the inequality α^T x ≤ β holds for every x ∈ P and the hyperplane α^T x = β has a non-empty intersection with P.
Definition: If α^T x = β is a supporting hyperplane of a polyhedron P, then P ∩ {x; α^T x = β} is called a face of P. By convention, the empty set and P itself are also called faces; the other faces are proper faces. 1
Definition: Let P be a d-dimensional polyhedron.
A 0-dimensional face of P is called a vertex of P.
A 1-dimensional face of P is called an edge of P.
A (d − 1)-dimensional face of P is called a facet of P.

55
1 Observe that every face of a polyhedron is also a polyhedron.

56 Vertices
Observations:
The set of all optimal solutions of a linear program max c^T x over a polyhedron P is a face of P. 1
Every proper face of P is the set of all optimal solutions of a linear program max c^T x over P for some c ∈ R^n. 2 3
Vertices are the unique optimal solutions of linear programs max c^T x over P for some c.
Theorem: Let P be the set of all solutions of a linear program in the equation form and v ∈ P. Then the following statements are equivalent.
1 v is a vertex of the polyhedron P.
2 v is a basic feasible solution of the linear program. 4
Theorem: If the linear program max c^T x subject to Ax = b and x ≥ 0 has a feasible solution and the objective function is bounded from above on the set of all feasible solutions, then there exists an optimal solution. Moreover, if an optimal solution exists, then there is a basic feasible solution which is optimal.

57
1 Let F be the set of all optimal solutions. If F = ∅ or F = P, then F is a face of P by definition. Otherwise, d = max {c^T x; x ∈ P} exists. Since c^T x = d is a supporting hyperplane of P and F = P ∩ {x; c^T x = d}, it follows that F is a face of P.
2 A proper face F of P is defined as the intersection of P and a supporting hyperplane c^T x = d, so F is the set of all optimal solutions of the linear program max c^T x over P.
3 Note that P is also the set of all optimal solutions of a linear program, namely for c = 0. On the other hand, if P is non-empty and bounded, then the empty set cannot be expressed as the set of all optimal solutions for any c.
4 Follows from the following theorem. Let B be the basis defining v and let c_B = 0 and c_N = −1. Then c^T v = c_B^T v_B + c_N^T v_N = 0, and for every feasible x it holds that x ≥ 0, so c^T x ≤ 0. Hence, v is an optimal solution of the linear program with the objective function max c^T x. Furthermore, v is the only optimal solution, since every optimal solution x must satisfy x_N = 0, and in this case x_B = A_B^{−1} b is unique.

Example: Initial simplex tableau

Canonical form
Maximize x_1 + x_2
subject to -x_1 + x_2 ≤ 1
x_1 ≤ 3
x_2 ≤ 2
x_1, x_2 ≥ 0

Equation form
Maximize x_1 + x_2
subject to -x_1 + x_2 + x_3 = 1
x_1 + x_4 = 3
x_2 + x_5 = 2
x_1, x_2, x_3, x_4, x_5 ≥ 0

Simplex tableau
x_3 = 1 + x_1 - x_2
x_4 = 3 - x_1
x_5 = 2 - x_2
z = x_1 + x_2
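The translation from canonical form to the initial dictionary is purely mechanical. The following helper is not part of the lecture; the function name `initial_tableau` and the representation of the dictionary as plain Python lists (p, Q, z_0, r) are my own choices, a minimal sketch assuming b ≥ 0 so that the slack basis is feasible.

```python
def initial_tableau(c, A, b):
    """Initial dictionary after adding slack variables to max{c^T x : Ax <= b, x >= 0}
    with b >= 0: the slacks x_{n+1}, ..., x_{n+m} form the basis and x_slack = b - Ax."""
    n, m = len(c), len(A)
    p = list(b)                                 # slack values at x = 0
    Q = [[-a for a in row] for row in A]        # x_slack = b - Ax
    basis = [n + i + 1 for i in range(m)]       # 1-based indices, as on the slide
    return p, Q, 0, list(c), basis

# the example: maximize x1 + x2 s.t. -x1 + x2 <= 1, x1 <= 3, x2 <= 2
p, Q, z0, r, basis = initial_tableau([1, 1], [[-1, 1], [1, 0], [0, 1]], [1, 3, 2])
```

For the example this yields exactly the tableau above: p = (1, 3, 2), r = (1, 1), z_0 = 0 and basis {3, 4, 5}.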

Example: Initial simplex tableau

Simplex tableau
x_3 = 1 + x_1 - x_2
x_4 = 3 - x_1
x_5 = 2 - x_2
z = x_1 + x_2

Initial basic feasible solution
B = {3, 4, 5}, N = {1, 2}
x = (0, 0, 1, 3, 2)

Pivot
Two edges from the vertex (0, 0, 1, 3, 2):
1. (t, 0, 1 + t, 3 - t, 2) when x_1 is increased by t
2. (0, r, 1 - r, 3, 2 - r) when x_2 is increased by r
These edges give feasible solutions for:
1. t ≤ 3, since x_3 = 1 + t ≥ 0, x_4 = 3 - t ≥ 0 and x_5 = 2 ≥ 0
2. r ≤ 1, since x_3 = 1 - r ≥ 0, x_4 = 3 ≥ 0 and x_5 = 2 - r ≥ 0
In both cases, the objective function is increasing. We choose x_2 as a pivot.

Example: Pivot step

Simplex tableau
x_3 = 1 + x_1 - x_2
x_4 = 3 - x_1
x_5 = 2 - x_2
z = x_1 + x_2

Basis
Original basis B = {3, 4, 5}.
x_2 enters the basis (by our choice).
(0, r, 1 - r, 3, 2 - r) is feasible only for r ≤ 1 since x_3 = 1 - r ≥ 0. Therefore, x_3 leaves the basis.
New basis B = {2, 4, 5}.

New simplex tableau
x_2 = 1 + x_1 - x_3
x_4 = 3 - x_1
x_5 = 1 - x_1 + x_3
z = 1 + 2x_1 - x_3

Example: Next step

Simplex tableau
x_2 = 1 + x_1 - x_3
x_4 = 3 - x_1
x_5 = 1 - x_1 + x_3
z = 1 + 2x_1 - x_3

Next pivot
Basis B = {2, 4, 5} with the basic feasible solution (0, 1, 0, 3, 1).
This vertex has two incident edges but only one increases the objective function.
The edge with increasing objective function is (t, 1 + t, 0, 3 - t, 1 - t).
These points are feasible for t ≤ 1, since x_2 = 1 + t ≥ 0, x_4 = 3 - t ≥ 0 and x_5 = 1 - t ≥ 0.
Therefore, x_1 enters the basis and x_5 leaves the basis.

New simplex tableau
x_1 = 1 + x_3 - x_5
x_2 = 2 - x_5
x_4 = 2 - x_3 + x_5
z = 3 + x_3 - 2x_5

Example: Last step

Simplex tableau
x_1 = 1 + x_3 - x_5
x_2 = 2 - x_5
x_4 = 2 - x_3 + x_5
z = 3 + x_3 - 2x_5

Next pivot
Basis B = {1, 2, 4} with the basic feasible solution (1, 2, 0, 2, 0).
This vertex has two incident edges but only one increases the objective function.
The edge with increasing objective function is (1 + t, 2, t, 2 - t, 0).
These points are feasible for t ≤ 2, since x_1 = 1 + t ≥ 0, x_2 = 2 ≥ 0 and x_4 = 2 - t ≥ 0.
Therefore, x_3 enters the basis and x_4 leaves the basis.

New simplex tableau
x_1 = 3 - x_4
x_2 = 2 - x_5
x_3 = 2 - x_4 + x_5
z = 5 - x_4 - x_5

Example: Optimal solution

Simplex tableau
x_1 = 3 - x_4
x_2 = 2 - x_5
x_3 = 2 - x_4 + x_5
z = 5 - x_4 - x_5

No other pivot
Basis B = {1, 2, 3} with the basic feasible solution (3, 2, 2, 0, 0).
This vertex has two incident edges but neither increases the objective function. We have an optimal solution.

Why is this an optimal solution?
Consider an arbitrary feasible solution y. Its objective value is z = 5 - y_4 - y_5. Since y_4, y_5 ≥ 0, it holds that z = 5 - y_4 - y_5 ≤ 5, which is the value attained by our solution.
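The walk through the tableaux above can be replayed mechanically. The following is a rough sketch, not from the lecture: a dictionary-based simplex iteration in pure Python with exact fractions. The function name `simplex` and the list-based representation are my own; it uses the largest-coefficient entering rule, so it may visit the vertices in a different order than the example, but it terminates in the same optimum z = 5 at (3, 2).

```python
from fractions import Fraction as F

def simplex(p, Q, r, z0, basis, nonbasis):
    """Maximize z = z0 + r.x_N over x_B = p + Q.x_N >= 0, x_N >= 0.
    Entries are Fractions; returns ('optimal', z, x) or ('unbounded', None, None)."""
    m, k = len(p), len(r)
    while True:
        cands = [j for j in range(k) if r[j] > 0]
        if not cands:                              # r <= 0: current solution is optimal
            x = {v: F(0) for v in nonbasis}
            x.update({basis[i]: p[i] for i in range(m)})
            return 'optimal', z0, x
        v = max(cands, key=lambda j: r[j])         # largest-coefficient rule
        rows = [i for i in range(m) if Q[i][v] < 0]
        if not rows:                               # whole column >= 0: unbounded
            return 'unbounded', None, None
        u = min(rows, key=lambda i: -p[i] / Q[i][v])   # ratio test
        # pivot: solve row u for the entering variable and substitute everywhere
        qu = Q[u][v]
        newrow = [-Q[u][j] / qu for j in range(k)]
        newrow[v] = F(1) / qu                      # coefficient of the leaving variable
        newp = -p[u] / qu
        for i in range(m):
            if i == u:
                continue
            co = Q[i][v]
            p[i] += co * newp
            Q[i] = [(Q[i][j] if j != v else F(0)) + co * newrow[j] for j in range(k)]
        co = r[v]
        z0 += co * newp
        r = [(r[j] if j != v else F(0)) + co * newrow[j] for j in range(k)]
        Q[u], p[u] = newrow, newp
        basis[u], nonbasis[v] = nonbasis[v], basis[u]

# the example above: x3 = 1 + x1 - x2, x4 = 3 - x1, x5 = 2 - x2, z = x1 + x2
status, z, x = simplex(
    p=[F(1), F(3), F(2)],
    Q=[[F(1), F(-1)], [F(-1), F(0)], [F(0), F(-1)]],
    r=[F(1), F(1)], z0=F(0), basis=[3, 4, 5], nonbasis=[1, 2])
```

With exact fractions there is no rounding, so the final dictionary matches the slide: z = 5, x_1 = 3, x_2 = 2, x_3 = 2, x_4 = x_5 = 0.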

Example: Unboundedness

Canonical form
Maximize x_1
subject to x_1 - x_2 ≤ 1
-x_1 + x_2 ≤ 2
x_1, x_2 ≥ 0

Equation form
Maximize x_1
subject to x_1 - x_2 + x_3 = 1
-x_1 + x_2 + x_4 = 2
x_1, x_2, x_3, x_4 ≥ 0

Initial simplex tableau
x_3 = 1 - x_1 + x_2
x_4 = 2 + x_1 - x_2
z = x_1

Example: Unboundedness

Simplex tableau
x_3 = 1 - x_1 + x_2
x_4 = 2 + x_1 - x_2
z = x_1

First pivot
Basis B = {3, 4} with the basic feasible solution (0, 0, 1, 2).
This vertex has two incident edges but only one increases the objective function.
The edge with increasing objective function is (t, 0, 1 - t, 2 + t).
These points are feasible for t ≤ 1, since x_3 = 1 - t ≥ 0 and x_4 = 2 + t ≥ 0.
Therefore, x_1 enters the basis and x_3 leaves the basis.

Simplex tableau
x_1 = 1 + x_2 - x_3
x_4 = 3 - x_3
z = 1 + x_2 - x_3

Example: Unboundedness

Simplex tableau
x_1 = 1 + x_2 - x_3
x_4 = 3 - x_3
z = 1 + x_2 - x_3

Unboundedness
Basis B = {1, 4} with the basic feasible solution (1, 0, 0, 3).
This vertex has two incident edges but only one increases the objective function.
The edge with increasing objective function is (1 + t, t, 0, 3).
Every point (1 + t, t, 0, 3) for t ≥ 0 is feasible, and the value of the objective function is 1 + t. Therefore, this problem is unbounded.

Example: Degeneracy

Canonical form
Maximize x_2
subject to -x_1 + x_2 ≤ 0
x_1 ≤ 2
x_1, x_2 ≥ 0
Convert to the equation form by adding slack variables x_3 and x_4.

Initial simplex tableau
x_3 = x_1 - x_2
x_4 = 2 - x_1
z = x_2
Basic feasible solution (0, 0, 0, 2) with the basis {3, 4}.

Same solution with a different basis
x_2 = x_1 - x_3
x_4 = 2 - x_1
z = x_1 - x_3
Basic feasible solution (0, 0, 0, 2) with the basis {2, 4}.

Optimal simplex tableau
x_1 = 2 - x_4
x_2 = 2 - x_3 - x_4
z = 2 - x_3 - x_4
Optimal solution (2, 2, 0, 0) with the basis {1, 2}.

Simplex tableau in general

Definition
A simplex tableau determined by a feasible basis B is a system of m + 1 linear equations in variables x_1, ..., x_n and z that has the same set of solutions as the system Ax = b, z = c^T x, and in matrix notation looks as follows:
x_B = p + Q x_N
z = z_0 + r^T x_N
where x_B is the vector of the basis variables, x_N is the vector of non-basis variables, p ∈ R^m, r ∈ R^{n-m}, Q is an m × (n - m) matrix, and z_0 ∈ R.

Example
x_3 = 5 + x_1 - x_2
x_4 = 2 - x_1
z = 3 + x_1 + 2x_2

x_B = (x_3, x_4)^T, x_N = (x_1, x_2)^T, p = (5, 2)^T,
Q = (  1  -1 )
    ( -1   0 )
z_0 = 3, r^T = (1, 2)

Simplex tableau in general

Definition
A simplex tableau determined by a feasible basis B is a system of m + 1 linear equations in variables x_1, ..., x_n and z that has the same set of solutions as the system Ax = b, z = c^T x, and in matrix notation looks as follows:
x_B = p + Q x_N
z = z_0 + r^T x_N
where x_B is the vector of the basis variables, x_N is the vector of non-basis variables, p ∈ R^m, r ∈ R^{n-m}, Q is an m × (n - m) matrix, and z_0 ∈ R.

Observation
For each basis B there exists exactly one simplex tableau, and it is given by
Q = -A_B^{-1} A_N
p = A_B^{-1} b
z_0 = c_B^T A_B^{-1} b
r = c_N - (c_B^T A_B^{-1} A_N)^T

1. Since A_B x_B + A_N x_N = b and A_B is a regular matrix, it follows that x_B = A_B^{-1} b - A_B^{-1} A_N x_N, where A_B^{-1} b = p and -A_B^{-1} A_N = Q. The objective function is c_B^T x_B + c_N^T x_N = c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) x_N, where c_B^T A_B^{-1} b = z_0 and c_N^T - c_B^T A_B^{-1} A_N = r^T.
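These formulas can be checked numerically. The helper below is my own illustration, not part of the slides: it computes the tableau of a given basis by Gauss-Jordan elimination with exact fractions (reducing A_B to the identity in the augmented matrix [A_B | A_N | b] leaves [I | A_B^{-1} A_N | A_B^{-1} b]). For the running example and the optimal basis {x_1, x_2, x_3} it reproduces the final tableau of the worked example.

```python
from fractions import Fraction as F

def tableau(A, b, c, B):
    """Return (Q, p, z0, r) of the simplex tableau for basis B (0-indexed columns):
    Q = -A_B^{-1} A_N, p = A_B^{-1} b, z0 = c_B^T A_B^{-1} b,
    r = c_N - (c_B^T A_B^{-1} A_N)^T, computed by Gauss-Jordan elimination."""
    m, n = len(A), len(A[0])
    N = [j for j in range(n) if j not in B]
    # augmented matrix [A_B | A_N | b]; reducing A_B to I yields [I | A_B^{-1}A_N | A_B^{-1}b]
    M = [[F(A[i][j]) for j in B + N] + [F(b[i])] for i in range(m)]
    for col in range(m):
        piv = next(i for i in range(col, m) if M[i][col] != 0)   # A_B is regular
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for i in range(m):
            if i != col and M[i][col] != 0:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[col])]
    p = [M[i][-1] for i in range(m)]
    Q = [[-M[i][m + j] for j in range(len(N))] for i in range(m)]
    z0 = sum(F(c[B[i]]) * p[i] for i in range(m))
    r = [F(c[N[j]]) - sum(F(c[B[i]]) * M[i][m + j] for i in range(m))
         for j in range(len(N))]
    return Q, p, z0, r

# running example in equation form; optimal basis {x1, x2, x3} = columns 0, 1, 2
A = [[-1, 1, 1, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]]
Q, p, z0, r = tableau(A, b=[1, 3, 2], c=[1, 1, 0, 0, 0], B=[0, 1, 2])
```

The result is p = (3, 2, 2), z_0 = 5 and r = (-1, -1), matching the tableau x_1 = 3 - x_4, x_2 = 2 - x_5, x_3 = 2 - x_4 + x_5, z = 5 - x_4 - x_5.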

Properties of a simplex tableau

Simplex tableau in general
x_B = p + Q x_N
z = z_0 + r^T x_N

Observation
A basis B is feasible if and only if p ≥ 0.

Observation
If r ≤ 0, then the solution corresponding to the basis B is optimal.

Idea of the pivot step
Choose v ∈ N. Which is the last feasible point of the half-line x(t) for t ≥ 0, where
x_v(t) = t
x_{N\{v}}(t) = 0
x_B(t) = p + Q_{·,v} t ?

Observation
If there exists a non-basis variable x_v such that r_v > 0 and Q_{·,v} ≥ 0, then the problem is unbounded.

Pivot step

Simplex tableau in general
x_B = p + Q x_N
z = z_0 + r^T x_N

Find a pivot
If r ≤ 0, then we have an optimal solution.
Otherwise, choose an arbitrary entering variable x_v such that r_v > 0.
If Q_{·,v} ≥ 0, then the problem is unbounded.
Otherwise, find a leaving variable x_u which limits the increment of the entering variable most strictly, i.e. Q_{u,v} < 0 and -p_u / Q_{u,v} is minimal.
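The case analysis above fits in a few lines of code. This is a sketch of my own (the name `choose_pivot` is hypothetical); it takes the tableau data p, Q, r and reports which of the three cases applies, breaking ties by the smallest index as in Bland's rule.

```python
def choose_pivot(p, Q, r):
    """Pivot selection as described above. Returns ('optimal',) when r <= 0,
    ('unbounded', v) when some improving column satisfies Q[.,v] >= 0,
    and ('pivot', v, u) otherwise, with u chosen by the ratio test."""
    improving = [v for v in range(len(r)) if r[v] > 0]
    if not improving:
        return ('optimal',)
    v = improving[0]                                   # smallest index, Bland-style
    rows = [u for u in range(len(p)) if Q[u][v] < 0]
    if not rows:
        return ('unbounded', v)                        # x_v may grow without limit
    u = min(rows, key=lambda i: -p[i] / Q[i][v])       # most strict limit on x_v
    return ('pivot', v, u)
```

On the initial tableau of the worked example (p = (1, 3, 2), r = (1, 1)) it picks x_1 entering and the row of x_4 leaving; on the second tableau of the unboundedness example it detects the non-negative column and reports unboundedness.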

Pivot rules

Largest coefficient: Choose an improving variable with the largest coefficient r_v.
Largest increase: Choose an improving variable that leads to the largest absolute improvement in z, i.e. c^T (x_new - x_old) is maximal.
Steepest edge: Choose an improving variable whose entering into the basis moves the current basic feasible solution in a direction closest to the direction of the vector c, i.e. maximize
c^T (x_new - x_old) / ||x_new - x_old||.
Bland's rule: Choose an improving variable with the smallest index, and if there are several possibilities for the leaving variable, also take the one with the smallest index.
Random edge: Select the entering variable uniformly at random among all improving variables.

Pivot step

Simplex tableau in general
x_B = p + Q x_N
z = z_0 + r^T x_N

Gaussian elimination
New basis variables are (B \ {u}) ∪ {v} and new non-basis variables are (N \ {v}) ∪ {u}.
The row
x_u = p_u + Q_{u,v} x_v + Σ_{j ∈ N\{v}} Q_{u,j} x_j
is replaced by the row
x_v = -p_u/Q_{u,v} + (1/Q_{u,v}) x_u - Σ_{j ∈ N\{v}} (Q_{u,j}/Q_{u,v}) x_j.
The rows
x_i = p_i + Q_{i,v} x_v + Σ_{j ∈ N\{v}} Q_{i,j} x_j for i ∈ B \ {u}
are replaced by the rows
x_i = (p_i - Q_{i,v} p_u/Q_{u,v}) + (Q_{i,v}/Q_{u,v}) x_u + Σ_{j ∈ N\{v}} (Q_{i,j} - Q_{i,v} Q_{u,j}/Q_{u,v}) x_j.
The objective function
z = z_0 + r_v x_v + Σ_{j ∈ N\{v}} r_j x_j
is replaced by the objective function
z = (z_0 - r_v p_u/Q_{u,v}) + (r_v/Q_{u,v}) x_u + Σ_{j ∈ N\{v}} (r_j - r_v Q_{u,j}/Q_{u,v}) x_j.

Observation
A pivot step does not change the set of all feasible solutions.

Bland's rule

Simplex tableau in general
x_B = p + Q x_N
z = z_0 + r^T x_N

Observation
Let B be a basis with the corresponding solution x and let B' be a new basis with the corresponding solution x' after a single pivot step. Then x' = x or c^T x < c^T x'.

Observation
If the simplex method loops endlessly, then all bases occurring in the loop correspond to the same vertex.

Theorem
The simplex method with Bland's pivot rule is always finite.

1. Consider the half-line x(t) provided by the pivot step and let t* = max { t ≥ 0; x(t) ≥ 0 }. Clearly, x' = x(t*). If t* = 0, then x' = x(0) = x. If t* > 0, then c^T x' = c^T x(t*) = z_0 + r_v t* > z_0 = c^T x since r_v > 0.
2. Suppose that the simplex method iterates over bases B^(1), ..., B^(k), B^(k+1) = B^(1) with the corresponding solutions x^(1), ..., x^(k), x^(k+1) = x^(1). By the previous observation, c^T x^(1) ≤ c^T x^(2) ≤ ... ≤ c^T x^(k) ≤ c^T x^(k+1) = c^T x^(1). Hence, c^T x^(1) = ... = c^T x^(k+1), and the previous observation implies that x^(1) = ... = x^(k+1).
3. For the sake of contradiction, assume that the simplex method with Bland's pivot rule loops endlessly. Consider all bases in the loop. Let F be the set of all entering variables and let x_v ∈ F be the variable with the largest index. Let B be a basis in the loop just before x_v enters. Note that the variables of B \ F and N \ F are always basis and non-basis variables during the loop, respectively. Consider the following auxiliary problem (*):
Maximize c^T x
subject to Ax = b
x_{F\{v}} ≥ 0
x_v ≤ 0
x_{N\F} = 0
x_{B\F} ∈ R^{B\F}
We prove that (*) has an optimal solution and is also unbounded, which is a contradiction.

r_v > 0 since x_v is the entering variable.
r_i ≤ 0 for every i ∈ (F ∩ N) \ {v}, since by Bland's rule x_v is the improving variable with the smallest index, while v is the largest index in F.
Every solution x satisfying (*) fulfils
c^T x = z_0 + r^T x_N = z_0 + r_v x_v + r_{(F∩N)\{v}}^T x_{(F∩N)\{v}} + r_{N\F}^T x_{N\F} ≤ z_0.
Hence, the solution corresponding to the basis B is an optimal solution of (*).
Now we prove that (*) is unbounded. Let B' be a basis in the loop just before x_v leaves and let Q', p' and r' be the parameters of the simplex tableau corresponding to B'. Let x_u be the entering variable; hence r'_u > 0.
Q'_{v,u} < 0 since x_v is the leaving variable.
From Bland's rule it follows that Q'_{i,u} ≥ 0 for every i ∈ (F ∩ B') \ {v}.
p'_{F∩B'} = 0 since degenerate basis variables are zero.
Consider the half-line x(t) for t ≥ 0 where x_u(t) = t, x_{N'\{u}}(t) = 0 and x_{B'}(t) = p' + Q'_{·,u} t. Then
x_{(F∩N')\{u}}(t) = 0 since non-basis variables remain zero,
x_i(t) = p'_i + Q'_{i,u} t ≥ 0 for every i ∈ (F ∩ B') \ {v}, hence x_{F\{v}}(t) ≥ 0,
x_v(t) = p'_v + Q'_{v,u} t ≤ 0,
x_{N'\F}(t) = 0 since non-basis variables remain zero.
Hence, x(t) satisfies (*) for every t ≥ 0. Since r'_u > 0, the objective value c^T x(t) = z'_0 + r'_u t tends to infinity as t → ∞. Hence, (*) is unbounded.

Initial feasible basis

Linear programming problem in the equation form
Maximize c^T x subject to Ax = b and x ≥ 0. Assume that b ≥ 0.

Auxiliary problem
We add auxiliary variables y ∈ R^m to obtain the auxiliary problem
maximize -y_1 - ... - y_m subject to Ax + Iy = b and x, y ≥ 0.

Observation
An initial feasible basis of the auxiliary problem is B = {y_1, ..., y_m} with the initial tableau
y = b - Ax
z = -1^T b + (1^T A) x

Observation
The following statements are equivalent.
1. The original problem max { c^T x; Ax = b, x ≥ 0 } has a feasible solution.
2. The optimal value of the objective function of the auxiliary problem is 0.
3. The auxiliary problem has a feasible solution satisfying y = 0.
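Constructing that initial tableau requires no elimination at all, since the basis matrix of the auxiliary variables is the identity. The helper below is my own sketch (the name `auxiliary_tableau` is hypothetical); it assumes b ≥ 0, as on the slide.

```python
def auxiliary_tableau(A, b):
    """Initial dictionary of the auxiliary problem max -y_1 - ... - y_m
    subject to Ax + Iy = b, x, y >= 0 (assuming b >= 0):
    y = b - Ax and z = -1^T b + (1^T A) x."""
    m, n = len(A), len(A[0])
    p = list(b)                                        # y = b at x = 0, feasible
    Q = [[-A[i][j] for j in range(n)] for i in range(m)]
    z0 = -sum(b)                                       # z = -1^T b at x = 0
    r = [sum(A[i][j] for i in range(m)) for j in range(n)]   # column sums 1^T A
    return p, Q, z0, r

# equation form of the first worked example
p, Q, z0, r = auxiliary_tableau(
    A=[[-1, 1, 1, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]], b=[1, 3, 2])
```

Here z_0 = -6, and driving z up to 0 (i.e. y down to 0) yields a feasible basis of the original problem.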

1. We multiply every equation with a negative right-hand side by -1.

Outline
1. Linear programming
2. Linear, affine and convex sets
3. Simplex method
4. Duality of linear programming
5. Integer linear programming
6. Vertex Cover
7. Matching

Duality of linear programming: Example

Find an upper bound for the following problem
Maximize 2x_1 + 3x_2
subject to 4x_1 + 8x_2 ≤ 12
2x_1 + x_2 ≤ 3
3x_1 + 2x_2 ≤ 4
x_1, x_2 ≥ 0

Simple estimates
2x_1 + 3x_2 ≤ 4x_1 + 8x_2 ≤ 12
2x_1 + 3x_2 ≤ (1/2)(4x_1 + 8x_2) ≤ 6
2x_1 + 3x_2 = (1/3)(4x_1 + 8x_2 + 2x_1 + x_2) ≤ 5

What is the best combination of conditions?
Every non-negative linear combination of inequalities which gives an inequality d_1 x_1 + d_2 x_2 ≤ h with d_1 ≥ 2 and d_2 ≥ 3 provides the upper bound 2x_1 + 3x_2 ≤ d_1 x_1 + d_2 x_2 ≤ h.

1. The first condition.
2. A half of the first condition.
3. A third of the sum of the first and the second conditions.

Duality of linear programming: Example

Consider a non-negative combination y of the inequalities
Maximize 2x_1 + 3x_2
subject to 4x_1 + 8x_2 ≤ 12 / · y_1
2x_1 + x_2 ≤ 3 / · y_2
3x_1 + 2x_2 ≤ 4 / · y_3
x_1, x_2 ≥ 0

Observations
Every feasible solution x and non-negative combination y satisfies
(4y_1 + 2y_2 + 3y_3) x_1 + (8y_1 + y_2 + 2y_3) x_2 ≤ 12y_1 + 3y_2 + 4y_3.
If 4y_1 + 2y_2 + 3y_3 ≥ 2 and 8y_1 + y_2 + 2y_3 ≥ 3, then 12y_1 + 3y_2 + 4y_3 is an upper bound on the objective function.

Dual program
Minimize 12y_1 + 3y_2 + 4y_3
subject to 4y_1 + 2y_2 + 3y_3 ≥ 2
8y_1 + y_2 + 2y_3 ≥ 3
y_1, y_2, y_3 ≥ 0

1. The primal optimal solution is x^T = (1/2, 5/4) and the dual optimal solution is y^T = (5/16, 0, 1/4), both with the same objective value 19/4.

Duality of linear programming: General

Primal linear program
Maximize c^T x subject to Ax ≤ b and x ≥ 0

Dual linear program
Minimize b^T y subject to A^T y ≥ c and y ≥ 0

Weak duality theorem
For every primal feasible solution x and every dual feasible solution y it holds that c^T x ≤ b^T y.

Corollary
If one program is unbounded, then the other one is infeasible.

Duality theorem
Exactly one of the following possibilities occurs:
1. Neither the primal nor the dual has a feasible solution.
2. The primal is unbounded and the dual is infeasible.
3. The primal is infeasible and the dual is unbounded.
4. There are feasible solutions x and y such that c^T x = b^T y.

Dualization

Every linear programming problem has its dual:
Primal program: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual program: Minimize b^T y subject to A^T y ≥ c and y ≥ 0

A dual of a dual problem is the (original) primal problem:
Dual program: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
Equivalent formulation: -Maximize -b^T y subject to -A^T y ≤ -c and y ≥ 0
Dual of the dual program: -Minimize -c^T x subject to -Ax ≥ -b and x ≥ 0
Equivalent formulation: Maximize c^T x subject to Ax ≤ b and x ≥ 0,
which is the original primal program.
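The dual-of-the-dual argument can be made concrete: if a program max{c^T x : Ax ≤ b, x ≥ 0} is represented by the triple (c, A, b), its dual rewritten back into the same "maximize" shape is the triple (-b, -A^T, -c), and applying the map twice returns the original data. The function name `dualize` and this triple representation are my own illustration, not the lecture's notation.

```python
def dualize(c, A, b):
    """Dual of max{c^T x : Ax <= b, x >= 0}, i.e. min{b^T y : A^T y >= c, y >= 0},
    rewritten back into the same 'maximize' shape as the triple (-b, -A^T, -c)."""
    m, n = len(A), len(A[0])
    neg_At = [[-A[i][j] for i in range(m)] for j in range(n)]   # -A^T
    return [-v for v in b], neg_At, [-v for v in c]

# dualizing twice returns the original program, as argued above
c, A, b = [2, 3], [[4, 8], [2, 1], [3, 2]], [12, 3, 4]
c2, A2, b2 = dualize(*dualize(c, A, b))
```

Running this on the duality example of the previous slides gives back exactly the original (c, A, b).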

Dualization: General rules

Primal linear program              Dual linear program
Variables            x_1, ..., x_n       y_1, ..., y_m
Matrix               A                   A^T
Right-hand side      b                   c
Objective function   max c^T x           min b^T y
Constraints          i-th constraint is ≤    y_i ≥ 0
                     i-th constraint is ≥    y_i ≤ 0
                     i-th constraint is =    y_i ∈ R
                     x_j ≥ 0                 j-th constraint is ≥
                     x_j ≤ 0                 j-th constraint is ≤
                     x_j ∈ R                 j-th constraint is =

Linear programming: Feasibility versus optimality

Feasibility versus optimality
Finding a feasible solution of a linear program is computationally as difficult as finding an optimal solution.

Using duality
The optimal solutions of the linear programs
Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
are exactly the feasible solutions of the system
Ax ≤ b
A^T y ≥ c
c^T x ≥ b^T y
x, y ≥ 0

Complementary slackness

Theorem
Feasible solutions x and y of the linear programs
Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
are optimal if and only if
x_i = 0 or A_{·,i}^T y = c_i for every i = 1, ..., n, and
y_j = 0 or A_{j,·} x = b_j for every j = 1, ..., m.

Proof
c^T x = Σ_{i=1}^n c_i x_i ≤ Σ_{i=1}^n (y^T A_{·,i}) x_i = y^T A x = Σ_{j=1}^m y_j (A_{j,·} x) ≤ Σ_{j=1}^m y_j b_j = b^T y
Both inequalities hold with equality exactly when the stated conditions are satisfied, and by the duality theorem, the equality c^T x = b^T y characterizes optimality.
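The theorem gives a cheap certificate of optimality: given a primal-dual pair of candidate solutions, one only needs to verify feasibility and the slackness conditions. A sketch of such a checker (my own helper, the name `optimal_by_slackness` is hypothetical), using exact fractions:

```python
from fractions import Fraction as F

def optimal_by_slackness(c, A, b, x, y):
    """Check feasibility of x and y plus the complementary slackness conditions:
    x_j = 0 or (A^T y)_j = c_j, and y_i = 0 or (Ax)_i = b_i."""
    m, n = len(A), len(A[0])
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    ATy = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    feasible = (all(Ax[i] <= b[i] for i in range(m)) and all(v >= 0 for v in x)
                and all(ATy[j] >= c[j] for j in range(n)) and all(v >= 0 for v in y))
    slack = (all(x[j] == 0 or ATy[j] == c[j] for j in range(n))
             and all(y[i] == 0 or Ax[i] == b[i] for i in range(m)))
    return feasible and slack

# the duality example: x = (1/2, 5/4) and y = (5/16, 0, 1/4) certify each other
ok = optimal_by_slackness([2, 3], [[4, 8], [2, 1], [3, 2]], [12, 3, 4],
                          [F(1, 2), F(5, 4)], [F(5, 16), F(0), F(1, 4)])
```

For the duality example of the earlier slides the check succeeds: both x_1 and x_2 are positive and their dual constraints are tight, while the slack primal constraint 2x_1 + x_2 ≤ 3 is matched by y_2 = 0.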

Proof of duality using the simplex method with Bland's rule

Notation
Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Primal with slack variables: Maximize c̄^T x̄ subject to Ā x̄ = b and x̄ ≥ 0
Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0

Simplex tableau
x̄_B = p + Q x̄_N
z = z_0 + r^T x̄_N
The simplex tableau is unique for every basis B:
Q = -Ā_B^{-1} Ā_N
p = Ā_B^{-1} b
z_0 = c̄_B^T Ā_B^{-1} b
r = c̄_N - (c̄_B^T Ā_B^{-1} Ā_N)^T

Lemma
If B is a basis with an optimal solution x̄ of the primal problem, then y = (c̄_B^T Ā_B^{-1})^T is an optimal solution of the dual problem and c^T x = b^T y.


More information

Lecture #21. c T x Ax b. maximize subject to

Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

More information

Week 3 Linear programming duality

Week 3 Linear programming duality Week 3 Linear programming duality This week we cover the fascinating topic of linear programming duality. We will learn that every minimization program has associated a maximization program that has the

More information

MAT-INF4110/MAT-INF9110 Mathematical optimization

MAT-INF4110/MAT-INF9110 Mathematical optimization MAT-INF4110/MAT-INF9110 Mathematical optimization Geir Dahl August 20, 2013 Convexity Part IV Chapter 4 Representation of convex sets different representations of convex sets, boundary polyhedra and polytopes:

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

Linear Programming Inverse Projection Theory Chapter 3

Linear Programming Inverse Projection Theory Chapter 3 1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

Linear Programming. Jie Wang. University of Massachusetts Lowell Department of Computer Science. J. Wang (UMass Lowell) Linear Programming 1 / 47

Linear Programming. Jie Wang. University of Massachusetts Lowell Department of Computer Science. J. Wang (UMass Lowell) Linear Programming 1 / 47 Linear Programming Jie Wang University of Massachusetts Lowell Department of Computer Science J. Wang (UMass Lowell) Linear Programming 1 / 47 Linear function: f (x 1, x 2,..., x n ) = n a i x i, i=1 where

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

More information

6. Linear Programming

6. Linear Programming Linear Programming 6-1 6. Linear Programming Linear Programming LP reduction Duality Max-flow min-cut, Zero-sum game Integer Programming and LP relaxation Maximum Bipartite Matching, Minimum weight vertex

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

Lift-and-Project Inequalities

Lift-and-Project Inequalities Lift-and-Project Inequalities Q. Louveaux Abstract The lift-and-project technique is a systematic way to generate valid inequalities for a mixed binary program. The technique is interesting both on the

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

7. Lecture notes on the ellipsoid algorithm

7. Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Michel X. Goemans 18.433: Combinatorial Optimization 7. Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm proposed for linear

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 12 Dr. Ted Ralphs ISE 418 Lecture 12 1 Reading for This Lecture Nemhauser and Wolsey Sections II.2.1 Wolsey Chapter 9 ISE 418 Lecture 12 2 Generating Stronger Valid

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

Duality in Linear Programming

Duality in Linear Programming Duality in Linear Programming Gary D. Knott Civilized Software Inc. 1219 Heritage Park Circle Silver Spring MD 296 phone:31-962-3711 email:knott@civilized.com URL:www.civilized.com May 1, 213.1 Duality

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Homework Assignment 4 Solutions

Homework Assignment 4 Solutions MTAT.03.86: Advanced Methods in Algorithms Homework Assignment 4 Solutions University of Tartu 1 Probabilistic algorithm Let S = {x 1, x,, x n } be a set of binary variables of size n 1, x i {0, 1}. Consider

More information

Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point

Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point Gérard Cornuéjols 1 and Yanjun Li 2 1 Tepper School of Business, Carnegie Mellon

More information

Discrete Optimization

Discrete Optimization Prof. Friedrich Eisenbrand Martin Niemeier Due Date: April 15, 2010 Discussions: March 25, April 01 Discrete Optimization Spring 2010 s 3 You can hand in written solutions for up to two of the exercises

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

Lecture notes on the ellipsoid algorithm

Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Handout 1 18.433: Combinatorial Optimization May 14th, 007 Michel X. Goemans Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Lecture 9 Tuesday, 4/20/10. Linear Programming

Lecture 9 Tuesday, 4/20/10. Linear Programming UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2010 Lecture 9 Tuesday, 4/20/10 Linear Programming 1 Overview Motivation & Basics Standard & Slack Forms Formulating

More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

CSCI5654 (Linear Programming, Fall 2013) Lecture-8. Lecture 8 Slide# 1

CSCI5654 (Linear Programming, Fall 2013) Lecture-8. Lecture 8 Slide# 1 CSCI5654 (Linear Programming, Fall 2013) Lecture-8 Lecture 8 Slide# 1 Today s Lecture 1. Recap of dual variables and strong duality. 2. Complementary Slackness Theorem. 3. Interpretation of dual variables.

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

The Avis-Kalunzy Algorithm is designed to find a basic feasible solution (BFS) of a given set of constraints. Its input: A R m n and b R m such that

The Avis-Kalunzy Algorithm is designed to find a basic feasible solution (BFS) of a given set of constraints. Its input: A R m n and b R m such that Lecture 4 Avis-Kaluzny and the Simplex Method Last time, we discussed some applications of Linear Programming, such as Max-Flow, Matching, and Vertex-Cover. The broad range of applications to Linear Programming

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

Resource Constrained Project Scheduling Linear and Integer Programming (1)

Resource Constrained Project Scheduling Linear and Integer Programming (1) DM204, 2010 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 Resource Constrained Project Linear and Integer Programming (1) Marco Chiarandini Department of Mathematics & Computer Science University of Southern

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence Linear Algebra Review: Linear Independence IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 21st March 2005 A finite collection of vectors x 1,..., x k R n

More information

CS Algorithms and Complexity

CS Algorithms and Complexity CS 50 - Algorithms and Complexity Linear Programming, the Simplex Method, and Hard Problems Sean Anderson 2/15/18 Portland State University Table of contents 1. The Simplex Method 2. The Graph Problem

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502) Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 1 Homework Problems Exercise

More information

AM 121: Intro to Optimization Models and Methods

AM 121: Intro to Optimization Models and Methods AM 121: Intro to Optimization Models and Methods Fall 2017 Lecture 2: Intro to LP, Linear algebra review. Yiling Chen SEAS Lecture 2: Lesson Plan What is an LP? Graphical and algebraic correspondence Problems

More information

MATHEMATICAL PROGRAMMING I

MATHEMATICAL PROGRAMMING I MATHEMATICAL PROGRAMMING I Books There is no single course text, but there are many useful books, some more mathematical, others written at a more applied level. A selection is as follows: Bazaraa, Jarvis

More information

Understanding the Simplex algorithm. Standard Optimization Problems.

Understanding the Simplex algorithm. Standard Optimization Problems. Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form

More information

Linear Programming Duality

Linear Programming Duality Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

A combinatorial algorithm minimizing submodular functions in strongly polynomial time

A combinatorial algorithm minimizing submodular functions in strongly polynomial time A combinatorial algorithm minimizing submodular functions in strongly polynomial time Alexander Schrijver 1 Abstract We give a strongly polynomial-time algorithm minimizing a submodular function f given

More information

CSC Design and Analysis of Algorithms. LP Shader Electronics Example

CSC Design and Analysis of Algorithms. LP Shader Electronics Example CSC 80- Design and Analysis of Algorithms Lecture (LP) LP Shader Electronics Example The Shader Electronics Company produces two products:.eclipse, a portable touchscreen digital player; it takes hours

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I LN/MATH2901/CKC/MS/2008-09 THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS Operations Research I Definition (Linear Programming) A linear programming (LP) problem is characterized by linear functions

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b

4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b 4.5 Simplex method LP in standard form: min z = c T x s.t. Ax = b x 0 George Dantzig (1914-2005) Examine a sequence of basic feasible solutions with non increasing objective function values until an optimal

More information

Math Models of OR: Some Definitions

Math Models of OR: Some Definitions Math Models of OR: Some Definitions John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA September 2018 Mitchell Some Definitions 1 / 20 Active constraints Outline 1 Active constraints

More information

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs

More information