Optimization methods NOPT048


Optimization methods NOPT048
Jirka Fink, https://ktiml.mff.cuni.cz/~fink/
Department of Theoretical Computer Science and Mathematical Logic
Faculty of Mathematics and Physics, Charles University in Prague
Summer semester 2016/17, last change on May 11, 2017
License: Creative Commons BY-NC-SA 4.0
Jirka Fink Optimization methods 1

Plan of the lecture
Linear and integer optimization
Convex sets and Minkowski-Weyl theorem
Simplex methods
Duality of linear programming
Ellipsoid method
Unimodularity
Minimal weight maximal matching
Matroid
Cut and bound method
Jirka Fink Optimization methods 2

General information
E-mail: fink@ktiml.mff.cuni.cz
Homepage: https://ktiml.mff.cuni.cz/~fink/
Consultations: individual schedule
Examination: tutorial conditions (tests, theoretical homework, practical homework), pass the exam
Jirka Fink Optimization methods 3

Literature
A. Schrijver: Theory of Linear and Integer Programming, John Wiley, 1986
W. J. Cook, W. H. Cunningham, W. R. Pulleyblank, A. Schrijver: Combinatorial Optimization, John Wiley, 1997
J. Matoušek, B. Gärtner: Understanding and Using Linear Programming, Springer, 2006
J. Matoušek: Introduction to Discrete Geometry. ITI Series 2003-150, MFF UK, 2003
Jirka Fink Optimization methods 4

Outline 1 Linear programming 2 Linear, affine and convex sets 3 Simplex method 4 Duality of linear programming 5 Integer linear programming 6 Vertex Cover 7 Matching Jirka Fink Optimization methods 5

Example of linear programming: Optimized diet
Express the following problem using linear programming: find the cheapest vegetable salad made of carrots, white cabbage and cucumbers that contains the required amounts of vitamins A and C and of dietary fiber.

Food                  Carrot  White cabbage  Cucumber  Required per meal
Vitamin A [mg/kg]     35      0.5            0.5       0.5 mg
Vitamin C [mg/kg]     60      300            10        15 mg
Dietary fiber [g/kg]  30      20             10        4 g
Price [EUR/kg]        0.75    0.5            0.15

Formulation using linear programming (x_1 = carrot, x_2 = white cabbage, x_3 = cucumber, in kg):
Minimize   0.75x_1 + 0.5x_2 + 0.15x_3        (cost)
subject to 35x_1 + 0.5x_2 + 0.5x_3 ≥ 0.5     (vitamin A)
           60x_1 + 300x_2 + 10x_3 ≥ 15       (vitamin C)
           30x_1 + 20x_2 + 10x_3 ≥ 4         (dietary fiber)
           x_1, x_2, x_3 ≥ 0
Jirka Fink Optimization methods 6
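The diet LP above can be written down directly as data. A minimal sketch (not part of the lecture; the pure-cucumber candidate salad is a hypothetical choice used only for illustration, it need not be optimal):

```python
# The diet LP data as plain Python lists, plus a helper that checks
# feasibility of a candidate salad.

c = [0.75, 0.5, 0.15]          # price per kg: carrot, cabbage, cucumber
A = [[35, 0.5, 0.5],           # vitamin A per kg
     [60, 300, 10],            # vitamin C per kg
     [30, 20, 10]]             # dietary fiber per kg
b = [0.5, 15, 4]               # required amounts per meal

def is_feasible(x, eps=1e-9):
    """Check A x >= b and x >= 0 componentwise."""
    if any(xi < -eps for xi in x):
        return False
    return all(sum(row[j] * x[j] for j in range(3)) >= bi - eps
               for row, bi in zip(A, b))

def cost(x):
    return sum(ci * xi for ci, xi in zip(c, x))

x = (0, 0, 1.5)                # a pure-cucumber salad: 1.5 kg of cucumber
print(is_feasible(x), round(cost(x), 4))   # True 0.225
```

Such a feasibility check is useful for validating any solution an LP solver returns against the original data.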

Matrix notation of the linear programming problem
Formulation using linear programming:
Minimize   0.75x_1 + 0.5x_2 + 0.15x_3
subject to 35x_1 + 0.5x_2 + 0.5x_3 ≥ 0.5
           60x_1 + 300x_2 + 10x_3 ≥ 15
           30x_1 + 20x_2 + 10x_3 ≥ 4
           x_1, x_2, x_3 ≥ 0

Matrix notation:
Minimize   (0.75, 0.5, 0.15) (x_1, x_2, x_3)^T
subject to ( 35  0.5  0.5 ) (x_1)   (0.5)
           ( 60  300   10 ) (x_2) ≥ ( 15)
           ( 30   20   10 ) (x_3)   (  4)
           x_1, x_2, x_3 ≥ 0
Jirka Fink Optimization methods 7

Notation: Vector and matrix
Matrix: A matrix of type m × n is a rectangular array of m rows and n columns of real numbers. Matrices are written as A, B, C, etc.
Vector: A vector is an n-tuple of real numbers. Vectors are written as c, x, y, etc. Usually, vectors are column matrices of type n × 1.
Scalar: A scalar is a real number. Scalars are written as a, b, c, etc.
Special vectors: 0 and 1 are the vectors of all zeros and all ones, respectively.
Transpose: The transpose of a matrix A is the matrix A^T created by reflecting A over its main diagonal. The transpose of a column vector x is the row vector x^T.
Jirka Fink Optimization methods 8

Notation: Matrix product
Elements of a vector and a matrix: The i-th element of a vector x is denoted by x_i. The (i,j)-th element of a matrix A is denoted by A_{i,j}. The i-th row of a matrix A is denoted by A_{i,*}. The j-th column of a matrix A is denoted by A_{*,j}.
Dot product of vectors: The dot product (also called inner product or scalar product) of vectors x, y ∈ R^n is the scalar x^T y = Σ_{i=1}^n x_i y_i.
Product of a matrix and a vector: The product Ax of a matrix A ∈ R^{m×n} and a vector x ∈ R^n is the vector y ∈ R^m with y_i = A_{i,*} x for all i = 1, ..., m.
Product of two matrices: The product AB of a matrix A ∈ R^{m×n} and a matrix B ∈ R^{n×k} is the matrix C ∈ R^{m×k} with C_{*,j} = A B_{*,j} for all j = 1, ..., k.
Jirka Fink Optimization methods 9
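The three definitions translate almost verbatim into code. A small sketch in pure Python (lists of rows; illustration only, not part of the lecture notes):

```python
# The dot product, matrix-vector product and matrix product exactly as
# defined above.

def dot(x, y):
    """x^T y = sum_i x_i * y_i"""
    return sum(xi * yi for xi, yi in zip(x, y))

def matvec(A, x):
    """(A x)_i = A_{i,*} x"""
    return [dot(row, x) for row in A]

def matmul(A, B):
    """C_{*,j} = A B_{*,j}; here assembled entry by entry from rows and columns."""
    cols = [[row[j] for row in B] for j in range(len(B[0]))]
    return [[dot(row, col) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matvec(A, [1, 1]))   # [3, 7]
print(matmul(A, B))        # [[19, 22], [43, 50]]
```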

Notation: System of linear equations and inequalities
Equality and inequality of two vectors: For vectors x, y ∈ R^n we write x = y if x_i = y_i for every i = 1, ..., n, and x ≤ y if x_i ≤ y_i for every i = 1, ..., n.
System of linear equations: Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the formula Ax = b denotes a system of m linear equations in a vector x of n real variables.
System of linear inequalities: Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the formula Ax ≤ b denotes a system of m linear inequalities in a vector x of n real variables.
Example: the same system of linear inequalities in two different notations:
2x_1 + 1x_2 + 1x_3 ≤ 14          ( 2 1 1 ) (x_1)   (14)
2x_1 + 5x_2 + 5x_3 ≤ 30          ( 2 5 5 ) (x_2) ≤ (30)
                                           (x_3)
Jirka Fink Optimization methods 10

Optimization
Mathematical optimization is the selection of a best element (with regard to some criterion) from a set of available alternatives.
Examples:
Minimize x^2 + y^2 over (x, y) ∈ R^2
Maximal matching in a graph
Minimal spanning tree
Shortest path between two given vertices
Optimization problem: Given a set of solutions M and an objective function f: M → R, the optimization problem is to find a solution x ∈ M with the maximal (or minimal) objective value f(x) among all solutions of M.
Duality between minimization and maximization: If min_{x∈M} f(x) exists, then max_{x∈M} (−f(x)) also exists and min_{x∈M} f(x) = −max_{x∈M} (−f(x)).
Jirka Fink Optimization methods 11

Linear Programming
Linear programming problem: A linear program is the problem of maximizing (or minimizing) a given linear function over the set of all vectors that satisfy a given system of linear equations and inequalities.
Equation form:  min c^T x subject to Ax = b, x ≥ 0
Canonical form: max c^T x subject to Ax ≤ b
where c ∈ R^n, b ∈ R^m, A ∈ R^{m×n} and x ∈ R^n.
Conversion from the equation form to the canonical form:
max −c^T x subject to Ax ≤ b, −Ax ≤ −b, −x ≤ 0
Conversion from the canonical form to the equation form (substitute x = x' − x'' and add slack variables x̄):
min −c^T x' + c^T x'' subject to Ax' − Ax'' + I x̄ = b, x', x'', x̄ ≥ 0
Jirka Fink Optimization methods 12
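The canonical-to-equation conversion can be carried out mechanically on the data (c, A, b). A sketch (illustration only; the example numbers are hypothetical):

```python
# Convert canonical-form data "max c^T x s.t. Ax <= b" into equation-form data
# "min c_eq^T z s.t. A_eq z = b, z >= 0" with z = (x', x'', xbar), where
# x = x' - x'' and xbar are the slack variables.

def canonical_to_equation(c, A, b):
    m = len(A)
    c_eq = [-ci for ci in c] + c + [0.0] * m          # min -c^T x' + c^T x''
    A_eq = [row + [-a for a in row] + [1.0 if i == j else 0.0 for j in range(m)]
            for i, row in enumerate(A)]               # [A | -A | I]
    return c_eq, A_eq, b

# Example: two inequalities in two variables.
c = [1.0, 1.0]
A = [[-1.0, 1.0], [4.0, -1.0]]
b = [1.0, 10.0]
c_eq, A_eq, b_eq = canonical_to_equation(c, A, b)

# The canonical point x = (3, 2) maps to z = (x', x'', slacks) with
# slacks = b - Ax = (2, 0); verify A_eq z = b.
z = [3.0, 2.0, 0.0, 0.0, 2.0, 0.0]
res = [sum(a * zj for a, zj in zip(row, z)) for row in A_eq]
print(res)   # [1.0, 10.0], i.e. equal to b
```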

Terminology
Basic terminology:
Number of variables: n
Number of constraints: m
Solution: an arbitrary vector x ∈ R^n
Objective function: e.g. max c^T x
Feasible solution: a solution satisfying all constraints, e.g. Ax ≤ b
Optimal solution: a feasible solution maximizing c^T x
Infeasible problem: a problem with no feasible solution
Unbounded problem: a problem having feasible solutions with arbitrarily large values of the objective function
Polyhedron: a set of points x ∈ R^n satisfying Ax ≤ b for some A ∈ R^{m×n} and b ∈ R^m
Polytope: a bounded polyhedron
Jirka Fink Optimization methods 13

Example of linear programming: Network flow
Network flow problem: Given a directed graph (V, E) with capacities c ∈ R^E, a source s ∈ V and a sink t ∈ V, find a maximal flow from s to t satisfying the flow conservation and capacity constraints.
Formulation using linear programming:
Variables: flow f_e for every edge e ∈ E
Capacity constraints: 0 ≤ f ≤ c
Flow conservation: Σ_{uv∈E} f_uv = Σ_{vw∈E} f_vw for every v ∈ V \ {s, t}
Objective function: maximize Σ_{sw∈E} f_sw − Σ_{us∈E} f_us
Matrix notation:
Add an auxiliary edge ts with a sufficiently large capacity c_ts
Objective function: max x_ts
Flow conservation: Ax = 0 where A is the incidence matrix
Capacity constraints: x ≤ c and x ≥ 0
Jirka Fink Optimization methods 14
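The role of the auxiliary edge ts is that it turns every s-t flow into a circulation, so conservation Ax = 0 holds at all vertices, including s and t. A sketch on a small hypothetical graph (illustration only):

```python
# Build the incidence matrix (+1 at the tail of an edge, -1 at its head) and
# check Ax = 0 for a circulation obtained from an s-t flow plus the ts edge.

verts = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("t", "s")]  # last edge = auxiliary ts

A = [[(+1 if u == v else 0) + (-1 if w == v else 0) for (u, w) in edges]
     for v in verts]

flow = [1, 2, 1, 2, 3]   # 3 units from s to t, routed back along the ts edge
div = [sum(a * f for a, f in zip(row, flow)) for row in A]
print(div)   # [0, 0, 0, 0]: flow conservation holds at every vertex
```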

Graphical method: Set of feasible solutions
Example: Draw the set of all feasible solutions (x_1, x_2) satisfying the following conditions.
x_1 + 6x_2 ≤ 15
4x_1 − x_2 ≤ 10
−x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0
Solution: [figure: the feasible region in the plane, bounded by the lines x_1 = 0, x_2 = 0, −x_1 + x_2 = 1, x_1 + 6x_2 = 15 and 4x_1 − x_2 = 10]
Jirka Fink Optimization methods 15

Graphical method: Optimal solution
Example: Find the optimal solution of the following problem.
Maximize   x_1 + x_2
subject to x_1 + 6x_2 ≤ 15
           4x_1 − x_2 ≤ 10
           −x_1 + x_2 ≤ 1
           x_1, x_2 ≥ 0
Solution: [figure: level lines c^T x = 0, 1, 2, 5 of the objective shifted in the direction c = (1, 1); the optimum is attained at the vertex (3, 2) with value c^T x = 5]
Jirka Fink Optimization methods 16
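The graphical method can be imitated by brute force: every vertex of the feasible region is the intersection of two constraint lines, so enumerating all such intersections and keeping the feasible ones finds the optimum. A sketch (not the lecture's algorithm; it only works for tiny 2-variable problems):

```python
# Enumerate intersections of pairs of constraint lines via Cramer's rule,
# keep the feasible points, and maximize x1 + x2 over them.

from itertools import combinations

# Constraints a1*x1 + a2*x2 <= b, including -x1 <= 0 and -x2 <= 0.
cons = [(1, 6, 15), (4, -1, 10), (-1, 1, 1), (-1, 0, 0), (0, -1, 0)]

def feasible(p, eps=1e-9):
    return all(a1 * p[0] + a2 * p[1] <= b + eps for a1, a2, b in cons)

best = None
for (a1, a2, b1), (c1, c2, b2) in combinations(cons, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue                      # parallel lines, no intersection point
    x = ((b1 * c2 - a2 * b2) / det, (a1 * b2 - b1 * c1) / det)  # Cramer's rule
    if feasible(x):
        val = x[0] + x[1]             # objective x1 + x2
        if best is None or val > best[0]:
            best = (val, x)

print(best)   # (5.0, (3.0, 2.0)): value 5 at the vertex (3, 2)
```

This also foreshadows the simplex method, which visits vertices of the polyhedron instead of enumerating all of them.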

Graphical method: Multiple optimal solutions
Example: Find all optimal solutions of the following problem.
Maximize   (1/6)x_1 + x_2
subject to x_1 + 6x_2 ≤ 15
           4x_1 − x_2 ≤ 10
           −x_1 + x_2 ≤ 1
           x_1, x_2 ≥ 0
Solution: [figure: the level lines of c = (1/6, 1) are parallel to the edge lying on the line x_1 + 6x_2 = 15, so every point of this edge of the feasible region is an optimal solution]
Jirka Fink Optimization methods 17

Graphical method: Unbounded problem
Example: Show that the following problem is unbounded.
Maximize   x_1 + x_2
subject to −x_1 + x_2 ≤ 1
           x_1, x_2 ≥ 0
Solution: [figure: the feasible region is unbounded in the direction (1, 1), so the objective can be made arbitrarily large]
Jirka Fink Optimization methods 18

Graphical method: Infeasible problem
Example: Show that the following problem has no feasible solution.
Maximize   x_1 + x_2
subject to x_1 + x_2 ≤ −2
           x_1, x_2 ≥ 0
Solution: [figure: the half-plane x_1 + x_2 ≤ −2 does not intersect the non-negative quadrant x_1, x_2 ≥ 0]
Jirka Fink Optimization methods 19

Related problems
Integer linear programming: an optimization problem to find x ∈ Z^n which maximizes c^T x and satisfies Ax ≤ b, where A ∈ R^{m×n} and b ∈ R^m.
Mixed integer linear programming: some variables are integer and the others are real.
Binary linear programming: every variable is either 0 or 1.
Complexity:
A linear programming problem is efficiently solvable, both in theory and in practice.
The classical algorithm for linear programming is the simplex method, which is fast in practice, but it is not known whether it always runs in polynomial time.
Polynomial-time algorithms: the ellipsoid and the interior point methods.
No strongly polynomial-time algorithm for linear programming is known.
Integer linear programming is NP-hard.
Jirka Fink Optimization methods 20

Example of integer linear programming: Vertex cover
Vertex cover problem: Given an undirected graph (V, E), find the smallest set of vertices U ⊆ V covering every edge of E; that is, U ∩ e ≠ ∅ for every e ∈ E.
Formulation using integer linear programming:
Variables: cover indicator x_v ∈ {0, 1} for every vertex v ∈ V
Covering: x_u + x_v ≥ 1 for every edge uv ∈ E
Objective function: minimize Σ_{v∈V} x_v
Matrix notation:
Variables: cover indicator x ∈ {0, 1}^V (i.e. 0 ≤ x ≤ 1 and x ∈ Z^V)
Covering: A^T x ≥ 1 where A is the incidence matrix
Objective function: minimize 1^T x
Jirka Fink Optimization methods 21
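For a tiny graph the binary program can be solved by exhaustive search over all 0/1 vectors. A sketch (illustration only; the 5-cycle is a hypothetical input, and this exponential enumeration is of course no substitute for an ILP solver):

```python
# Solve "minimize 1^T x s.t. x_u + x_v >= 1 for every uv in E, x in {0,1}^V"
# by brute force over all 0/1 assignments.

from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle
n = 5

best = None
for x in product((0, 1), repeat=n):
    if all(x[u] + x[v] >= 1 for u, v in edges):    # every edge is covered
        if best is None or sum(x) < sum(best):
            best = x

print(sum(best), best)   # a minimum vertex cover of the 5-cycle has size 3
```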

Relation between optimal integer and relaxed solutions
A non-empty polyhedron may contain no integer solution.
An integer feasible solution may not be obtainable by rounding a relaxed (fractional) solution.
[figure: a polyhedron with objective direction c; the relaxed optimum and the integral optimum lie at different, distant points]
Jirka Fink Optimization methods 22

Example: Ice cream production planning
Problem description:
An ice cream manufacturer needs to plan the production of ice cream for the next year.
The estimated demand of ice cream for month i ∈ {1, ..., n} is d_i (in tons).
Storing 1 ton of ice cream costs a per month.
Changing the production by 1 ton from month i−1 to month i costs b.
Produced ice cream cannot be stored longer than one month.
The total cost has to be minimized.
Jirka Fink Optimization methods 23

Example: Ice cream production planning
Solution:
Variable x_i determines the amount of ice cream produced in month i ∈ {0, ..., n}.
Variable s_i determines the amount of ice cream stored from month i−1 to month i.
The stored quantity is computed by s_i = s_{i−1} + x_i − d_i for every i ∈ {1, ..., n}.
Durability is ensured by s_i ≤ d_i for all i ∈ {1, ..., n}.
Non-negativity of the production and the storage: x, s ≥ 0.
The objective function min b Σ_{i=1}^n |x_i − x_{i−1}| + a Σ_{i=1}^n s_i is non-linear.
Let y_i ≥ 0 and z_i ≥ 0 be the increment and the decrement of the production, resp., so that x_i − x_{i−1} = y_i − z_i.
Linear programming problem formulation:
Minimize   b Σ_{i=1}^n (y_i + z_i) + a Σ_{i=1}^n s_i
subject to s_{i−1} − s_i + x_i = d_i   for i ∈ {1, ..., n}
           s_i ≤ d_i                   for i ∈ {1, ..., n}
           x, s, y, z ≥ 0
We can bound the initial and final amount of ice cream s_0 and s_n and also bound the production x_0.
Jirka Fink Optimization methods 24
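The key trick above is linearizing |x_i − x_{i−1}| as y_i + z_i. A quick numeric check with hypothetical numbers (illustration only): splitting each change into its positive part y_i and negative part z_i reproduces the absolute-value cost, which is what an optimal LP solution does, since making both y_i and z_i positive only increases the objective.

```python
# Verify that b * sum(y_i + z_i) equals b * sum(|x_i - x_{i-1}|) when y, z are
# the positive/negative parts of the production changes.

b = 5.0                               # cost of changing production by 1 ton
x = [10, 12, 9, 9, 14]                # production in months 0..4 (made-up data)

y = [max(x[i] - x[i-1], 0) for i in range(1, len(x))]   # increments
z = [max(x[i-1] - x[i], 0) for i in range(1, len(x))]   # decrements

change_cost = b * sum(yi + zi for yi, zi in zip(y, z))
abs_cost = b * sum(abs(x[i] - x[i-1]) for i in range(1, len(x)))
print(change_cost, abs_cost)   # both 50.0
```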

Finding shortest paths from a vertex s in a directed graph
Linear programming problem:
Maximize   Σ_{u∈V} x_u
subject to x_v − x_u ≤ c_uv for every edge uv
           x_s = 0
Proof that the optimal solution x gives the distance x_u from s to u for every u ∈ V:
1 Let y_u be the length of the shortest path from s to u.
2 It holds that y ≥ x: let P be the set of edges on a shortest path from s to z; then y_z = Σ_{uv∈P} c_uv ≥ Σ_{uv∈P} (x_v − x_u) = x_z − x_s = x_z.
3 It holds that y = x: for the sake of contradiction assume that y ≠ x. Then y ≥ x and y ≠ x imply Σ_{u∈V} y_u > Σ_{u∈V} x_u. But y is a feasible solution and x is an optimal solution, a contradiction.
Jirka Fink Optimization methods 25
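The feasibility of the distance vector y used in step 3 can be checked directly: shortest-path distances always satisfy the triangle inequality y_v − y_u ≤ c_uv. A sketch on a small hypothetical graph, computing y with the Bellman-Ford relaxation (illustration only):

```python
# Compute shortest-path distances from s and verify they form a feasible
# solution of the LP above.

INF = float("inf")
edges = {("s", "a"): 2, ("s", "b"): 7, ("a", "b"): 3, ("a", "t"): 8, ("b", "t"): 2}
verts = ["s", "a", "b", "t"]

# Bellman-Ford from s: repeatedly relax every edge.
y = {v: (0 if v == "s" else INF) for v in verts}
for _ in range(len(verts) - 1):
    for (u, v), c in edges.items():
        y[v] = min(y[v], y[u] + c)

print(y)  # {'s': 0, 'a': 2, 'b': 5, 't': 7}
print(all(y[v] - y[u] <= c for (u, v), c in edges.items()))  # LP-feasible: True
```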

Outline 1 Linear programming 2 Linear, affine and convex sets 3 Simplex method 4 Duality of linear programming 5 Integer linear programming 6 Vertex Cover 7 Matching Jirka Fink Optimization methods 26

Linear space
Definition: Linear (vector) space: A set (V, +, ·) is called a linear (vector) space over a field T if
+ : V × V → V, i.e. V is closed under addition
· : T × V → V, i.e. V is closed under multiplication by elements of T
(V, +) is an Abelian group
For every x ∈ V it holds that 1 · x = x where 1 ∈ T
For every a, b ∈ T and every x ∈ V it holds that (ab) · x = a · (b · x)
For every a, b ∈ T and every x ∈ V it holds that (a + b) · x = a · x + b · x
For every a ∈ T and every x, y ∈ V it holds that a · (x + y) = a · x + a · y
Observation: If V is a linear space and L ⊆ V, then L is a linear space if and only if 0 ∈ L, x + y ∈ L for every x, y ∈ L, and αx ∈ L for every x ∈ L and α ∈ T.
Jirka Fink Optimization methods 27

Linear and affine spaces in R^n
Observation: A non-empty set V ⊆ R^n is a linear space if and only if αx + βy ∈ V for all α, β ∈ R and x, y ∈ V.
Definition: If V ⊆ R^n is a linear space and a ∈ R^n is a vector, then V + a = {x + a; x ∈ V} is called an affine space.
Basic observations:
If L ⊆ R^n is an affine space, then L + x is an affine space for every x ∈ R^n.
If L ⊆ R^n is an affine space, then L − x is a linear space for every x ∈ L. (1)
If L ⊆ R^n is an affine space, then L − x = L − y for every x, y ∈ L. (2)
An affine space L ⊆ R^n is linear if and only if L contains the origin 0. (3)
System of linear equations:
The set of all solutions of Ax = 0 is a linear space, and every linear space is the set of all solutions of Ax = 0 for some A. (4)
The set of all solutions of Ax = b is an affine space, and every affine space is the set of all solutions of Ax = b for some A and b, assuming Ax = b is consistent. (5)
Jirka Fink Optimization methods 28

(1) By definition, L = V + a for some linear space V and some vector a ∈ R^n. Observe that L − x = V + (a − x), and we prove that V + (a − x) = V, which implies that L − x is a linear space. There exists y ∈ V such that x = y + a. Hence, a − x = a − y − a = −y ∈ V. Since V is closed under addition, it follows that V + (a − x) ⊆ V. Similarly, V − (a − x) ⊆ V, which implies that V ⊆ V + (a − x). Hence, V = V + (a − x) and the statement follows.
(2) We proved that L = V + a for some linear space V ⊆ R^n and some vector a ∈ R^n, and that L − x = V + (a − x) = V for every x ∈ L. So, L − x = V = L − y.
(3) Every linear space must contain the origin by definition. For the opposite implication, we set x = 0 and apply the previous statement.
(4) If V is a linear space, then we can obtain the rows of A from a basis of the orthogonal complement of V.
(5) If L is an affine space, then L = V + a for some linear space V and some vector a, and there exists a matrix A such that V = {x; Ax = 0}. Hence, V + a = {x + a; Ax = 0} = {y; Ay − Aa = 0} = {y; Ay = b} where we substitute y = x + a and set b = Aa. Conversely, if L = {x; Ax = b} is non-empty, then let y be an arbitrary point of L. Then L − y = {x − y; Ax = b} = {z; Az + Ay = b} = {z; Az = 0} is a linear space since Ay = b.
Jirka Fink Optimization methods 28

Convex set
Observation: A set S ⊆ R^n is an affine space if and only if S contains the whole line through every two points of S.
Definition: A set S ⊆ R^n is convex if S contains the whole segment between every two points of S.
Example: [figure: a convex set containing the whole segment between points a and b, and a non-convex set in which part of the segment between points u and v lies outside the set]
Jirka Fink Optimization methods 29

Linear, affine and convex hulls
Observation:
The intersection of linear spaces is also a linear space. (1)
The non-empty intersection of affine spaces is an affine space. (2)
The intersection of convex sets is also a convex set. (3)
Definition: Let S ⊆ R^n be a non-empty set.
The linear hull span(S) of S is the intersection of all linear spaces containing S.
The affine hull aff(S) of S is the intersection of all affine spaces containing S.
The convex hull conv(S) of S is the intersection of all convex sets containing S.
Observation: Let S ⊆ R^n be a non-empty set.
A set S is linear if and only if S = span(S). (4)
A set S is affine if and only if S = aff(S). (5)
A set S is convex if and only if S = conv(S). (6)
span(S) = aff(S ∪ {0})
Jirka Fink Optimization methods 30

(1) Use the definition and logic.
(2) Let L_i be an affine space for every i in an index set I, let L = ∩_{i∈I} L_i and a ∈ L. We proved that L − a = ∩_{i∈I} (L_i − a) is a linear space, which implies that L is an affine space.
(3) Use the definition and logic.
(4) Similar to the convex version.
(5) Similar to the convex version.
(6) We proved that conv(S) is convex, so if S = conv(S), then S is convex. In order to prove that S = conv(S) if S is convex, observe that conv(S) ⊆ S since conv(S) = ∩_{M⊇S, M convex} M and the convex set S itself is one of the sets M in this intersection. Conversely, S ⊆ conv(S) since every M in the intersection contains S.
Jirka Fink Optimization methods 30

Linear, affine and convex combinations
Definition: Let v_1, ..., v_k be vectors of R^n where k is a positive integer.
The sum Σ_{i=1}^k α_i v_i is called a linear combination if α_1, ..., α_k ∈ R.
The sum Σ_{i=1}^k α_i v_i is called an affine combination if α_1, ..., α_k ∈ R and Σ_{i=1}^k α_i = 1.
The sum Σ_{i=1}^k α_i v_i is called a convex combination if α_1, ..., α_k ≥ 0 and Σ_{i=1}^k α_i = 1.
Lemma: Let S ⊆ R^n be a non-empty set.
The set of all linear combinations of S is a linear space. (1)
The set of all affine combinations of S is an affine space. (2)
The set of all convex combinations of S is a convex set. (3)
Lemma:
A linear space S contains all linear combinations of S. (4)
An affine space S contains all affine combinations of S. (5)
A convex set S contains all convex combinations of S. (6)
Jirka Fink Optimization methods 31
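The three notions differ only in the conditions on the coefficients, which a few lines of code make explicit. A sketch (illustration only; the example vectors are hypothetical):

```python
# Linear combinations allow any coefficients; affine combinations require the
# coefficients to sum to 1; convex combinations additionally require them to
# be non-negative.

def combine(alphas, vectors):
    """Return sum_i alpha_i * v_i for vectors given as equal-length tuples."""
    n = len(vectors[0])
    return tuple(sum(a * v[j] for a, v in zip(alphas, vectors)) for j in range(n))

def is_affine(alphas, eps=1e-9):
    return abs(sum(alphas) - 1) < eps          # coefficients sum to 1

def is_convex(alphas, eps=1e-9):
    return is_affine(alphas) and all(a >= -eps for a in alphas)

v = [(0, 0), (2, 0), (0, 2)]
print(combine([0.25, 0.25, 0.5], v))   # (0.5, 1.0), a convex combination
print(is_convex([0.25, 0.25, 0.5]))    # True
print(is_convex([1.5, -0.5, 0.0]))     # False: affine but not convex
```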

(1) We have to verify that the set of all linear combinations of S is closed under addition and under multiplication by scalars. For the closure under multiplication, let Σ_{i=1}^k α_i v_i be a linear combination of S and c ∈ R a scalar. Then c Σ_{i=1}^k α_i v_i = Σ_{i=1}^k (cα_i) v_i is a linear combination of S. Similarly, the set of all linear combinations is closed under addition, and it contains the origin.
(2) Similar to the convex version: show that the set of all affine combinations contains the whole line defined by an arbitrary pair of its points.
(3) Let Σ_{i=1}^k α_i u_i and Σ_{j=1}^l β_j v_j be two convex combinations of S. To prove that the set of all convex combinations of S contains the line segment between them, consider γ_1, γ_2 ≥ 0 with γ_1 + γ_2 = 1. Then γ_1 Σ_{i=1}^k α_i u_i + γ_2 Σ_{j=1}^l β_j v_j = Σ_{i=1}^k (γ_1 α_i) u_i + Σ_{j=1}^l (γ_2 β_j) v_j is a convex combination of S since (γ_1 α_i), (γ_2 β_j) ≥ 0 and Σ_{i=1}^k (γ_1 α_i) + Σ_{j=1}^l (γ_2 β_j) = 1.
(4) Similar to the convex version.
(5) Let Σ_{i=1}^k α_i v_i be an affine combination of S. Since S − v_k is a linear space, the linear combination Σ_{i=1}^k α_i (v_i − v_k) of S − v_k belongs to S − v_k. Hence, v_k + Σ_{i=1}^k α_i (v_i − v_k) = Σ_{i=1}^k α_i v_i belongs to S, using Σ_{i=1}^k α_i = 1.
(6) We prove by induction on k that S contains every convex combination Σ_{i=1}^k α_i v_i of S. The statement holds for k ≤ 2 by the definition of a convex set. Let Σ_{i=1}^k α_i v_i be a convex combination of k vectors of S and assume α_k < 1; otherwise α_1 = ... = α_{k−1} = 0, so Σ_{i=1}^k α_i v_i = v_k ∈ S. Hence, Σ_{i=1}^k α_i v_i = (1 − α_k) Σ_{i=1}^{k−1} (α_i / (1 − α_k)) v_i + α_k v_k = (1 − α_k) y + α_k v_k, where y := Σ_{i=1}^{k−1} (α_i / (1 − α_k)) v_i is a convex combination of k − 1 vectors of S, which belongs to S by induction. Furthermore, (1 − α_k) y + α_k v_k is a convex combination of two vectors of S, which belongs to S by the definition of a convex set.
Jirka Fink Optimization methods 31

Linear, affine and convex combinations
Theorem: Let S ⊆ R^n be a non-empty set.
The linear hull of S is the set of all linear combinations of S. (1)
The affine hull of S is the set of all affine combinations of S. (2)
The convex hull of S is the set of all convex combinations of S. (3)
Jirka Fink Optimization methods 32

(1) Similar to the convex version.
(2) Similar to the convex version.
(3) Let T be the set of all convex combinations of S. First, we prove that conv(S) ⊆ T. The definition states that conv(S) = ∩_{M⊇S, M convex} M, and we proved that T is a convex set containing S, so T is one of the sets in this intersection, which implies that conv(S) ⊆ T. In order to prove T ⊆ conv(S), we again consider the intersection conv(S) = ∩_{M⊇S, M convex} M. We proved that a convex set M contains all convex combinations of M, which implies that if M ⊇ S, then M also contains all convex combinations of S. So every M in this intersection contains T, which implies that T ⊆ conv(S).
Jirka Fink Optimization methods 32

Independence and base
Definition:
A set of vectors S ⊆ R^n is linearly independent if no vector of S is a linear combination of the other vectors of S.
A set of vectors S ⊆ R^n is affinely independent if no vector of S is an affine combination of the other vectors of S.
Observation (homework):
Vectors v_1, ..., v_k ∈ R^n are linearly dependent if and only if there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i v_i = 0.
Vectors v_1, ..., v_k ∈ R^n are affinely dependent if and only if there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i v_i = 0 and Σ_{i=1}^k α_i = 0.
Observation:
Vectors v_0, ..., v_k ∈ R^n are affinely independent if and only if the vectors v_1 − v_0, ..., v_k − v_0 are linearly independent. (1)
Vectors v_1, ..., v_k ∈ R^n are linearly independent if and only if the vectors 0, v_1, ..., v_k are affinely independent. (2)
Jirka Fink Optimization methods 33
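Observation (1) gives a practical test: to decide affine independence of v_0, ..., v_k, check linear independence of the differences v_i − v_0, e.g. by Gaussian elimination (the differences are independent iff their rank equals their count). A sketch, illustration only:

```python
# Affine independence via the rank of the difference vectors.

def rank(rows, eps=1e-9):
    """Rank of a list of row vectors, by Gauss-Jordan elimination."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > eps), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > eps:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def affinely_independent(vs):
    diffs = [[a - b for a, b in zip(v, vs[0])] for v in vs[1:]]
    return rank(diffs) == len(diffs)

print(affinely_independent([(0, 0), (1, 0), (0, 1)]))   # True: a triangle
print(affinely_independent([(0, 0), (1, 1), (2, 2)]))   # False: collinear points
```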

(1) If the vectors v_1 − v_0, ..., v_k − v_0 are linearly dependent, then there exists a non-trivial combination α_1, ..., α_k ∈ R such that Σ_{i=1}^k α_i (v_i − v_0) = 0. In this case, 0 = Σ_{i=1}^k α_i (v_i − v_0) = Σ_{i=1}^k α_i v_i − v_0 Σ_{i=1}^k α_i = Σ_{i=0}^k α_i v_i is a non-trivial combination with Σ_{i=0}^k α_i = 0, where α_0 = −Σ_{i=1}^k α_i. Conversely, if v_0, ..., v_k ∈ R^n are affinely dependent, then there exists a non-trivial combination α_0, ..., α_k ∈ R such that Σ_{i=0}^k α_i v_i = 0 and Σ_{i=0}^k α_i = 0. In this case, 0 = Σ_{i=0}^k α_i v_i = α_0 v_0 + Σ_{i=1}^k α_i v_i = Σ_{i=1}^k α_i (v_i − v_0) is a non-trivial linear combination of the vectors v_1 − v_0, ..., v_k − v_0, since α_0 = −Σ_{i=1}^k α_i.
(2) Use the previous observation with v_0 = 0.
Jirka Fink Optimization methods 33

Basis
Definition: Let B ⊆ R^n and S ⊆ R^n.
B is a basis of a linear space S if B is linearly independent and span(B) = S.
B is a basis of an affine space S if B is affinely independent and aff(B) = S.
Observation:
All linear bases of a linear space have the same cardinality.
All affine bases of an affine space have the same cardinality. (1)
Observation: Let S be a linear space and B ⊆ S \ {0}. Then B is a linear basis of S if and only if B ∪ {0} is an affine basis of S.
Definition:
The dimension of a linear space is the cardinality of its linear basis.
The dimension of an affine space is the cardinality of its affine basis minus one.
The dimension dim(S) of a set S ⊆ R^n is the dimension of the affine hull of S.
Jirka Fink Optimization methods 34

(1) For the sake of contradiction, let a_1, ..., a_k and b_1, ..., b_l with l > k be two bases of an affine space L = V + x where V is a linear space. Then a_1 − x, ..., a_k − x and b_1 − x, ..., b_l − x are two linearly independent sets of vectors of V. Hence, there exists i such that a_1 − x, ..., a_k − x, b_i − x are linearly independent, so a_1, ..., a_k, b_i are affinely independent. Therefore, b_i cannot be obtained as an affine combination of a_1, ..., a_k, i.e. b_i ∉ aff(a_1, ..., a_k), which contradicts the assumption that a_1, ..., a_k is a basis of L.
Jirka Fink Optimization methods 34

Carathéodory
Theorem (Carathéodory): Let S ⊆ R^n. Every point of conv(S) is a convex combination of affinely independent points of S. (1)
Corollary: Let S ⊆ R^n be a set of dimension d. Then every point of conv(S) is a convex combination of at most d + 1 points of S.
Jirka Fink Optimization methods 35
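In the plane (d = 2), the corollary says that a point in the convex hull of any finite point set is already a convex combination of some three of the points. A numerical sketch with hypothetical points (illustration only): try each triple and compute barycentric coordinates by Cramer's rule.

```python
# Find a triple of points whose convex combination yields q, as guaranteed by
# Caratheodory's theorem in dimension 2.

from itertools import combinations

pts = [(0, 0), (4, 0), (0, 4), (4, 4)]
q = (1, 3)                      # a point in the convex hull of pts

def barycentric(tri, q, eps=1e-9):
    """Coefficients of q w.r.t. the triangle tri, or None if degenerate."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    if abs(det) < eps:
        return None             # the three points are affinely dependent
    l1 = ((q[0] - x0) * (y2 - y0) - (x2 - x0) * (q[1] - y0)) / det
    l2 = ((x1 - x0) * (q[1] - y0) - (q[0] - x0) * (y1 - y0)) / det
    return (1 - l1 - l2, l1, l2)

for tri in combinations(pts, 3):
    coeffs = barycentric(tri, q)
    if coeffs and all(c >= -1e-9 for c in coeffs):
        print(tri, coeffs)      # a witnessing triple and its convex coefficients
        break
```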

1 Let x conv(s). Let x = k i=1 α ix i be a convex combination of points of S with the smallest k. If x 1,..., x k are affinely dependent, then there exists a combination 0 = β i x i such that β i = 0 and β 0. Since this combination is non-trivial, there exists j such that β j > 0 and α j β j x = i j γ i x i i j γ i = 1 γ i 0 for all i j which contradicts the minimality of k. is minimal. Let γ i = α i α j β i. Observe that β j Jirka Fink Optimization methods 35

Outline 1 Linear programming 2 Linear, affine and convex sets 3 Simplex method 4 Duality of linear programming 5 Integer linear programming 6 Vertex Cover 7 Matching Jirka Fink Optimization methods 36

Notation
Notation used in the simplex method:
Equation form: maximize c^T x such that Ax = b and x ≥ 0, where A ∈ R^{m×n} and b ∈ R^m. We assume that the rows of A are linearly independent.
For a subset B ⊆ {1, ..., n}, let A_B be the matrix consisting of the columns of A whose indices belong to B. Similarly for vectors, x_B denotes the coordinates of x whose indices belong to B. The set N = {1, ..., n} \ B denotes the remaining columns.
Example: Consider B = {2, 4}. Then N = {1, 3, 5} and
A = ( 1 3 5 6 0 )    A_B = ( 3 6 )    A_N = ( 1 5 0 )
    ( 2 4 8 9 7 )          ( 4 9 )          ( 2 8 7 )
x^T = (3, 4, 6, 2, 7)    x_B^T = (4, 2)    x_N^T = (3, 6, 7)
Note that Ax = A_B x_B + A_N x_N.
Jirka Fink Optimization methods 37
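The identity Ax = A_B x_B + A_N x_N is easy to confirm on the example above (a quick check, illustration only; note Python uses 0-based column indices):

```python
# Verify Ax = A_B x_B + A_N x_N for the example with B = {2, 4}, N = {1, 3, 5}.

A = [[1, 3, 5, 6, 0],
     [2, 4, 8, 9, 7]]
x = [3, 4, 6, 2, 7]
B, N = [1, 3], [0, 2, 4]        # 0-based indices of columns {2, 4} and {1, 3, 5}

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def cols(M, idx):
    """Submatrix consisting of the columns with the given indices."""
    return [[row[j] for j in idx] for row in M]

lhs = matvec(A, x)
rhs = [p + q for p, q in zip(matvec(cols(A, B), [x[j] for j in B]),
                             matvec(cols(A, N), [x[j] for j in N]))]
print(lhs, rhs)   # both [57, 137]
```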

Geometrical interpretation of basic solutions
1 For a system Ax = b with n variables and n linearly independent conditions, the inverse matrix A^{-1} exists and the only solution of Ax = b is x = A^{-1} b.
2 Consider a system Ax ≤ b with n = rank(A) variables and m ≥ n conditions, and select n linearly independent rows A'x ≤ b'. Then the system A'x = b' has the solution x' = A'^{-1} b'. Moreover, if Ax' ≤ b, then x' is a vertex of the polyhedron Ax ≤ b. (1)
3 Consider the equation form Ax = b, x ≥ 0, and let N be a set of n − m of the conditions x ≥ 0. If the rows of the system Ax = 0, x_N = 0 are linearly independent, then b = Ax = A_B x_B + A_N x_N = A_B x_B, so x = (x_B, x_N) = (A_B^{-1} b, 0) where B = {1, ..., n} \ N. Moreover, if x_B ≥ 0, then x is a vertex of the polyhedron given by Ax = b and x ≥ 0.
4 Consider the equation form again. If we choose m linearly independent columns B of A, then the conditions Ax = b and x_N = 0 are linearly independent.
Jirka Fink Optimization methods 38

1 The solution x = (A_B^{-1} b, 0) will be called a basic solution. Vertices of a polyhedron will be formally defined later, so we use geometrical intuition for now.

Basic feasible solutions
Definitions
Consider the equation form Ax = b and x ≥ 0 with n variables and rank(A) = m rows.
A set B ⊆ {1, …, n} of m indices such that the columns of A_B are linearly independent is called a basis. (1)
The basic solution x corresponding to a basis B is given by x_N = 0 and x_B = A_B^{-1} b.
A basic solution satisfying x ≥ 0 is called a basic feasible solution.
x_B are called basic variables and x_N are called non-basic variables. (2)
Observation
A feasible solution x of Ax = b and x ≥ 0 is basic if and only if the columns of A_K are linearly independent, where K = {j ∈ {1, …, n}; x_j > 0}. (3) (4)
Observation
A linear program in the equation form has at most (n choose m) basic solutions. (5)
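The (n choose m) bound suggests a brute-force method: try every m-element subset B, keep those where A_B is regular, and test whether A_B^{-1} b ≥ 0. A hedged NumPy sketch (function name and tolerances are mine), applied to the equation form of the example LP solved on the following slides:

```python
from itertools import combinations
import numpy as np

def basic_feasible_solutions(A, b):
    """Enumerate all basic feasible solutions of Ax = b, x >= 0."""
    m, n = A.shape
    solutions = []
    for B in combinations(range(n), m):
        A_B = A[:, B]
        if abs(np.linalg.det(A_B)) < 1e-9:    # A_B singular: B is not a basis
            continue
        x = np.zeros(n)
        x[list(B)] = np.linalg.solve(A_B, b)  # basic solution: x_N = 0
        if (x >= -1e-9).all():                # keep only the feasible ones
            solutions.append(x)
    return solutions

# Equation form of: max x1 + x2  s.t.  -x1 + x2 <= 1, x1 <= 3, x2 <= 2
A = np.array([[-1., 1, 1, 0, 0],
              [ 1., 0, 0, 1, 0],
              [ 0., 1, 0, 0, 1]])
b = np.array([1., 3, 2])
c = np.array([1., 1, 0, 0, 0])

best = max(basic_feasible_solutions(A, b), key=lambda x: c @ x)
```

This is exponential in general, which is exactly why the simplex method walks between neighbouring bases instead of enumerating all of them.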

1 Observe that B ⊆ {1, …, n} is a basis if and only if A_B is a regular matrix.
2 Remember that non-basic variables are always equal to zero.
3 If x is a basic feasible solution and B is the corresponding basis, then x_N = 0 and so K ⊆ B, which implies that the columns of A_K are also linearly independent. Conversely, if the columns of A_K are linearly independent, then we can extend K into a set B by adding columns of A so that the columns of A_B are linearly independent, which implies that B is a basis for x.
4 Note that basic variables can also be zero. In this case, the basis B corresponding to a basic solution x may not be unique, since there may be many ways to extend K into a basis B. This is called degeneracy.
5 There are (n choose m) subsets B ⊆ {1, …, n}, and for some of these subsets A_B may not be regular.

Optimal basic feasible solutions
Theorem
If the linear program max c^T x subject to Ax = b and x ≥ 0 has a feasible solution and the objective function is bounded from above on the set of all feasible solutions, then there exists an optimal solution. Moreover, if an optimal solution exists, then there is a basic feasible solution which is optimal. (1)
Lemma
If the objective function of a linear program in the equation form is bounded from above, then for every feasible solution x there exists a basic feasible solution x' with the same or larger value of the objective function, i.e. c^T x' ≥ c^T x. (2)

1 If the problem is bounded, one may try to find the optimal solution by inspecting all basic feasible solutions. However, this is not an efficient algorithm, since the number of bases grows exponentially.
2 Let x' be a feasible solution with c^T x' ≥ c^T x and the smallest possible size of the set K = {j ∈ {1, …, n}; x'_j > 0}, and let N = {1, …, n} \ K. If the columns of A_K are linearly independent, then x' is a basic solution. Otherwise, there exists a non-zero vector v_K such that A_K v_K = 0; let v_N = 0. WLOG c^T v ≥ 0, since we can replace v by −v. Consider the line x(t) = x' + tv for t ∈ R.
For every t ∈ R: Ax(t) = b and x(t)_N = 0.
For every t ≥ 0: c^T x(t) ≥ c^T x'.
If c^T v > 0 and v ≥ 0, then the points x(t) are feasible for every t ≥ 0 and the objective function c^T x(t) = c^T x' + t c^T v tends to infinity, which contradicts the assumptions.
If v_j < 0 for some j ∈ K, then consider j ∈ K with v_j < 0 and x'_j / (−v_j) minimal, and let t̄ = x'_j / (−v_j). Since x(t̄) ≥ 0 and x(t̄)_j = 0, the solution x(t̄) is feasible with a smaller number of positive components than x', which is a contradiction.
The remaining case is c^T v = 0 and v ≥ 0. Since v_K is non-zero, there exists j ∈ K with v_j > 0. Replace v by −v and apply the previous case.
Jirka Fink Optimization methods 40

Convex polyhedra
Definition
A hyperplane is a set {x ∈ R^n; a^T x = b} where a ∈ R^n \ {0} and b ∈ R.
A half-space is a set {x ∈ R^n; a^T x ≤ b} where a ∈ R^n \ {0} and b ∈ R.
A polyhedron is an intersection of finitely many half-spaces.
A polytope is a bounded polyhedron.
Observation
For every a ∈ R^n and b ∈ R, the set of all x ∈ R^n satisfying a^T x ≤ b is convex.
Corollary
Every polyhedron Ax ≤ b is convex.
Examples
n-dimensional hypercube: {x ∈ R^n; 0 ≤ x ≤ 1}
n-dimensional crosspolytope: {x ∈ R^n; Σ_{i=1}^n |x_i| ≤ 1} (1)
n-dimensional simplex: {x ∈ R^{n+1}; x ≥ 0, 1^T x = 1} (2)

1 Formally, Σ_{i=1}^n |x_i| ≤ 1 is not a linear inequality. However, it can be replaced by the 2^n linear inequalities d^T x ≤ 1 for all d ∈ {−1, 1}^n.
2 The n-dimensional simplex is the convex hull of n + 1 affinely independent points.

Faces of a polyhedron
Definition
Let P be a polyhedron. A hyperplane α^T x = β is called a supporting hyperplane of P if the inequality α^T x ≤ β holds for every x ∈ P and the hyperplane α^T x = β has a non-empty intersection with P.
Definition
If α^T x = β is a supporting hyperplane of a polyhedron P, then P ∩ {x; α^T x = β} is called a face of P. By convention, the empty set and P are also called faces; the other faces are proper faces. (1)
Definition
Let P be a d-dimensional polyhedron.
A 0-dimensional face of P is called a vertex of P.
A 1-dimensional face of P is called an edge of P.
A (d − 1)-dimensional face of P is called a facet of P.

1 Observe that every face of a polyhedron is also a polyhedron.

Vertices
Observations
The set of all optimal solutions of a linear program max c^T x over a polyhedron P is a face of P. (1)
Every proper face of P is the set of all optimal solutions of the linear program max c^T x over P for some c ∈ R^n. (2) (3)
Vertices are the unique optimal solutions of linear programs max c^T x over P for some c.
Theorem
Let P be the set of all solutions of a linear program in the equation form and v ∈ P. Then the following statements are equivalent:
1 v is a vertex of the polyhedron P.
2 v is a basic feasible solution of the linear program. (4)
Theorem
If the linear program max c^T x subject to Ax = b and x ≥ 0 has a feasible solution and the objective function is bounded from above on the set of all feasible solutions, then there exists an optimal solution. Moreover, if an optimal solution exists, then there is a basic feasible solution which is optimal.

1 Let F be the set of all optimal solutions. If F = ∅ or F = P, then F is a face of P by definition. Otherwise, d = max {c^T x; x ∈ P} exists. Since c^T x = d is a supporting hyperplane of P and F = P ∩ {x; c^T x = d}, it follows that F is a face of P.
2 A proper face F of P is defined as the intersection of P and a supporting hyperplane c^T x = d, so F is the set of all optimal solutions of the linear program max c^T x over P.
3 Note that P itself is also the set of all optimal solutions of a linear program, namely for c = 0. On the other hand, if P is non-empty and bounded, then the empty set cannot be expressed as the set of all optimal solutions for any c.
4 Let B be a basis defining v and let c_B = 0 and c_N = −1. Then c^T v = c_B^T v_B + c_N^T v_N = 0, and every feasible x satisfies x ≥ 0, so c^T x ≤ 0. Hence, v is an optimal solution of the linear program with the objective function max c^T x. Furthermore, v is the only optimal solution, since every optimal solution x must satisfy x_N = 0, and in this case x_B = A_B^{-1} b is unique.
Jirka Fink Optimization methods 43

Example: Initial simplex tableau
Canonical form
Maximize x1 + x2
subject to −x1 + x2 ≤ 1, x1 ≤ 3, x2 ≤ 2, x1, x2 ≥ 0
Equation form
Maximize x1 + x2
subject to −x1 + x2 + x3 = 1, x1 + x4 = 3, x2 + x5 = 2, x1, x2, x3, x4, x5 ≥ 0
Simplex tableau
x3 = 1 + x1 − x2
x4 = 3 − x1
x5 = 2 − x2
z = x1 + x2

Example: Initial simplex tableau
Simplex tableau
x3 = 1 + x1 − x2
x4 = 3 − x1
x5 = 2 − x2
z = x1 + x2
Initial basic feasible solution
B = {3, 4, 5}, N = {1, 2}, x = (0, 0, 1, 3, 2)
Pivot
Two edges from the vertex (0, 0, 1, 3, 2):
1 (t, 0, 1 + t, 3 − t, 2) when x1 is increased by t
2 (0, r, 1 − r, 3, 2 − r) when x2 is increased by r
These edges give feasible solutions for:
1 t ≤ 3, since x3 = 1 + t ≥ 0, x4 = 3 − t ≥ 0 and x5 = 2 ≥ 0
2 r ≤ 1, since x3 = 1 − r ≥ 0, x4 = 3 ≥ 0 and x5 = 2 − r ≥ 0
In both cases, the objective function is increasing. We choose x2 as the pivot.

Example: Pivot step
Simplex tableau
x3 = 1 + x1 − x2
x4 = 3 − x1
x5 = 2 − x2
z = x1 + x2
Basis
Original basis B = {3, 4, 5}.
x2 enters the basis (by our choice).
(0, r, 1 − r, 3, 2 − r) is feasible only for r ≤ 1, since x3 = 1 − r ≥ 0. Therefore, x3 leaves the basis.
New basis B = {2, 4, 5}.
New simplex tableau
x2 = 1 + x1 − x3
x4 = 3 − x1
x5 = 1 − x1 + x3
z = 1 + 2x1 − x3

Example: Next step
Simplex tableau
x2 = 1 + x1 − x3
x4 = 3 − x1
x5 = 1 − x1 + x3
z = 1 + 2x1 − x3
Next pivot
Basis B = {2, 4, 5} with the basic feasible solution (0, 1, 0, 3, 1).
This vertex has two incident edges, but only one increases the objective function.
The edge with increasing objective function is (t, 1 + t, 0, 3 − t, 1 − t).
These solutions are feasible for t ≤ 1, since x2 = 1 + t ≥ 0, x4 = 3 − t ≥ 0 and x5 = 1 − t ≥ 0.
Therefore, x1 enters the basis and x5 leaves the basis.
New simplex tableau
x1 = 1 + x3 − x5
x2 = 2 − x5
x4 = 2 − x3 + x5
z = 3 + x3 − 2x5

Example: Last step
Simplex tableau
x1 = 1 + x3 − x5
x2 = 2 − x5
x4 = 2 − x3 + x5
z = 3 + x3 − 2x5
Next pivot
Basis B = {1, 2, 4} with the basic feasible solution (1, 2, 0, 2, 0).
This vertex has two incident edges, but only one increases the objective function.
The edge with increasing objective function is (1 + t, 2, t, 2 − t, 0).
These solutions are feasible for t ≤ 2, since x1 = 1 + t ≥ 0, x2 = 2 ≥ 0 and x4 = 2 − t ≥ 0.
Therefore, x3 enters the basis and x4 leaves the basis.
New simplex tableau
x1 = 3 − x4
x2 = 2 − x5
x3 = 2 − x4 + x5
z = 5 − x4 − x5

Example: Optimal solution
Simplex tableau
x1 = 3 − x4
x2 = 2 − x5
x3 = 2 − x4 + x5
z = 5 − x4 − x5
No other pivot
Basis B = {1, 2, 3} with the basic feasible solution (3, 2, 2, 0, 0).
This vertex has two incident edges, but neither increases the objective function. We have an optimal solution.
Why is this an optimal solution? Consider an arbitrary feasible solution x̃. Its objective value is z̃ = 5 − x̃4 − x̃5. Since x̃4, x̃5 ≥ 0, the objective value is z̃ = 5 − x̃4 − x̃5 ≤ 5 = z.

Example: Unboundedness
Canonical form
Maximize x1
subject to x1 − x2 ≤ 1, −x1 + x2 ≤ 2, x1, x2 ≥ 0
Equation form
Maximize x1
subject to x1 − x2 + x3 = 1, −x1 + x2 + x4 = 2, x1, x2, x3, x4 ≥ 0
Initial simplex tableau
x3 = 1 − x1 + x2
x4 = 2 + x1 − x2
z = x1

Example: Unboundedness
Simplex tableau
x3 = 1 − x1 + x2
x4 = 2 + x1 − x2
z = x1
First pivot
Basis B = {3, 4} with the basic feasible solution (0, 0, 1, 2).
This vertex has two incident edges, but only one increases the objective function.
The edge with increasing objective function is (t, 0, 1 − t, 2 + t).
These solutions are feasible for t ≤ 1, since x3 = 1 − t ≥ 0 and x4 = 2 + t ≥ 0.
Therefore, x1 enters the basis and x3 leaves the basis.
New simplex tableau
x1 = 1 + x2 − x3
x4 = 3 − x3
z = 1 + x2 − x3

Example: Unboundedness
Simplex tableau
x1 = 1 + x2 − x3
x4 = 3 − x3
z = 1 + x2 − x3
Unboundedness
Basis B = {1, 4} with the basic feasible solution (1, 0, 0, 3).
This vertex has two incident edges, but only one increases the objective function.
The edge with increasing objective function is (1 + t, t, 0, 3).
Every point (1 + t, t, 0, 3) for t ≥ 0 is feasible, and the value of the objective function is 1 + t. Therefore, this problem is unbounded.

Example: Degeneracy
Canonical form
Maximize x2
subject to −x1 + x2 ≤ 0, x1 ≤ 2, x1, x2 ≥ 0
Convert to the equation form by adding slack variables x3 and x4.
Initial simplex tableau
x3 = x1 − x2
x4 = 2 − x1
z = x2
Basic feasible solution (0, 0, 0, 2) with the basis {3, 4}.
Same solution with a different basis
x2 = x1 − x3
x4 = 2 − x1
z = x1 − x3
Basic feasible solution (0, 0, 0, 2) with the basis {2, 4}.
Optimal simplex tableau
x1 = 2 − x4
x2 = 2 − x3 − x4
z = 2 − x3 − x4
Optimal solution (2, 2, 0, 0) with the basis {1, 2}.

Simplex tableau in general
Definition
A simplex tableau determined by a feasible basis B is a system of m + 1 linear equations in the variables x1, …, xn and z that has the same set of solutions as the system Ax = b, z = c^T x, and in matrix notation looks as follows:
x_B = p + Q x_N
z = z0 + r^T x_N
where x_B is the vector of the basic variables, x_N is the vector of the non-basic variables, p ∈ R^m, r ∈ R^{n−m}, Q is an m × (n − m) matrix, and z0 ∈ R.
Example
x3 = 5 + x1 − x2
x4 = 2 − x1
z = 3 + x1 + 2x2
x_B = (x3, x4), x_N = (x1, x2), p = (5, 2), Q = ( 1 −1 ; −1 0 ), z0 = 3, r^T = (1, 2)

Simplex tableau in general
Definition
A simplex tableau determined by a feasible basis B is a system of m + 1 linear equations in the variables x1, …, xn and z that has the same set of solutions as the system Ax = b, z = c^T x, and in matrix notation looks as follows:
x_B = p + Q x_N
z = z0 + r^T x_N
where x_B is the vector of the basic variables, x_N is the vector of the non-basic variables, p ∈ R^m, r ∈ R^{n−m}, Q is an m × (n − m) matrix, and z0 ∈ R.
Observation
For each basis B there exists exactly one simplex tableau, and it is given by
Q = −A_B^{-1} A_N
p = A_B^{-1} b
z0 = c_B^T A_B^{-1} b
r = c_N − (c_B^T A_B^{-1} A_N)^T (1)

1 Since A_B x_B + A_N x_N = b and A_B is a regular matrix, it follows that x_B = A_B^{-1} b − A_B^{-1} A_N x_N, where p = A_B^{-1} b and Q = −A_B^{-1} A_N. The objective function is c_B^T x_B + c_N^T x_N = c_B^T A_B^{-1} b + (c_N^T − c_B^T A_B^{-1} A_N) x_N, where z0 = c_B^T A_B^{-1} b and r^T = c_N^T − c_B^T A_B^{-1} A_N.
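These formulas can be checked numerically. A NumPy sketch (function name is mine), evaluated for the basis B = {2, 4, 5} from the worked example on the earlier slides:

```python
import numpy as np

def tableau(A, b, c, B):
    """Simplex tableau (p, Q, z0, r) for a basis B given as 0-based column indices."""
    n = A.shape[1]
    N = [j for j in range(n) if j not in B]
    AB_inv = np.linalg.inv(A[:, B])
    p = AB_inv @ b
    Q = -AB_inv @ A[:, N]                    # note the minus sign
    z0 = c[B] @ p
    r = c[N] - c[B] @ AB_inv @ A[:, N]
    return p, Q, z0, r

# Equation form of the example: max x1 + x2 with slack variables x3, x4, x5.
A = np.array([[-1., 1, 1, 0, 0],
              [ 1., 0, 0, 1, 0],
              [ 0., 1, 0, 0, 1]])
b = np.array([1., 3, 2])
c = np.array([1., 1, 0, 0, 0])

# Basis {2, 4, 5} of the slides, written 0-based.
p, Q, z0, r = tableau(A, b, c, B=[1, 3, 4])
# reproduces x2 = 1 + x1 - x3, x4 = 3 - x1, x5 = 1 - x1 + x3, z = 1 + 2x1 - x3
```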

Properties of a simplex tableau
Simplex tableau in general
x_B = p + Q x_N
z = z0 + r^T x_N
Observation
A basis B is feasible if and only if p ≥ 0.
Observation
If r ≤ 0, then the solution corresponding to the basis B is optimal.
Idea of the pivot step
Choose v ∈ N. Which is the last feasible point of the half-line x(t) for t ≥ 0, where
x_v(t) = t
x_{N\{v}}(t) = 0
x_B(t) = p + Q_{·,v} t ?
Observation
If there exists a non-basic variable x_v such that r_v > 0 and Q_{·,v} ≥ 0, then the problem is unbounded.

Pivot step
Simplex tableau in general
x_B = p + Q x_N
z = z0 + r^T x_N
Find a pivot
If r ≤ 0, then we have an optimal solution.
Otherwise, choose an arbitrary entering variable x_v such that r_v > 0.
If Q_{·,v} ≥ 0, then the problem is unbounded.
Otherwise, find a leaving variable x_u which limits the increment of the entering variable most strictly, i.e. Q_{u,v} < 0 and p_u / (−Q_{u,v}) is minimal.

Pivot rules
Largest coefficient: Choose an improving variable with the largest coefficient r_v.
Largest increase: Choose an improving variable that leads to the largest absolute improvement in z, i.e. c^T (x_new − x_old) is maximal.
Steepest edge: Choose an improving variable whose entering into the basis moves the current basic feasible solution in a direction closest to the direction of the vector c, i.e. one maximizing c^T (x_new − x_old) / ||x_new − x_old||.
Bland's rule: Choose the improving variable with the smallest index, and if there are several possibilities for the leaving variable, also take the one with the smallest index.
Random edge: Select the entering variable uniformly at random among all improving variables.

Pivot step
Simplex tableau in general
x_B = p + Q x_N
z = z0 + r^T x_N
Gaussian elimination
New basic variables are (B \ {u}) ∪ {v} and new non-basic variables are (N \ {v}) ∪ {u}.
The row x_u = p_u + Q_{u,v} x_v + Σ_{j ∈ N\{v}} Q_{u,j} x_j is replaced by the row
x_v = −p_u / Q_{u,v} + (1 / Q_{u,v}) x_u − Σ_{j ∈ N\{v}} (Q_{u,j} / Q_{u,v}) x_j.
The rows x_i = p_i + Q_{i,v} x_v + Σ_{j ∈ N\{v}} Q_{i,j} x_j for i ∈ B \ {u} are replaced by the rows
x_i = (p_i − (Q_{i,v} / Q_{u,v}) p_u) + (Q_{i,v} / Q_{u,v}) x_u + Σ_{j ∈ N\{v}} (Q_{i,j} − (Q_{i,v} / Q_{u,v}) Q_{u,j}) x_j.
The objective function z = z0 + r_v x_v + Σ_{j ∈ N\{v}} r_j x_j is replaced by the objective function
z = (z0 − (r_v / Q_{u,v}) p_u) + (r_v / Q_{u,v}) x_u + Σ_{j ∈ N\{v}} (r_j − (r_v / Q_{u,v}) Q_{u,j}) x_j.
Observation
A pivot step does not change the set of all feasible solutions.
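The substitution above can be written as a row operation on (p, Q, z0, r). A hedged sketch (function and variable names are mine; u and v are positional indices into the basis and non-basis lists), checked against the first pivot of the worked example, where x2 enters and x3 leaves:

```python
import numpy as np

def pivot(p, Q, z0, r, B, N, u, v):
    """One pivot step on the tableau: variable N[v] enters, B[u] leaves."""
    q = Q[u, v]                          # must be non-zero (negative for a ratio-test row)
    # Row u solved for the entering variable x_{N[v]}.
    row = -Q[u, :] / q
    row[v] = 1.0 / q
    p_new, Q_new = p.copy(), Q.copy()
    p_new[u] = -p[u] / q
    Q_new[u, :] = row
    for i in range(Q.shape[0]):          # substitute into the remaining rows
        if i != u:
            p_new[i] = p[i] + Q[i, v] * p_new[u]
            Q_new[i, :] = Q[i, :] + Q[i, v] * row
            Q_new[i, v] = Q[i, v] / q
    z0_new = z0 + r[v] * p_new[u]        # substitute into the objective row
    r_new = r + r[v] * row
    r_new[v] = r[v] / q
    B_new, N_new = list(B), list(N)
    B_new[u], N_new[v] = N[v], B[u]
    return p_new, Q_new, z0_new, r_new, B_new, N_new

# First pivot of the worked example: x2 enters, x3 leaves.
p = np.array([1., 3, 2])
Q = np.array([[1., -1], [-1, 0], [0, -1]])
r = np.array([1., 1])
p2, Q2, z02, r2, B2, N2 = pivot(p, Q, 0.0, r, [3, 4, 5], [1, 2], u=0, v=1)
# reproduces x2 = 1 + x1 - x3, x4 = 3 - x1, x5 = 1 - x1 + x3, z = 1 + 2x1 - x3
```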

Bland's rule
Simplex tableau in general
x_B = p + Q x_N
z = z0 + r^T x_N
Observation
Let B be a basis with corresponding solution x, and let B' be the new basis with corresponding solution x' after a single pivot step. Then x = x' or c^T x < c^T x'. (1)
Observation
If the simplex method loops endlessly, then all bases occurring in the loop correspond to the same vertex. (2)
Theorem
The simplex method with Bland's pivot rule is always finite. (3)

1 Consider the half-line x(t) describing the pivot step and let t̄ = max {t ≥ 0; x(t) ≥ 0}. Clearly, x' = x(t̄). If t̄ = 0, then x' = x(0) = x. If t̄ > 0, then c^T x' = c^T x(t̄) = z0 + r_v t̄ > z0 = c^T x, since r_v > 0.
2 Suppose that the simplex method iterates over bases B^(1), …, B^(k), B^(k+1) = B^(1) with corresponding solutions x^(1), …, x^(k), x^(k+1) = x^(1). By the previous observation, c^T x^(1) ≤ c^T x^(2) ≤ … ≤ c^T x^(k) ≤ c^T x^(k+1) = c^T x^(1). Hence, c^T x^(1) = … = c^T x^(k+1), and the previous observation implies that x^(1) = … = x^(k+1).
3 For the sake of contradiction, assume that the simplex method with Bland's pivot rule loops endlessly, and consider all bases in the loop. Let F be the set of all entering variables and let x_v ∈ F be the variable with the largest index. Let B be a basis in the loop just before x_v enters. Note that the variables of B \ F and N \ F are basic and non-basic, respectively, during the whole loop. Consider the following auxiliary problem (⋆):
Maximize c^T x subject to Ax = b, x_{F\{v}} ≥ 0, x_v ≤ 0, x_{N\F} = 0, x_{B\F} ∈ R^{B\F}
We prove that (⋆) has an optimal solution and is also unbounded, which is a contradiction.
Jirka Fink Optimization methods 60

r_v > 0, since x_v is the entering variable.
r_i ≤ 0 for every i ∈ (F ∩ N) \ {v}, since x_v is the improving variable with the smallest index (Bland's rule).
For every solution x satisfying (⋆) it holds that c^T x = z0 + r^T x_N = z0 + r_v x_v + r_{(F∩N)\{v}}^T x_{(F∩N)\{v}} + r_{N\F}^T x_{N\F} ≤ z0. Hence, the solution corresponding to the basis B is an optimal solution of (⋆).
Now we prove that (⋆) is unbounded. Let B' be a basis in the loop just before x_v leaves, and let Q', p' and r' be the parameters of the simplex tableau corresponding to B'. Let x_u be the entering variable; hence r'_u > 0.
Q'_{v,u} < 0, since x_v is the leaving variable.
From Bland's rule it follows that Q'_{i,u} ≥ 0 for every i ∈ (F ∩ B') \ {v}.
p'_{F∩B'} = 0, since degenerate basic variables are zero.
Consider the half-line x(t) for t ≥ 0, where x_u(t) = t, x_{N'\{u}}(t) = 0 and x_{B'}(t) = p' + Q'_{·,u} t. Then:
x_{(F∩N')\{u}}(t) = 0, since non-basic variables remain zero, and x_i(t) = p'_i + Q'_{i,u} t ≥ 0 for every i ∈ (F ∩ B') \ {v}. Hence, x_{F\{v}}(t) ≥ 0.
x_v(t) = p'_v + Q'_{v,u} t ≤ 0.
x_{N'\F}(t) = 0, since non-basic variables remain zero.
Hence, x(t) satisfies (⋆) for every t ≥ 0. Since r'_u > 0, the objective c^T x(t) = z0 + r'_u t tends to infinity as t → ∞. Hence, (⋆) is unbounded.
Jirka Fink Optimization methods 60

Initial feasible basis
Linear programming problem in the equation form
Maximize c^T x subject to Ax = b and x ≥ 0. Assume that b ≥ 0. (1)
Auxiliary problem
We add auxiliary variables y ∈ R^m to obtain the auxiliary problem:
maximize −y1 − … − ym subject to Ax + Iy = b and x, y ≥ 0.
Observation
An initial feasible basis for the auxiliary problem is B = {y1, …, ym} with the initial tableau
y = b − Ax
z = −1^T b + (1^T A) x
Observation
The following statements are equivalent:
1 The original problem max {c^T x; Ax = b, x ≥ 0} has a feasible solution.
2 The optimal value of the objective function of the auxiliary problem is 0.
3 The auxiliary problem has a feasible solution satisfying y = 0.
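Since the auxiliary basis {y1, …, ym} satisfies A_B = I, its tableau can be written down without any matrix inversion. A sketch of this phase-I construction (function name and the small test system are mine):

```python
import numpy as np

def auxiliary_tableau(A, b):
    """Initial tableau y = b - Ax, z = -1^T b + (1^T A) x of the auxiliary problem."""
    A, b = A.copy(), b.copy()
    neg = b < 0
    A[neg] *= -1          # flip equations with a negative right-hand side
    b[neg] *= -1
    p = b                 # basis B = {y_1, ..., y_m}, so A_B = I and p = b
    Q = -A
    z0 = -b.sum()
    r = A.sum(axis=0)
    return p, Q, z0, r

# A small system; the second equation -x1 + 2x2 = -1 is flipped to x1 - 2x2 = 1.
p, Q, z0, r = auxiliary_tableau(np.array([[1., 1], [-1, 2]]),
                                np.array([2., -1]))
```

Running the simplex method from this tableau either drives all y variables to zero (yielding a feasible basis of the original problem) or certifies infeasibility.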

1 We multiply every equation with a negative right-hand side by −1.

Outline
1 Linear programming
2 Linear, affine and convex sets
3 Simplex method
4 Duality of linear programming
5 Integer linear programming
6 Vertex Cover
7 Matching

Duality of linear programming: Example
Find an upper bound for the following problem
Maximize 2x1 + 3x2
subject to 4x1 + 8x2 ≤ 12, 2x1 + x2 ≤ 3, 3x1 + 2x2 ≤ 4, x1, x2 ≥ 0
Simple estimates
2x1 + 3x2 ≤ 4x1 + 8x2 ≤ 12 (1)
2x1 + 3x2 ≤ (1/2)(4x1 + 8x2) ≤ 6 (2)
2x1 + 3x2 = (1/3)(4x1 + 8x2 + 2x1 + x2) ≤ 5 (3)
What is the best combination of conditions?
Every non-negative linear combination of the inequalities which gives an inequality d1 x1 + d2 x2 ≤ h with d1 ≥ 2 and d2 ≥ 3 provides the upper bound 2x1 + 3x2 ≤ d1 x1 + d2 x2 ≤ h.

1 The first condition
2 A half of the first condition
3 A third of the sum of the first and the second conditions

Duality of linear programming: Example
Consider a non-negative combination y of the inequalities:
Maximize 2x1 + 3x2
subject to 4x1 + 8x2 ≤ 12 / · y1
2x1 + x2 ≤ 3 / · y2
3x1 + 2x2 ≤ 4 / · y3
x1, x2 ≥ 0
Observations
Every feasible solution x and non-negative combination y satisfies
(4y1 + 2y2 + 3y3) x1 + (8y1 + y2 + 2y3) x2 ≤ 12y1 + 3y2 + 4y3.
If 4y1 + 2y2 + 3y3 ≥ 2 and 8y1 + y2 + 2y3 ≥ 3, then 12y1 + 3y2 + 4y3 is an upper bound for the objective function.
Dual program (1)
Minimize 12y1 + 3y2 + 4y3
subject to 4y1 + 2y2 + 3y3 ≥ 2, 8y1 + y2 + 2y3 ≥ 3, y1, y2, y3 ≥ 0

1 The primal optimal solution is x^T = (1/2, 5/4) and the dual optimal solution is y^T = (5/16, 0, 1/4), both with the same objective value 4.75.
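These two solutions can be verified mechanically: checking primal feasibility, dual feasibility, and equal objective values certifies optimality by weak duality. A plain NumPy sketch (no LP solver; the tolerance is mine):

```python
import numpy as np

A = np.array([[4., 8], [2, 1], [3, 2]])
b = np.array([12., 3, 4])
c = np.array([2., 3])

x = np.array([1 / 2, 5 / 4])        # claimed primal optimum
y = np.array([5 / 16, 0, 1 / 4])    # claimed dual optimum

assert (A @ x <= b + 1e-9).all() and (x >= 0).all()    # primal feasible
assert (A.T @ y >= c - 1e-9).all() and (y >= 0).all()  # dual feasible
assert abs(c @ x - b @ y) < 1e-9                       # both objectives equal 4.75
```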

Duality of linear programming: General
Primal linear program
Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual linear program
Minimize b^T y subject to A^T y ≥ c and y ≥ 0
Weak duality theorem
For every primal feasible solution x and every dual feasible solution y, it holds that c^T x ≤ b^T y.
Corollary
If one program is unbounded, then the other one is infeasible.
Duality theorem
Exactly one of the following possibilities occurs:
1 Neither the primal nor the dual has a feasible solution.
2 The primal is unbounded and the dual is infeasible.
3 The primal is infeasible and the dual is unbounded.
4 There are feasible solutions x and y such that c^T x = b^T y.

Dualization
Every linear programming problem has its dual, e.g.
Primal program: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual program: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
The dual of the dual problem is the (original) primal problem:
Dual program: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
Equivalent formulation: −Maximize (−b)^T y subject to (−A^T) y ≤ −c and y ≥ 0
Dual of the dual program: −Minimize (−c)^T x subject to (−A) x ≥ −b and x ≥ 0
Simplified formulation: Maximize c^T x subject to Ax ≤ b and x ≥ 0, which is the original primal program

Dualization: General rules
Primal linear program | Dual linear program
Variables: x1, …, xn | y1, …, ym
Matrix: A | A^T
Right-hand side: b | c
Objective function: max c^T x | min b^T y
Constraints:
i-th constraint has ≤ | y_i ≥ 0
i-th constraint has ≥ | y_i ≤ 0
i-th constraint has = | y_i ∈ R
x_j ≥ 0 | j-th constraint has ≥
x_j ≤ 0 | j-th constraint has ≤
x_j ∈ R | j-th constraint has =

Linear programming: Feasibility versus optimality
Feasibility versus optimality
Finding a feasible solution of a linear program is computationally as difficult as finding an optimal solution.
Using duality
The optimal solutions of the linear programs
Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
are exactly the feasible solutions of the system
Ax ≤ b
A^T y ≥ c
c^T x ≥ b^T y
x, y ≥ 0

Complementary slackness
Theorem
Feasible solutions x and y of the linear programs
Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
are optimal if and only if
x_i = 0 or A_{·,i}^T y = c_i for every i = 1, …, n, and
y_j = 0 or A_{j,·} x = b_j for every j = 1, …, m.
Proof
c^T x = Σ_{i=1}^n c_i x_i ≤ Σ_{i=1}^n (y^T A_{·,i}) x_i = y^T A x = Σ_{j=1}^m y_j (A_{j,·} x) ≤ Σ_{j=1}^m y_j b_j = b^T y
By the duality theorem, x and y are optimal if and only if c^T x = b^T y, which holds if and only if both inequalities above are tight, i.e. exactly when the complementary slackness conditions hold.
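For the duality example of this chapter, the slackness conditions can be checked directly (a sketch; the tolerance is mine):

```python
import numpy as np

A = np.array([[4., 8], [2, 1], [3, 2]])
b = np.array([12., 3, 4])
c = np.array([2., 3])
x = np.array([1 / 2, 5 / 4])      # primal optimum of the example
y = np.array([5 / 16, 0, 1 / 4])  # dual optimum of the example

eps = 1e-9
# x_i = 0 or the i-th dual constraint is tight
assert all(x[i] < eps or abs(A[:, i] @ y - c[i]) < eps for i in range(len(x)))
# y_j = 0 or the j-th primal constraint is tight
assert all(y[j] < eps or abs(A[j] @ x - b[j]) < eps for j in range(len(y)))
```

Here y2 = 0 matches the fact that the second primal constraint is slack (2.25 < 3), while both primal variables are positive and both corresponding dual constraints are tight.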

Proof of duality using the simplex method with Bland's rule
Notation
Primal: Maximize c^T x subject to Ax ≤ b and x ≥ 0
Primal with slack variables: Maximize c̄^T x̄ subject to Ā x̄ = b and x̄ ≥ 0 (1)
Dual: Minimize b^T y subject to A^T y ≥ c and y ≥ 0
Simplex tableau
x̄_B = p + Q x̄_N
z = z0 + r^T x̄_N
The simplex tableau is unique for every basis B:
Q = −Ā_B^{-1} Ā_N
p = Ā_B^{-1} b
z0 = c̄_B^T Ā_B^{-1} b
r = c̄_N − (c̄_B^T Ā_B^{-1} Ā_N)^T
Lemma
If B is a basis with an optimal solution x̄ of the primal problem, then y = (c̄_B^T Ā_B^{-1})^T is an optimal solution of the dual problem and c^T x = b^T y. (2)