Introduction to linear programming


Chapter 2  Introduction to linear programming

2.1 Single-objective optimization problem

We study problems of the following form: given a set $S$ and a function $f : S \to \mathbb{R}$, find, if possible, an element $x \in S$ that minimizes (or maximizes) $f$. Such a problem is called a single-objective optimization problem, or simply an optimization problem. A compact way to write down such a problem is
$$\min \text{ (or } \max\text{)} \;\; f(x) \quad \text{subject to} \quad x \in S,$$
or more simply, $\min \text{ (or } \max\text{)} \, \{ f(x) : x \in S \}$.

The set $S$ is called the feasible set. The function $f$ is called the objective function. An element $y \in S$ is called a feasible solution. The objective function value of a feasible solution $y \in S$ is the value $f(y)$. Often, the set $S$ is described as a set of elements of some other set satisfying certain conditions called constraints. For instance, if $S = \{ x \in \mathbb{R} : 0 < x, \; x \leq 1 \}$, then the inequalities $0 < x$ and $x \leq 1$ are constraints.

An optimization problem with $S$ empty is said to be infeasible. An optimization problem that minimizes the objective function is called a minimization problem. An optimization problem that maximizes the objective function is called a maximization problem. For a minimization problem, an element $x^* \in S$ is an optimal solution if $f(x^*) \leq f(x)$ for all $x \in S$. (In other words, $x^*$ is an element of $S$ that minimizes $f$.) For a maximization problem, an element $x^* \in S$ is an optimal solution if $f(x^*) \geq f(x)$ for all $x \in S$. The objective function value of an optimal solution is called the optimal value of the problem.

Remark. The difference between a minimization problem and a maximization problem is essentially cosmetic, as minimizing a function is the same as maximizing the negative of the function.
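In symbols, the remark says that for any $S$ and $f$,
$$\min \{ f(x) : x \in S \} = -\max \{ -f(x) : x \in S \},$$
and the two problems have exactly the same optimal solutions, so any method for one kind of problem also handles the other.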

Example 2.1.1. Let $S$ be the set of all four-letter English words. Given a word $w \in S$, let $f(w)$ be the number of occurrences of the letter "l" in $w$. Consider the following optimization problem:
$$\max \; f(x) \quad \text{s.t.} \quad x \in S.$$
In this problem, we want to find a four-letter English word having the maximum number of l's. What is the optimal value?

Two obvious questions one could ask about an optimization problem are:
1. How do we find an optimal solution quickly (if one exists)?
2. How do we prove optimality?

Not all optimization problems have optimal solutions. For example, $\max \{ x^3 : x > 0 \}$. A maximization (minimization) problem is unbounded if there exists a sequence of feasible solutions whose objective function values tend to $\infty$ ($-\infty$). An optimization problem that is not unbounded is called bounded. Not all bounded problems have optimal solutions. For example, $\min \{ e^x : x \in \mathbb{R} \}$.

2.2 Definition of a linear programming problem

A (real-valued) function in variables $x_1, x_2, \ldots, x_n$ is said to be linear if it has the form
$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$$
where $a_1, \ldots, a_n \in \mathbb{R}$. A constraint on the variables $x_1, x_2, \ldots, x_n$ is said to be linear if it has the form
$$a_1 x_1 + \cdots + a_n x_n \leq b, \quad \text{or} \quad a_1 x_1 + \cdots + a_n x_n \geq b, \quad \text{or} \quad a_1 x_1 + \cdots + a_n x_n = b,$$
where $b, a_1, a_2, \ldots, a_n \in \mathbb{R}$. The first two types of linear constraints are called linear inequalities, while the third type is called a linear equation.

A linear programming (or linear optimization) problem is an optimization problem with finitely many variables (called decision variables) in which a linear function is minimized (or maximized) subject to a finite number of linear constraints. The feasible set of a linear programming problem is usually called the feasible region.

Example 2.2.1.
$$\max \; x_1 \quad \text{subject to} \quad x_1 + x_2 \leq 4, \quad x_1 + x_2 \geq 3, \quad x_1 \geq 0.$$

Remark. When writing down an optimization problem, if a variable does not have its type specified, it is understood to be a real variable.

A central result in linear optimization is the following:

Theorem 2.1 (Fundamental Theorem of Linear Programming). Given a linear programming problem $(P)$, exactly one of the following holds:
1. $(P)$ is infeasible.
2. $(P)$ is unbounded.
3. $(P)$ has an optimal solution.

We will see at least one proof of Theorem 2.1.

2.3 Linear programming formulation and graphical method

Some real-life problems can be modelled as linear programming (LP) problems. In the case when the number of decision variables is at most two, it might be possible to solve the problem graphically. We now consider an example.

Say you are a vendor of lemonade and lemon juice. Each unit of lemonade requires 1 lemon and 2 litres of water. Each unit of lemon juice requires 3 lemons and 1 litre of water. Each unit of lemonade gives a profit of $3. Each unit of lemon juice gives a profit of $2. You have 6 lemons and 4 litres of water available. How many units of lemonade and lemon juice should you make to maximize profit?

Let $x$ denote the number of units of lemonade to be made and $y$ denote the number of units of lemon juice to be made. Note that $x$ and $y$ cannot be negative. Then, the number of lemons needed to make $x$ units of lemonade and $y$ units of lemon juice is $x + 3y$ and cannot exceed 6. The number of litres of water needed is $2x + y$ and cannot exceed 4. The profit you get by making $x$ units of lemonade and $y$ units of lemon juice is $3x + 2y$, which you want to maximize subject to the conditions we have listed. Hence, you want to solve the LP problem:
$$\max \; 3x + 2y \quad \text{subject to} \quad x + 3y \leq 6, \quad 2x + y \leq 4, \quad x \geq 0, \quad y \geq 0.$$
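Before solving this graphically, the formulation can be handed to any off-the-shelf LP solver as a sanity check. Below is a minimal sketch using SciPy's linprog (an assumption about available tooling; the notes themselves develop solution methods from first principles). Since linprog minimizes, we negate the profit.

```python
from scipy.optimize import linprog

c = [-3, -2]            # maximize 3x + 2y  ==  minimize -3x - 2y
A_ub = [[1, 3],         # lemons:  x + 3y <= 6
        [2, 1]]         # water:  2x +  y <= 4
b_ub = [6, 4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # expected: [1.2 1.6] and 6.8
```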

This problem can be solved graphically as follows. Take the objective function $3x + 2y$ and turn it into the equation of a line $3x + 2y = z$, where $z$ is a parameter. The normal vector of the line, $\begin{pmatrix} 3 \\ 2 \end{pmatrix}$, gives the direction in which the line moves as the value of $z$ increases. (Why?) As we are maximizing, we want the largest $z$ such that the line $3x + 2y = z$ intersects the feasible region.

In Figure 2.1, the lines with $z$ taking on the values 0, 4, and 6.8 have been drawn. From the picture, one can see that if $z$ is greater than 6.8, the line defined by $3x + 2y = z$ will not intersect the feasible region. In other words, no point in the feasible region can have objective function value greater than 6.8. As the line $3x + 2y = 6.8$ does intersect the feasible region, the optimal value is 6.8. To obtain an optimal solution, one simply takes a point in the feasible region that is also on the line defined by $3x + 2y = 6.8$. There is only one such point: $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1.2 \\ 1.6 \end{pmatrix}$. So you want to make 1.2 units of lemonade and 1.6 units of lemon juice to maximize profit.

[Figure 2.1: Graphical solution. The figure shows the feasible region bounded by $x \geq 0$, $y \geq 0$, $x + 3y \leq 6$, and $2x + y \leq 4$, the lines $3x + 2y = 0$, $3x + 2y = 4$, and $3x + 2y = 6.8$, the direction of improvement, and the optimal point $(1.2, 1.6)$.]
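The optimal point can also be computed directly: it is the intersection of the two tight constraints $x + 3y = 6$ and $2x + y = 4$. A quick check of the arithmetic (assuming NumPy is available):

```python
import numpy as np

M = np.array([[1, 3],   # x + 3y = 6
              [2, 1]])  # 2x + y = 4
print(np.linalg.solve(M, [6, 4]))  # expected: [1.2 1.6]
```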

One can in fact show algebraically that 6.8 is the optimal value. Notice that the sum of 0.2 times the first inequality and 1.4 times the second inequality is
$$0.2(x + 3y) + 1.4(2x + y) \leq 0.2 \cdot 6 + 1.4 \cdot 4, \quad \text{that is,} \quad 3x + 2y \leq 6.8.$$
Now, all feasible solutions must satisfy this inequality because they satisfy the first two inequalities. Hence, any feasible solution must have objective function value at most 6.8. So 6.8 is an upper bound on the optimal value. But $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1.2 \\ 1.6 \end{pmatrix}$ is a feasible solution with objective function value equal to 6.8. Hence, 6.8 must be the optimal value. (A numeric check of these multipliers appears at the end of this chapter.)

Now, one might ask if it is always possible to find an algebraic proof like the one above for any linear programming problem. If the answer is yes, how does one find such a proof? We will see answers to this question later on.

Now, consider the following LP problem:
$$\min \; -2x + y \quad \text{subject to} \quad x + y \geq -3, \quad x - 2y \leq 2, \quad x \geq 0, \quad y \geq 0.$$

Exercise. Draw the feasible region of the above problem.

Note that for any $t \geq 0$, $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} t \\ t \end{pmatrix}$ is a feasible solution having objective function value $-t$. As $t \to \infty$, the objective function value of $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} t \\ t \end{pmatrix}$ tends to $-\infty$. The problem is therefore unbounded. Actually, one could also show unboundedness using $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2t + 2 \\ t \end{pmatrix}$ for $t \geq 0$. Later in the course, we will see how to detect unboundedness algorithmically.

Exercise. By inspection, find a different set of solutions that also shows unboundedness.

2.4 Exercises

1. Let $(P)$ denote the following linear programming problem:
$$\min \; 3x + 2y \quad \text{s.t.} \quad x + 3y \geq 6, \quad x - 2y \geq -1, \quad 2x + y \geq 4, \quad x, y \geq 0.$$
(a) Sketch the feasible set (that is, the set of feasible solutions) on the $x$-$y$ plane.
(b) Give an optimal solution and the optimal value.
(c) Suppose that one adds a constraint to $(P)$ requiring that $2y$ be an integer. (Note that the resulting optimization problem will not be a linear programming problem.) Repeat parts (a) and (b) with this additional constraint.

2. Consider the example on lemonade and lemon juice in Section 2.3. Note that the optimal solution requires you to use a fractional number of lemons. Depending on the context, having to use a fractional number of lemons might not be realistic.
(a) Suppose you are not allowed to use a fractional number of lemons but you are still allowed to make fractional units of lemonade and lemon juice. How many units of lemonade and lemon juice should you make to maximize profit? Justify your answer.
(b) Suppose you are not allowed to make fractional units of lemonade and lemon juice. How many units of lemonade and lemon juice should you make to maximize profit? Justify your answer.

3. City A and city B have been struck by a natural disaster. City A has 1,000 people to be rescued and city B has 2,000. You are in charge of coordinating a rescue effort and the situation is as follows: each rescue team sent to city A must have exactly 4 rescue workers and requires 40 litres of fuel; each rescue team sent to city B must have exactly 5 rescue workers and requires 20 litres of fuel; each rescue team can rescue up to 30 people; you have 470 rescue workers and 2,700 litres of fuel in total.
(a) Show that given the resources that you have, not all 3,000 people can be rescued.
(b) Formulate an optimization problem using linear constraints and integer variables that maximizes the number of people rescued subject to the resources that you have. Use the following variables in your formulation: $x_A$ for the number of people rescued from city A; $x_B$ for the number of people rescued from city B; $z_A$ for the number of rescue teams sent to city A; $z_B$ for the number of rescue teams sent to city B.
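Before moving on to systems of linear inequalities, here is the numeric check, promised in Section 2.3, of the multipliers 0.2 and 1.4 (a small sketch assuming NumPy): the combination reproduces the objective $3x + 2y$ on the left and the bound 6.8 on the right.

```python
import numpy as np

y = np.array([0.2, 1.4])        # multipliers for x + 3y <= 6 and 2x + y <= 4
A = np.array([[1, 3], [2, 1]])
b = np.array([6, 4])
print(y @ A)  # [3. 2.]  -- the objective 3x + 2y
print(y @ b)  # 6.8      -- the upper bound
```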

Chapter 3  Systems of linear inequalities

Before we attempt to solve linear programming problems, we need to address a basic question: How does one find a solution to a system of linear constraints? Note that it is sufficient to consider systems of the form $Ax \geq b$, where $m$ and $n$ are positive integers, $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $x = [x_1, \ldots, x_n]^{\mathsf{T}}$ is a vector of real variables, because an inequality $a^{\mathsf{T}} x \leq \alpha$ can be replaced with $-a^{\mathsf{T}} x \geq -\alpha$, and an equation $a^{\mathsf{T}} x = \alpha$ can be replaced with the pair of inequalities $a^{\mathsf{T}} x \geq \alpha$ and $-a^{\mathsf{T}} x \geq -\alpha$, without changing the set of solutions.

Another way to handle equations is as follows. Suppose that the system is
$$Ax \geq b, \quad Bx = d,$$
where $B \in \mathbb{R}^{p \times n}$ and $d \in \mathbb{R}^p$ for some positive integer $p$. One could first apply Gaussian elimination to row-reduce $Bx = d$ and then use the pivot rows to eliminate the pivot variables in $Ax \geq b$, obtaining a system of inequalities without any of the pivot variables. The advantage of this method is that the resulting system has fewer variables and constraints.
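For instance (a small made-up illustration): if the system contains the equation $x_1 - x_2 = 1$ together with the inequality $x_1 + x_2 \geq 3$, solving the equation for the pivot variable gives $x_1 = 1 + x_2$, and substituting turns the inequality into $1 + 2x_2 \geq 3$, a system in the single variable $x_2$.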

3.1 Fourier-Motzkin elimination

Fourier-Motzkin elimination is a classical procedure that can be used to solve a system of linear inequalities $Ax \geq b$ by eliminating one variable at a time. We first illustrate the idea with an example. Consider the following system of linear inequalities:
$$\begin{aligned}
2x_1 - x_2 + x_3 &\geq 4 \quad &(1) \\
x_1 - 2x_2 &\geq -1 \quad &(2) \\
-x_1 + x_2 - x_3 &\geq -1 \quad &(3) \\
-3x_1 - 2x_2 + 3x_3 &\geq -6. \quad &(4)
\end{aligned}$$
The system can be rewritten as:
$$\begin{aligned}
\tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 + 2 &\leq x_1 \quad &(5) \\
2x_2 - 1 &\leq x_1 \quad &(6) \\
x_1 &\leq x_2 - x_3 + 1 \quad &(7) \\
x_1 &\leq -\tfrac{2}{3} x_2 + x_3 + 2. \quad &(8)
\end{aligned}$$
(5) was obtained from (1) by dividing both sides by 2 and rearranging the terms. (6)-(8) were obtained similarly. Clearly, this new system has the same set of solutions as the original system. The system can be written compactly as:
$$\max \left\{ \tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 + 2, \; 2x_2 - 1 \right\} \leq x_1 \leq \min \left\{ x_2 - x_3 + 1, \; -\tfrac{2}{3} x_2 + x_3 + 2 \right\}.$$
From this, one can see that the system (1)-(4) has a solution if and only if
$$\max \left\{ \tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 + 2, \; 2x_2 - 1 \right\} \leq \min \left\{ x_2 - x_3 + 1, \; -\tfrac{2}{3} x_2 + x_3 + 2 \right\},$$
or equivalently,
$$\begin{aligned}
\tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 + 2 &\leq x_2 - x_3 + 1 \\
\tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 + 2 &\leq -\tfrac{2}{3} x_2 + x_3 + 2 \\
2x_2 - 1 &\leq x_2 - x_3 + 1 \\
2x_2 - 1 &\leq -\tfrac{2}{3} x_2 + x_3 + 2
\end{aligned}$$
has a solution. Simplifying the last system gives:
$$\begin{aligned}
\tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 &\geq 1 \quad &(9) \\
-\tfrac{7}{6} x_2 + \tfrac{3}{2} x_3 &\geq 0 \quad &(10) \\
-x_2 - x_3 &\geq -2 \quad &(11) \\
-\tfrac{8}{3} x_2 + x_3 &\geq -3. \quad &(12)
\end{aligned}$$
Note that this system does not contain the variable $x_1$. The algebraic manipulations carried out ensure that the system (1)-(4) has a solution if and only if the system (9)-(12) does. Moreover, given any $x_2$ and $x_3$ satisfying (9)-(12), one can find an $x_1$ such that $x_1, x_2, x_3$ together satisfy (1)-(4).

One can generalize the example above and obtain a procedure for eliminating any variable in a system of linear inequalities. The correctness of the following algorithm is left as an exercise.

Algorithm 3.1 (Fourier-Motzkin Elimination).

Input: An integer $k \in \{1, \ldots, n\}$ and a system of linear inequalities
$$\sum_{j=1}^{n} a_{ij} x_j \geq b_i, \quad i = 1, \ldots, m.$$

Output: A system of linear inequalities
$$\sum_{j=1}^{k-1} a'_{ij} x_j + \sum_{j=k+1}^{n} a'_{ij} x_j \geq b'_i, \quad i = 1, \ldots, m',$$
such that if $x_1, \ldots, x_n$ is a solution to the system in the input, then $x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n$ is a solution to the system in the output, and if $x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n$ is a solution to the system in the output, then there exists $x_k$ such that $x_1, \ldots, x_n$ is a solution to the system in the input.

Steps:
1. Let $K = \{1, \ldots, n\} \setminus \{k\}$. Let $P = \{ i : a_{ik} > 0 \}$, $N = \{ i : a_{ik} < 0 \}$, and $Z = \{ i : a_{ik} = 0 \}$. For each $i \in P$, divide both sides of the inequality $\sum_{j=1}^{n} a_{ij} x_j \geq b_i$ by $a_{ik}$ to obtain
$$\sum_{j \in K} f_{ij} x_j + x_k \geq d_i.$$
For each $i \in N$, divide both sides of the inequality $\sum_{j=1}^{n} a_{ij} x_j \geq b_i$ by $-a_{ik}$ to obtain
$$\sum_{j \in K} f_{ij} x_j - x_k \geq d_i.$$
2. Output the system
$$\sum_{j \in K} (f_{ij} + f_{i'j}) x_j \geq d_i + d_{i'} \quad \text{for all } i \in P \text{ and all } i' \in N,$$
$$\sum_{j \in K} a_{ij} x_j \geq b_i \quad \text{for all } i \in Z.$$
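The steps above translate almost line for line into code. The following sketch (assuming NumPy; the function name and interface are not from the notes) implements one elimination step for a system $Ax \geq b$ and reproduces the first elimination of the worked example in this section:

```python
import numpy as np

def fourier_motzkin(A, b, k):
    """Eliminate variable k (0-indexed) from the system Ax >= b.

    Column k is kept in the output but is identically zero, so the
    remaining columns stay aligned with the original variables.
    """
    m = len(b)
    P = [i for i in range(m) if A[i, k] > 0]   # lower bounds on x_k
    N = [i for i in range(m) if A[i, k] < 0]   # upper bounds on x_k
    Z = [i for i in range(m) if A[i, k] == 0]  # x_k does not appear

    rows, rhs = [], []
    for i in P:
        # Divide by a_ik > 0:  x_k + sum_j f_ij x_j >= d_i
        f_i, d_i = A[i] / A[i, k], b[i] / A[i, k]
        for ip in N:
            # Divide by -a_i'k > 0:  -x_k + sum_j f_i'j x_j >= d_i'
            f_ip, d_ip = A[ip] / -A[ip, k], b[ip] / -A[ip, k]
            rows.append(f_i + f_ip)  # x_k cancels: 1 + (-1) = 0
            rhs.append(d_i + d_ip)
    # If P or N is empty, no pairs are formed and only the Z rows survive.
    for i in Z:
        rows.append(A[i].copy())
        rhs.append(b[i])
    return np.array(rows), np.array(rhs)

# Eliminating x1 (k = 0) from the system (1)-(4) reproduces (9)-(12):
A = np.array([[2., -1, 1], [1, -2, 0], [-1, 1, -1], [-3, -2, 3]])
b = np.array([4., -1, -1, -6])
A2, b2 = fourier_motzkin(A, b, 0)
print(A2)  # rows [0, 1/2, -1/2], [0, -7/6, 3/2], [0, -1, -1], [0, -8/3, 1]
print(b2)  # [1, 0, -2, -3]
```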

Example 3.1.1. Consider again the system
$$\begin{aligned}
2x_1 - x_2 + x_3 &\geq 4 \quad &(1) \\
x_1 - 2x_2 &\geq -1 \quad &(2) \\
-x_1 + x_2 - x_3 &\geq -1 \quad &(3) \\
-3x_1 - 2x_2 + 3x_3 &\geq -6. \quad &(4)
\end{aligned}$$
We first eliminate $x_1$ using Fourier-Motzkin elimination. For each linear inequality in which the coefficient of $x_1$ is nonzero, we divide by the absolute value of the coefficient of $x_1$:
$$\begin{aligned}
x_1 - \tfrac{1}{2} x_2 + \tfrac{1}{2} x_3 &\geq 2 \quad &(5) \\
x_1 - 2x_2 &\geq -1 \quad &(6) \\
-x_1 + x_2 - x_3 &\geq -1 \quad &(7) \\
-x_1 - \tfrac{2}{3} x_2 + x_3 &\geq -2. \quad &(8)
\end{aligned}$$
Adding (5) and (7) gives $\tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 \geq 1$. Adding (5) and (8) gives $-\tfrac{7}{6} x_2 + \tfrac{3}{2} x_3 \geq 0$. Adding (6) and (7) gives $-x_2 - x_3 \geq -2$. Adding (6) and (8) gives $-\tfrac{8}{3} x_2 + x_3 \geq -3$. Hence, the system with $x_1$ eliminated is:
$$\begin{aligned}
\tfrac{1}{2} x_2 - \tfrac{1}{2} x_3 &\geq 1 \quad &(9) \\
-\tfrac{7}{6} x_2 + \tfrac{3}{2} x_3 &\geq 0 \quad &(10) \\
-x_2 - x_3 &\geq -2 \quad &(11) \\
-\tfrac{8}{3} x_2 + x_3 &\geq -3. \quad &(12)
\end{aligned}$$
We now eliminate $x_2$. As before, for each linear inequality in which the coefficient of $x_2$ is nonzero, we divide by the absolute value of the coefficient of $x_2$:
$$\begin{aligned}
x_2 - x_3 &\geq 2 \quad &(13) \\
-x_2 + \tfrac{9}{7} x_3 &\geq 0 \quad &(14) \\
-x_2 - x_3 &\geq -2 \quad &(15) \\
-x_2 + \tfrac{3}{8} x_3 &\geq -\tfrac{9}{8}. \quad &(16)
\end{aligned}$$
There is only one linear inequality with a positive $x_2$ coefficient and three linear inequalities with a negative $x_2$ coefficient. Hence, we derive three new linear inequalities. The new system is:
$$\begin{aligned}
\tfrac{2}{7} x_3 &\geq 2 \quad &(17) \\
-2x_3 &\geq 0 \quad &(18) \\
-\tfrac{5}{8} x_3 &\geq \tfrac{7}{8}. \quad &(19)
\end{aligned}$$
Now, observe that $7 \cdot (17) + (18)$ gives $0 \geq 14$, which is absurd. So the original system has no solution. One can in fact obtain a nonnegative linear combination of inequalities (1)-(4) that gives a contradiction

by tracing our derivations backwards. Note that $0 \geq 14$ comes from
$$7 \cdot (17) + (18) = 7 \cdot [(13) + (14)] + [(13) + (15)] = 8 \cdot (13) + 7 \cdot (14) + (15),$$
which corresponds to
$$16 \cdot (9) + 6 \cdot (10) + (11) = 16 \cdot [(5) + (7)] + 6 \cdot [(5) + (8)] + [(6) + (7)] = 22 \cdot (5) + (6) + 17 \cdot (7) + 6 \cdot (8),$$
which in turn corresponds to
$$11 \cdot (1) + (2) + 17 \cdot (3) + 2 \cdot (4).$$

Remark. In the previous example, each time we applied Fourier-Motzkin elimination to eliminate a variable, the variable to eliminate had a positive coefficient in some inequality and a negative coefficient in some other inequality. What if the coefficients of the variable to eliminate are either all nonnegative or all nonpositive? In this case, we simply do not derive any new inequality, and we form the new system by taking the inequalities in the original system that do not contain the variable to eliminate. For example, all the $x_1$ coefficients are nonnegative in the system
$$\begin{aligned}
x_1 + x_2 &\geq 2 \\
3x_1 - 2x_2 &\geq 0 \\
x_2 &\geq 2.
\end{aligned}$$
The new system with $x_1$ eliminated is simply $x_2 \geq 2$.

3.2 Theorems of the alternative

The previous section contains an example that has no solution because there is a nonnegative linear combination of the linear inequalities that gives a contradiction. In general, such a nonnegative linear combination exists whenever a system of linear inequalities of the form $Ax \geq b$ has no solution. The converse is also true. This is the content of the next theorem.

Theorem 3.1 (Farkas' Lemma). Let $m$ and $n$ be positive integers. Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. A system $Ax \geq b$ of $m$ inequalities in the variables $x_1, \ldots, x_n$ has a solution if and only if there does not exist $y \in \mathbb{R}^m$ such that
$$y \geq 0, \quad y^{\mathsf{T}} A = 0, \quad y^{\mathsf{T}} b > 0.$$

Proof. Suppose that there exists $y \in \mathbb{R}^m$ such that
$$y \geq 0, \quad y^{\mathsf{T}} A = 0, \quad y^{\mathsf{T}} b > 0.$$
Suppose that there also exists $x$ satisfying $Ax \geq b$. As $y \geq 0$, we can multiply both sides of the system $Ax \geq b$ on the left by $y^{\mathsf{T}}$ to obtain $y^{\mathsf{T}} A x \geq y^{\mathsf{T}} b$. But $y^{\mathsf{T}} A x = (y^{\mathsf{T}} A) x = 0$ and $y^{\mathsf{T}} b > 0$ by assumption. So we have $0 > 0$, which is impossible. So there is no solution to the system $Ax \geq b$.

The converse can be proved by induction on $n$. Details of the proof are left as an exercise.

Theorem 3.1 is an important classical result in linear programming. It can be used to derive the following well-known result in linear algebra.

Corollary 3.2. A system $Ax = b$ of $m$ equations has a solution if and only if there does not exist $y \in \mathbb{R}^m$ such that $y^{\mathsf{T}} A = 0$, $y^{\mathsf{T}} b \neq 0$.

Proof. Suppose that there exists $y \in \mathbb{R}^m$ such that $y^{\mathsf{T}} A = 0$ and $y^{\mathsf{T}} b \neq 0$. Multiplying both sides of $Ax = b$ by $y^{\mathsf{T}}$, we obtain $y^{\mathsf{T}} A x = y^{\mathsf{T}} b$. But the left-hand side is 0 while the right-hand side is not. This is impossible. So $Ax = b$ cannot have any solution.

We now prove the converse. Suppose that $Ax = b$ has no solution. Let
$$A' = \begin{bmatrix} A \\ -A \end{bmatrix} \quad \text{and} \quad b' = \begin{bmatrix} b \\ -b \end{bmatrix}.$$
Then the system $Ax = b$ is equivalent to $A' x \geq b'$, and so $A' x \geq b'$ has no solution. By Theorem 3.1, there exist $u, v \in \mathbb{R}^m$ such that
$$\begin{bmatrix} u \\ v \end{bmatrix} \geq 0, \quad [u^{\mathsf{T}} \; v^{\mathsf{T}}] A' = 0, \quad [u^{\mathsf{T}} \; v^{\mathsf{T}}] b' > 0,$$
or equivalently,
$$u, v \geq 0, \quad (u - v)^{\mathsf{T}} A = 0, \quad (u - v)^{\mathsf{T}} b > 0.$$
Setting $y = u - v$, we obtain $y^{\mathsf{T}} A = 0$ and $y^{\mathsf{T}} b \neq 0$. This completes the proof.
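To connect Theorem 3.1 back to Example 3.1.1: the nonnegative combination found there is exactly such a $y$. A quick numeric check (assuming NumPy):

```python
import numpy as np

# y = (11, 1, 17, 2) certifies infeasibility of the system (1)-(4):
# y >= 0, y^T A = 0, and y^T b = 14 > 0, so Ax >= b has no solution.
A = np.array([[ 2, -1,  1],
              [ 1, -2,  0],
              [-1,  1, -1],
              [-3, -2,  3]])
b = np.array([4, -1, -1, -6])
y = np.array([11, 1, 17, 2])
print(y @ A)  # [0 0 0]
print(y @ b)  # 14
```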

3.3 Exercises

1. Consider the following system of linear inequalities:
$$\begin{aligned}
x_1 + x_2 &\geq 1 \\
2x_1 - x_2 - x_3 &\geq 0 \\
x_2 - x_3 &\geq 0 \\
x_3 &\geq 3.
\end{aligned}$$
(a) Use Fourier-Motzkin elimination to eliminate the variables $x_2$ and $x_3$.
(b) Find a solution to the system such that $x_1$ is as small as possible.

2. Consider the following system of linear constraints:
$$\begin{aligned}
x_1 + x_2 &= 4 \\
x_1 - x_2 + 2x_3 &= 2 \\
2x_1 - x_2 - x_3 &\geq 0 \\
x_1, x_2, x_3 &\geq 0.
\end{aligned}$$
Does the system have a solution? If so, find one. If not, give a proof.

3. Prove that the Fourier-Motzkin elimination algorithm is correct.

4. Complete the proof of Theorem 3.1.

5. Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and let $x = [x_1, \ldots, x_n]^{\mathsf{T}}$ be a vector of $n$ variables. Use Theorem 3.1 to prove that the system $Ax \geq b$, $x \geq 0$ has a solution if and only if there does not exist $y \in \mathbb{R}^m$ such that $y \geq 0$, $y^{\mathsf{T}} A \leq 0$, and $y^{\mathsf{T}} b > 0$.
(Hint: Consider the system $A' x \geq b'$ where
$$A' = \begin{bmatrix} A \\ I \end{bmatrix} \quad \text{and} \quad b' = \begin{bmatrix} b \\ 0 \end{bmatrix}.$$)