
MS-E2140 Linear Programming
Lecture 1: Motivations and background (course book chapters 1.1-1.4), v. 1.1

Topics: linear programming problems and examples; problem manipulations and standard form; graphical representation of linear programming problems; modeling absolute values; modeling piecewise linear convex functions.

Motivations

Linear Programming (LP) problems form an important class of optimization problems with many practical applications. Linear Programming has applications, for example, in production planning, resource allocation, investment decisions, military operations, scheduling, transportation and logistics, inventory management, and game theory. Efficient LP solution methods have been developed and are nowadays routinely used within optimization packages to solve even very large problems. The Simplex algorithm for solving Linear Programs (Dantzig, 1947) is considered one of the top 10 algorithms developed in the 20th century (http://www.siam.org/pdf/news/637.pdf). LP theory and duality form the basis for the development of more sophisticated methods for solving hard combinatorial optimization problems.

Historical background

The rapid and systematic development of Linear Programming as a practical tool for modeling and solving optimization problems started with the invention of the Simplex algorithm in 1947. The Simplex algorithm was invented by Dantzig to solve military planning problems, and is one of the most practically effective methods for solving LPs (earlier methods are due to Fourier, 1824, and de la Vallée Poussin, 1910). Linear Programming models were also studied in economics in the late 1930s by Leonid Kantorovich. Tjalling Koopmans (Nobel Prize in 1975 with Kantorovich) and Wassily Leontief (Nobel Prize in 1973) also played important roles. The work of von Neumann on game theory (1928) and duality also proved to have strong connections with the fundamental theory of linear programming. Paper: Dantzig, G. (2002). Linear Programming, reprinted in Operations Research, Vol. 50, No. 1, pp. 42-47, http://www.jstor.org/stable/3088447

Linear programming problems

Minimize or maximize a linear objective function subject to a set of m linear constraints:

Minimize  z = c_1 x_1 + c_2 x_2 + ... + c_n x_n        (objective function; the c_j are cost coefficients)
subject to
          a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≥ b_1
          a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2    (each constraint can be ≤, ≥, or =)
          ...
          a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
          x_1, x_2, ..., x_n ≥ 0                        (non-negativity constraints)

The x_j are the decision variables. In general, not all variables may be required to be ≥ 0, and some may be required to be ≤ 0.

A Linear Programming problem (LP) can be expressed in matrix form. The problem

Minimize  z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
subject to
          a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≥ b_1
          ...
          a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≥ b_m
          x_1, x_2, ..., x_n ≥ 0

becomes

Minimize z = c'x subject to Ax ≥ b, x ≥ 0

Notation:
c = (c_1, c_2, ..., c_n) ∈ R^n
x = (x_1, x_2, ..., x_n) ∈ R^n
b = (b_1, b_2, ..., b_m) ∈ R^m
A = [a_11 ... a_1n; ... ; a_m1 ... a_mn] ∈ R^(m×n)   (the constraint matrix)

We will write a_i' to denote the i-th row of A, so that the i-th constraint can be written as a_i'x ≥ b_i. Note: the symbol ' denotes transposition.

Terminology. For an LP with a minimization objective function:
A vector x that satisfies all the constraints is called a feasible solution.
The set of all feasible solutions is called the feasible region.
A feasible solution x* that minimizes the objective function (i.e., such that c'x* ≤ c'x for any feasible solution x) is called an optimal solution.
The value c'x* is called the optimal cost.

Example:
minimize  x_1 - 3x_2
s.t.      -x_1 - x_2 ≥ -6    constraint (1)
          x_1 - 2x_2 ≥ -8    constraint (2)
          x_1 + x_2 ≥ 2      constraint (3)
          x_1, x_2 ≥ 0

[Figure: the feasible region in the (x_1, x_2) plane, bounded by constraints (1)-(3) and the non-negativity constraints.]

Examples of LPs

Product blending. A manufacturer of plastics is planning to blend a new product by mixing four chemical compounds. Each compound contains three chemicals A, B, and C in different percentages. The composition and unit cost of each compound are given below (each column gives the composition and cost of one compound):

             Comp. 1   Comp. 2   Comp. 3   Comp. 4
% of A          30        10        35        25
% of B          20        65        35        40
% of C          40        15        25        30
Cost/kg         20        30        20        30

The new product must contain 25% of element A, at least 35% of element B, and at least 20% of element C. Moreover, to avoid side effects, compounds 1 and 2 cannot exceed 25% and 30% of the total, respectively.

What is the cheapest way of blending the new product?

Decision variables: x_i = fraction of compound i (i = 1, ..., 4) used to produce one unit of the new product.

Mathematical formulation:

minimize  z = 20x_1 + 30x_2 + 20x_3 + 30x_4         (cost to produce one unit of the new product)
s.t.      x_1 + x_2 + x_3 + x_4 = 1
          30x_1 + 10x_2 + 35x_3 + 25x_4 = 25         (% of element A)
          20x_1 + 65x_2 + 35x_3 + 40x_4 ≥ 35         (% of element B)
          40x_1 + 15x_2 + 25x_3 + 30x_4 ≥ 20         (% of element C)
          x_1 ≤ 0.25,  x_2 ≤ 0.30                    (maximum % of compounds 1 and 2)
          x_1, x_2, x_3, x_4 ≥ 0                     (non-negativity)
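
As an illustration of how such a formulation can be handed to an off-the-shelf LP solver, the sketch below sets up the blending LP with scipy.optimize.linprog (the choice of solver is ours, not the course's; any LP package would do). Since linprog only accepts upper-bound inequalities, the two ≥ constraints are negated, and the caps on x_1 and x_2 are passed as variable bounds.

```python
from scipy.optimize import linprog

# Cost per kg of each compound (objective coefficients)
c = [20, 30, 20, 30]

# Equality constraints: fractions sum to 1, exactly 25% of element A
A_eq = [[1, 1, 1, 1],
        [30, 10, 35, 25]]
b_eq = [1, 25]

# ">=" constraints for elements B and C, written as "<=" by negating both sides
A_ub = [[-20, -65, -35, -40],   # at least 35% of element B
        [-40, -15, -25, -30]]   # at least 20% of element C
b_ub = [-35, -20]

# 0 <= x1 <= 0.25, 0 <= x2 <= 0.30, x3 and x4 only non-negative
bounds = [(0, 0.25), (0, 0.30), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x, res.fun)   # optimal blend and its cost per unit, if res.success
```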

The problem in matrix form is:

Minimize  z = c'x
s.t.      A_1 x = b_1
          A_2 x ≥ b_2
          A_3 x ≤ b_3
          x ≥ 0

where x = (x_1, x_2, x_3, x_4) and

A_1 = [ 1   1   1   1 ]     b_1 = ( 1 )
      [ 30  10  35  25]           ( 25)

A_2 = [ 20  65  35  40]     b_2 = ( 35)
      [ 40  15  25  30]           ( 20)

A_3 = [ 1   0   0   0 ]     b_3 = ( 0.25)
      [ 0   1   0   0 ]           ( 0.30)

c = (20, 30, 20, 30)

Production planning. A food firm is planning the production for the next 4 months, and can use a warehouse to stock food in each month. The maximum storage capacity of the warehouse is 100 tons, and each ton in stock at the end of any month has a cost of 5 EUR. In month i, there is a production cost c_i for each ton, and a maximum production of p_i tons is possible. An extra production of q_i tons is possible at extra cost: each ton produced in excess of p_i in month i has an additional cost of e_i. The firm has contracted to provide d_i tons in each month i. The warehouse is empty at the beginning of the first month, and must be empty at the end of the last month. Finally, the regular production in each month must be at least 10% of the total production of the first three months (balanced production).

Row i gives the monthly production cost, demand, and capacities of month i:

Month (i)   Production cost (c_i)   Demand (d_i)   Max. production (p_i)   Max. extra production (q_i)   Extra production cost (e_i)
    1                10                 120                140                        50                            6
    2                10                 160                150                        75                            6
    3                10                 300                140                        70                            6
    4                10                 200                160                        80                            6

Objective: minimize the costs over the four months.

Decision variables:
x_i: regular production in month i = 1, ..., 4
s_i: extra production in month i = 1, ..., 4
y_i: warehouse stock at the end of month i = 1, ..., 3

Note: y_4 must always be 0, so we don't need it explicitly in the model (the warehouse must be empty at the end of the last month).

Mathematical formulation:

minimize  z = Σ_{i=1}^{4} (10x_i + 16s_i) + Σ_{i=1}^{3} 5y_i              (total production and storage cost)
s.t.      x_1 + s_1 = 120 + y_1
          x_2 + s_2 + y_1 = 160 + y_2
          x_3 + s_3 + y_2 = 300 + y_3
          x_4 + s_4 + y_3 = 200
          (the production in month i plus the incoming stock must equal the demand of that month plus the stock at the end of the month)
          x_i ≥ 0.1 (x_1 + s_1 + x_2 + s_2 + x_3 + s_3),  i = 1, ..., 4    (balanced production)
          x_1 ≤ 140, x_2 ≤ 150, x_3 ≤ 140, x_4 ≤ 160                       (maximum regular production)
          s_1 ≤ 50, s_2 ≤ 75, s_3 ≤ 70, s_4 ≤ 80                           (maximum extra production)
          x_i ≥ 0, s_i ≥ 0, i = 1, ..., 4;  y_i ≥ 0, i = 1, ..., 3          (non-negativity)
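
For completeness, here is a sketch of the same model fed to scipy.optimize.linprog (again our choice of tool, not the lecture's). The variable vector stacks x_1..x_4, s_1..s_4, y_1..y_3; the 100-ton warehouse capacity mentioned in the problem description is included as an upper bound on the y_i.

```python
import numpy as np
from scipy.optimize import linprog

d = [120, 160, 300, 200]   # demand per month
p = [140, 150, 140, 160]   # max regular production
q = [50, 75, 70, 80]       # max extra production

# Variable vector: [x1..x4, s1..s4, y1..y3]  (11 variables)
n = 11
c = np.r_[np.full(4, 10.0),   # regular production cost
          np.full(4, 16.0),   # extra production cost (10 + 6)
          np.full(3, 5.0)]    # storage cost

# Flow balance: x_i + s_i + y_{i-1} = d_i + y_i  (with y_0 = y_4 = 0)
A_eq = np.zeros((4, n))
b_eq = np.array(d, dtype=float)
for i in range(4):
    A_eq[i, i] = 1.0              # x_i
    A_eq[i, 4 + i] = 1.0          # s_i
    if i >= 1:
        A_eq[i, 8 + i - 1] = 1.0  # incoming stock y_{i-1}
    if i <= 2:
        A_eq[i, 8 + i] = -1.0     # outgoing stock y_i

# Balanced production: 0.1*(x_1+s_1+...+x_3+s_3) - x_i <= 0
A_ub = np.zeros((4, n))
b_ub = np.zeros(4)
for i in range(4):
    A_ub[i, 0:3] = 0.1
    A_ub[i, 4:7] = 0.1
    A_ub[i, i] -= 1.0

# 0 <= x_i <= p_i, 0 <= s_i <= q_i, 0 <= y_i <= 100 (warehouse capacity)
bounds = [(0, p[i]) for i in range(4)] + \
         [(0, q[i]) for i in range(4)] + [(0, 100)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.fun)   # minimum total cost over the four months
print(res.x)     # the corresponding production and storage plan
```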

Problem manipulations

LPs can be equivalently expressed in different forms:

Minimize c'x is equivalent to Maximize -c'x.

An equality constraint a_11 x_1 + ... + a_1n x_n = b_1 can be equivalently replaced by the two inequalities a_11 x_1 + ... + a_1n x_n ≤ b_1 and a_11 x_1 + ... + a_1n x_n ≥ b_1.

An inequality constraint a_11 x_1 + ... + a_1n x_n ≥ b_1 can be replaced by the equivalent inequality -(a_11 x_1 + ... + a_1n x_n) ≤ -b_1.

An inequality constraint a_11 x_1 + ... + a_1n x_n ≤ b_1 is equivalent to a_11 x_1 + ... + a_1n x_n + s_1 = b_1, where s_1 ≥ 0 is a new variable called a slack variable. Similarly, a_11 x_1 + ... + a_1n x_n ≥ b_1 becomes a_11 x_1 + ... + a_1n x_n - s_1 = b_1, where s_1 ≥ 0.

Any free variable x_i (i.e., not restricted to be ≥ 0 or ≤ 0) can be replaced by the expression (x_i^+ - x_i^-), where x_i^+ ≥ 0 and x_i^- ≥ 0.

Examples

Example 1: the LP
minimize z = 2x_1 + 5x_2
s.t. 3x_1 + 2x_2 ≥ 6, 2x_1 + 9x_2 ≤ 8, x_1 ≥ 0, x_2 free
is equivalent to
minimize z = 2x_1 + 5(x_2^+ - x_2^-)
s.t. 3x_1 + 2(x_2^+ - x_2^-) ≥ 6, 2x_1 + 9(x_2^+ - x_2^-) ≤ 8, x_1, x_2^+, x_2^- ≥ 0

Example 2: the LP
minimize z = 2x_1 - x_2 + 4x_3
s.t. x_1 + x_2 + x_4 ≤ 2, 3x_2 - x_3 = 5, x_3 + x_4 ≥ 3, x_1 ≥ 0, x_3 ≤ 0
is equivalent to (writing all inequalities as ≥)
minimize z = 2x_1 - x_2 + 4x_3
s.t. -x_1 - x_2 - x_4 ≥ -2, 3x_2 - x_3 ≥ 5, -3x_2 + x_3 ≥ -5, x_3 + x_4 ≥ 3, x_1 ≥ 0, x_3 ≤ 0

Standard form problems

By using the previous transformations, we can always express any LP in the following form, called the standard form:

Minimize  z = c'x
s.t.      Ax = b, x ≥ 0

All constraints are equality constraints and all variables must be non-negative. Starting from any LP, we can put it in standard form by:
1. Replacing each free variable x_i with (x_i^+ - x_i^-), where x_i^+ ≥ 0 and x_i^- ≥ 0
2. Transforming any inequality constraint into an equality constraint by adding slack variables

Any LP and its standard form are equivalent: given a feasible solution to the original LP we can construct a feasible solution to its standard form with the same cost, and vice versa.

Example: the LP
minimize z = 2x_1 + 4x_2
s.t. x_1 + x_2 ≥ 3, 3x_1 + 2x_2 = 14, x_1 ≥ 0, x_2 free
has the standard form
minimize z = 2x_1 + 4x_2^+ - 4x_2^-
s.t. x_1 + x_2^+ - x_2^- - s_1 = 3, 3x_1 + 2x_2^+ - 2x_2^- = 14, x_1, x_2^+, x_2^-, s_1 ≥ 0

The Simplex algorithm is designed to solve LPs in standard form. This is because it is based on the following operations:
1. multiply the coefficients and right-hand side of a constraint by a nonzero real number
2. apply operation 1 to a constraint and sum the result to another constraint
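
A small sanity check of this equivalence, under the assumption that a numerical solver such as scipy.optimize.linprog is available: solving the original LP and its standard form should return the same optimal cost, and x_2 can be recovered as x_2^+ - x_2^-.

```python
from scipy.optimize import linprog

# Original LP: min 2x1 + 4x2  s.t.  x1 + x2 >= 3,  3x1 + 2x2 = 14,  x1 >= 0, x2 free
res_orig = linprog(c=[2, 4],
                   A_ub=[[-1, -1]], b_ub=[-3],          # x1 + x2 >= 3
                   A_eq=[[3, 2]], b_eq=[14],
                   bounds=[(0, None), (None, None)],    # x2 is free
                   method="highs")

# Standard form: variables (x1, x2p, x2m, s1), all >= 0
res_std = linprog(c=[2, 4, -4, 0],
                  A_eq=[[1, 1, -1, -1],                 # x1 + x2p - x2m - s1 = 3
                        [3, 2, -2, 0]],                 # 3x1 + 2(x2p - x2m) = 14
                  b_eq=[3, 14],
                  bounds=[(0, None)] * 4,
                  method="highs")

# Both formulations should report the same optimal cost
print(res_orig.fun, res_std.fun)
print("x2 recovered from standard form:", res_std.x[1] - res_std.x[2])
```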

Operations 1 and 2, when applied to a system of linear equations, leave the set of feasible solutions unchanged. However, when applied to a system of inequalities they do change the set of solutions.

Example:
(a) x_1 - 2x_2 ≥ 0
(b) x_2 ≥ 0
Multiplying (b) by 2 and adding it to (a) gives the system x_1 ≥ 0, x_2 ≥ 0, whose solution set is different.

[Figure: the two solution sets in the (x_1, x_2) plane.]
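
A quick numeric check of this example (with the signs as reconstructed above): the point (1, 1) satisfies the transformed system but not the original one, so the two inequality systems are not equivalent.

```python
# Original system: (a) x1 - 2*x2 >= 0, (b) x2 >= 0.
# Adding 2*(b) to (a) gives the system x1 >= 0, x2 >= 0.
def in_original(x1, x2):
    return x1 - 2 * x2 >= 0 and x2 >= 0

def in_transformed(x1, x2):
    return x1 >= 0 and x2 >= 0

print(in_original(1, 1), in_transformed(1, 1))   # False True
```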

Graphical representation of an LP

By transforming any equality constraint into two inequalities, we can also always rewrite an LP in the following general form, where any non-negativity constraints are also included in the definition of A:

Minimize  z = c'x
s.t.      Ax ≥ b

Example: the LP
minimize z = 3x_1 - 2x_2 + x_3
s.t. 2x_1 - x_2 ≥ 1, x_2 + x_3 ≥ 5, x_1 ≥ 0, x_3 ≥ 0
has, in this form,

A = [ 2  -1   0 ]     x = (x_1, x_2, x_3)     b = (1, 5, 0, 0)
    [ 0   1   1 ]
    [ 1   0   0 ]
    [ 0   0   1 ]

Each constraint of an LP in this form with n variables defines a region of R^n, called a halfspace, containing all x ∈ R^n satisfying the constraint. The feasible region is then the intersection of all these halfspaces.

For LPs in general form with two or three variables we can visualize the feasible region, and even solve them graphically:

minimize  z = x_1 - 3x_2
s.t.      -x_1 - x_2 ≥ -6    (1)
          x_1 - 2x_2 ≥ -8    (2)
          x_1 + x_2 ≥ 2      (3)
          x_1 ≥ 0            (4)
          x_2 ≥ 0            (5)

[Figure: the halfspace defined by x_1 + x_2 ≥ 2 and the halfspace defined by x_1 + x_2 ≤ 2, separated by the line x_1 + x_2 = 2.]

The feasible region is the intersection of the halfspaces defined by (1)-(5).

For any value z, the set of all solutions x with cost z forms a line c'x = x_1 - 3x_2 = z (sometimes called an isoprofit line). This line is perpendicular to the vector c = (1, -3).

[Figure: the feasible region bounded by constraints (1)-(5).]

[Figure: isoprofit lines x_1 - 3x_2 = z for z = 0 (the isoprofit line through the origin), z = -6, z = -12 and z = -38/3, together with the vector c = (1, -3). The optimal solution is x* = (4/3, 14/3) with z* = -38/3.]

The value of z decreases along the direction -c, so minimizing z corresponds to moving the line x_1 - 3x_2 = z in the direction of -c. The minimum value of z is obtained when the line cannot be moved further without leaving the feasible region. When this happens the line intersects a corner point of the feasible region, which is an optimal solution.
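
The graphical solution can be cross-checked numerically; the sketch below solves the same LP with scipy.optimize.linprog (our tool choice), with the ≥ constraints rewritten as ≤ and the signs as reconstructed above.

```python
from scipy.optimize import linprog

# min x1 - 3*x2 subject to constraints (1)-(5), all passed to linprog as "<="
res = linprog(c=[1, -3],
              A_ub=[[1, 1],      # (1)  x1 + x2 <= 6
                    [-1, 2],     # (2) -x1 + 2*x2 <= 8
                    [-1, -1]],   # (3)  x1 + x2 >= 2
              b_ub=[6, 8, -2],
              bounds=[(0, None), (0, None)],   # (4), (5)
              method="highs")
print(res.x, res.fun)   # expected: approximately (4/3, 14/3) and -38/3
```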

It is not always the case that an LP has a unique optimal solution. Consider the following LP:

minimize  z = c_1 x_1 + c_2 x_2
s.t.      -x_1 + x_2 ≤ 1
          x_1 ≥ 0
          x_2 ≥ 0

[Figure: the feasible region bounded by the line -x_1 + x_2 = 1 and the non-negativity constraints.]

Depending on the choice of c, the following cases can occur:

Unique optimal solution. For c = (1, 1), x* = (0, 0) is the only optimal solution; the isoprofit line c'x = x_1 + x_2 = 0 (z = 0) touches the feasible region only at x*.

Bounded set of alternative optimal solutions. For c = (1, 0), any solution x = (0, x_2) with 0 ≤ x_2 ≤ 1 is optimal; the set of optimal solutions is the segment between (0, 0) and (0, 1).

Unbounded set of alternative optimal solutions. For c = (0, 1), any solution x = (x_1, 0) with x_1 ≥ 0 is optimal.

Unbounded optimal cost (unbounded problem). For c = (-1, -1), there is no optimal solution: c'x = -x_1 - x_2 → -∞ as x_1, x_2 → ∞, so the optimal cost is -∞.

Finally, the feasible region can be empty, in which case there are no feasible solutions and the LP is infeasible:

minimize  z = c_1 x_1 + c_2 x_2
s.t.      -x_1 + x_2 ≤ 1
          x_1 + x_2 ≤ -2
          x_1 ≥ 0
          x_2 ≥ 0

[Figure: the halfspaces -x_1 + x_2 ≤ 1 and x_1 + x_2 ≤ -2 have no point in common with the non-negative quadrant.]
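
These cases are also what a solver reports; the sketch below (again using scipy.optimize.linprog, our choice) solves the unbounded and the infeasible variants of the example and prints the corresponding status codes.

```python
from scipy.optimize import linprog

# Feasible set of the example: -x1 + x2 <= 1 plus x1, x2 >= 0
# (linprog treats all variables as non-negative by default).
A_ub, b_ub = [[-1, 1]], [1]

# c = (-1, -1): the cost is unbounded below; linprog signals this via res.status
# (status 3 means unbounded in scipy's convention)
res_unb = linprog(c=[-1, -1], A_ub=A_ub, b_ub=b_ub, method="highs")
print(res_unb.status, res_unb.message)

# Adding x1 + x2 <= -2 empties the feasible region (status 2 means infeasible)
res_inf = linprog(c=[1, 1], A_ub=A_ub + [[1, 1]], b_ub=b_ub + [-2], method="highs")
print(res_inf.status, res_inf.message)
```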

Modeling absolute values

LPs can be used to model situations where variables represent absolute values, under some assumptions on the sign of their coefficients. Consider the problem:

minimize  Σ_{i=1}^{n} c_i |x_i|
s.t.      Ax ≥ b

where x = (x_1, ..., x_n) and c_i ≥ 0, i = 1, ..., n. We can note that |x_i| is the smallest number y_i satisfying y_i ≥ x_i and y_i ≥ -x_i, so we can rewrite the problem as the LP

minimize  Σ_{i=1}^{n} c_i y_i
s.t.      Ax ≥ b
          y_i ≥ x_i,   i = 1, ..., n
          y_i ≥ -x_i,  i = 1, ..., n

This is correct because c_i ≥ 0 and we are minimizing: an optimal solution must have y_i = max{x_i, -x_i} = |x_i|; otherwise we could reduce y_i and obtain a feasible solution with lower cost.

Another possibility to rewrite the same problem as an LP is to:
replace x_i with (x_i^+ - x_i^-), where x_i^+ ≥ 0, x_i^- ≥ 0
replace |x_i| with (x_i^+ + x_i^-)

Example: for x_i = -4 we can write x_i^+ = 0 and x_i^- = 4, so that x_i = x_i^+ - x_i^- and |x_i| = x_i^+ + x_i^-.

In this way we obtain the LP:

minimize  Σ_{i=1}^{n} c_i (x_i^+ + x_i^-)
s.t.      Ax^+ - Ax^- ≥ b
          x^+, x^- ≥ 0

where x^+ = (x_1^+, ..., x_n^+) and x^- = (x_1^-, ..., x_n^-). Since we are minimizing and c_i ≥ 0, an optimal solution must have x_i^+ = 0 or x_i^- = 0 for every i, because otherwise we could reduce both x_i^+ and x_i^- by the same amount and obtain a better feasible solution.

Example. Consider the problem:

Minimize  z = 2|x_1| + x_2
s.t.      x_1 + x_2 ≥ 4

The two alternative reformulations that transform it into an LP are:

(a) Minimize z = 2y_1 + x_2, s.t. x_1 + x_2 ≥ 4, y_1 ≥ x_1, y_1 ≥ -x_1

(b) Minimize z = 2x_1^+ + 2x_1^- + x_2, s.t. x_1^+ - x_1^- + x_2 ≥ 4, x_1^+ ≥ 0, x_1^- ≥ 0
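
Both reformulations can be checked against each other with a solver; the sketch below assumes scipy.optimize.linprog and treats x_1 and x_2 as free variables, since the example states no sign restrictions. The two optimal costs should coincide.

```python
from scipy.optimize import linprog

# Reformulation (a): variables (x1, x2, y1) with y1 playing the role of |x1|
res_a = linprog(c=[0, 1, 2],
                A_ub=[[-1, -1, 0],   # x1 + x2 >= 4
                      [1, 0, -1],    # y1 >= x1
                      [-1, 0, -1]],  # y1 >= -x1
                b_ub=[-4, 0, 0],
                bounds=[(None, None), (None, None), (0, None)],
                method="highs")

# Reformulation (b): variables (x1p, x1m, x2) with x1 = x1p - x1m
res_b = linprog(c=[2, 2, 1],
                A_ub=[[-1, 1, -1]],  # x1p - x1m + x2 >= 4
                b_ub=[-4],
                bounds=[(0, None), (0, None), (None, None)],
                method="highs")

print(res_a.fun, res_b.fun)   # the two reformulations give the same optimal cost
```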

Modeling piecewise linear functions

A function f: R^n → R is called convex if for every x, y ∈ R^n and for every λ ∈ [0,1] we have

f(λx + (1-λ)y) ≤ λf(x) + (1-λ)f(y)

f is concave if it satisfies the above with "≤" replaced by "≥".

Note: all points λx + (1-λ)y, for λ ∈ [0,1], lie on the line segment joining x and y. Informally, f is convex if its graph lies below the line segment connecting (x, f(x)) and (y, f(y)).

[Figure: a convex function, whose graph lies below the segment joining (x, f(x)) and (y, f(y)), and a concave function, whose graph lies above it.]

A function of the form f(x) = max_{i=1,...,m} (c_i'x + d_i) is called piecewise linear convex (piecewise linear concave if max is replaced by min). For example, with m = 3 and x ∈ R, the maximum of three affine functions c_i x + d_i is piecewise linear convex.

Piecewise linear convex (or concave) functions can be used to approximate convex (or concave) functions.

[Figures: a piecewise linear convex function obtained as the maximum of three affine functions of x, and a convex function approximated by a piecewise linear convex function.]
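
A quick way to convince oneself that a maximum of affine functions is convex is to spot-check the defining inequality numerically. The affine pieces below are illustrative choices of ours, not the ones from the lecture figure.

```python
import numpy as np

# A piecewise linear convex function f(x) = max_i (c_i*x + d_i), x in R
pieces = [(-1.0, 2.0), (0.5, 0.0), (1.0, -3.0)]   # illustrative (c_i, d_i) pairs

def f(x):
    return max(c * x + d for c, d in pieces)

# Spot-check f(l*x + (1-l)*y) <= l*f(x) + (1-l)*f(y) on random points
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-10, 10, size=2)
    lam = rng.uniform(0, 1)
    assert f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
print("convexity inequality holds on all sampled points")
```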

LPs can be used to model the minimization of a piecewise linear convex function

(1)  Minimize  z = max_{i=1,...,m} (c_i'x + d_i)   s.t.  Ax ≥ b

and the maximization of a piecewise linear concave function

(2)  Maximize  z = min_{i=1,...,m} (c_i'x + d_i)   s.t.  Ax ≥ b

The idea is to introduce z explicitly as a variable and impose z ≥ c_i'x + d_i (for problem (1)) or z ≤ c_i'x + d_i (for problem (2)) for every i. Problems (1) and (2) become the LPs:

(1)  Minimize z,  s.t.  Ax ≥ b,  z ≥ c_i'x + d_i, i = 1, ..., m
(2)  Maximize z,  s.t.  Ax ≥ b,  z ≤ c_i'x + d_i, i = 1, ..., m
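
The sketch below applies this reformulation to a small illustrative instance (the three affine pieces and the box constraint -5 ≤ x ≤ 5 are our own choices, standing in for Ax ≥ b), and compares the LP optimum with a brute-force evaluation of the piecewise maximum.

```python
import numpy as np
from scipy.optimize import linprog

pieces = [(-1.0, 2.0), (0.5, 0.0), (1.0, -3.0)]   # illustrative (c_i, d_i) pairs

# Variables (x, z): minimize z subject to z >= c_i*x + d_i for every piece,
# i.e. c_i*x - z <= -d_i, with the box constraint -5 <= x <= 5.
A_ub = [[c, -1.0] for c, d in pieces]
b_ub = [-d for c, d in pieces]
res = linprog(c=[0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(-5, 5), (None, None)], method="highs")

# Brute-force comparison on a fine grid of x values
grid = np.linspace(-5, 5, 10001)
brute = min(max(c * x + d for c, d in pieces) for x in grid)
print(res.fun, brute)   # the two values should agree closely
```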

Example. Consider a firm producing and selling a product. A typical strategy is to introduce economies of scale to encourage bulk buying. Suppose the unit selling price is set to 10 EUR for the first 1,000 units, but is reduced to 7 EUR for any amount in excess of that. Suppose an LP is used in planning the production, with variables x_i representing the amount produced (and sold) in month i = 1, ..., n, subject to a set of constraints Ax ≤ b (with x = (x_1, ..., x_n)). The profit p_i that can be made from the sales in month i is:

p_i(x_i) = 10x_i,                                      if 0 ≤ x_i ≤ 1,000
p_i(x_i) = 10·1,000 + 7(x_i - 1,000) = 3,000 + 7x_i,   if x_i ≥ 1,000

which is equivalent to

p_i(x_i) = min{10x_i, 10,000 + 7(x_i - 1,000)} = min{10x_i, 7x_i + 3,000}

Example (cont.) The problem of maximizing the profit over the n months can then be modeled as the maximization of a piecewise linear concave function:

Maximize  Σ_{i=1}^{n} p_i(x_i)
s.t.      Ax ≤ b

which can be rewritten as the LP

Maximize  Σ_{i=1}^{n} p_i
s.t.      Ax ≤ b
          p_i ≤ 10x_i,         i = 1, ..., n
          p_i ≤ 7x_i + 3,000,  i = 1, ..., n
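
A minimal sketch of this profit model for a two-month toy instance, assuming scipy.optimize.linprog and a single illustrative capacity constraint x_1 + x_2 ≤ 2500 standing in for Ax ≤ b; maximization is handled by negating the objective.

```python
from scipy.optimize import linprog

# Variables: (x1, x2, p1, p2).  Maximize p1 + p2  ==  minimize -(p1 + p2).
A_ub = [
    [1, 1, 0, 0],     # x1 + x2 <= 2500 (illustrative stand-in for Ax <= b)
    [-10, 0, 1, 0],   # p1 <= 10*x1
    [0, -10, 0, 1],   # p2 <= 10*x2
    [-7, 0, 1, 0],    # p1 <= 7*x1 + 3000
    [0, -7, 0, 1],    # p2 <= 7*x2 + 3000
]
b_ub = [2500, 0, 0, 3000, 3000]

res = linprog(c=[0, 0, -1, -1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2 + [(None, None)] * 2, method="highs")
print(res.x, -res.fun)   # production plan and the corresponding total profit
```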