Introduction to Mathematical Programming

Introduction to Mathematical Programming, Lecture 22. Ming Zhong (JHU), AMS Fall 2018, October 22, 2018.

Table of Contents: 1. The Simplex Method, Part II

The Setting

Consider the linear programming problem: Minimize $c^\top x$, subject to $Ax = b$, $x \ge 0$.

The feasible region (a polyhedral set) $S = \{x \in \mathbb{R}^n : Ax = b, x \ge 0\}$ is in standard form.

If $Ax \le b$ is given, we can add slack variables $y \in \mathbb{R}^m$ such that $Ax + y = b$ with $y \ge 0$.

If $Ax \ge b$ is given, we can add surplus variables $z \in \mathbb{R}^m$ such that $Ax - z = b$ with $z \ge 0$.

If any part of $x$ is unrestricted, e.g. $x_j$, then write $x_j = x_j^+ - x_j^-$ with $x_j^+, x_j^- \ge 0$.

We also assume that $S$ is non-empty and that the rank of $A$ is $m$.
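
To make the conversion concrete, here is a small NumPy sketch (not from the lecture; the data and names are made up) that turns an inequality-form problem, minimize $c^\top x$ subject to $Ax \le b$, $x \ge 0$, into the standard form above by appending slack variables:

```python
import numpy as np

# Hypothetical example data (not from the lecture):
#   minimize 4*x1 + 6*x2  subject to  2*x1 + 3*x2 <= 12,  x1 + 2*x2 <= 8,  x >= 0.
c = np.array([4.0, 6.0])
A = np.array([[2.0, 3.0],
              [1.0, 2.0]])
b = np.array([12.0, 8.0])

m, n = A.shape

# Add one slack variable per "<=" row: A x + y = b with y >= 0.
A_std = np.hstack([A, np.eye(m)])         # standard-form constraint matrix [A | I]
c_std = np.concatenate([c, np.zeros(m)])  # slack variables carry zero cost

print(A_std)
print(c_std)
print(b)   # right-hand side is unchanged (already nonnegative here)
```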

The Algorithm

The algorithm is given (and broken down) as follows.

Step 0: Find a starting extreme point $x$ with basis $B$ and set $k = 1$. Here $B = \{a_j\}$ for $j \in J \subset \{1, 2, \dots, n\}$, where the $a_j$ are column vectors of $A$, and $N = \{a_j\}$ for $j \notin J$; the index $j$ always refers to a column of $A$. To save memory, one can store only the index sets for $B$ and $N$. How to find a starting extreme point $x$ with a given basis $B$ will be discussed later.

Step 1: Let $x_k$ be the extreme point associated with the basis $B_k$. Calculate $c_B B^{-1} N - c_N$; if this vector is nonpositive, stop: $x_k$ is an optimal extreme point. Otherwise, pick the component of $c_B B^{-1} N - c_N$ that is the most positive (and note the corresponding column index $j$).

The Algorithm, cont.

Continuing:

Let $y_j = B^{-1} a_j$. If $y_j \le 0$, stop; the objective value is unbounded along the ray $\{x_k + \lambda \binom{-y_j}{e_j} : \lambda \ge 0\}$, where the components are ordered as (basic, nonbasic) and $e_j$ is a vector of zeros except for a 1 in the position corresponding to $x_j$. If $y_j \not\le 0$, go to Step 2.

Step 2: Compute the index $l$ at which the following ratio attains its minimum (with $\bar b_k = B_k^{-1} b$): $\min_{1 \le i \le m} \{ (\bar b_k)_i / (y_j)_i : (y_j)_i > 0 \}$.

The Algorithm, cont.

A few more steps:

Form the new extreme point as follows: for the basic positions $i = 1, \dots, m$ with $i \ne l$, $(x_{k+1})_i = (\bar b_k)_i - \frac{(\bar b_k)_l}{(y_j)_l} (y_j)_i$; the entering variable takes the value $(x_{k+1})_j = \frac{(\bar b_k)_l}{(y_j)_l}$; and all other components of $x_{k+1}$ are equal to zero.

Form the new basis by deleting the $l$-th column of $B$ and introducing $a_j$ in its place. Increase $k$ by 1 and repeat Step 1.
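
The steps above can be collected into a short NumPy sketch. This is only an illustrative implementation of the procedure as described, not the lecture's code: the function name simplex_iterations, the tolerance 1e-9, and the example data are all made up, and a feasible starting basis is assumed to be given (Step 0). Degenerate cycling is not handled.

```python
import numpy as np

def simplex_iterations(A, b, c, basis, max_iter=100):
    """Minimize c @ x subject to A @ x = b, x >= 0, from a given feasible basis.

    `basis` lists m column indices of A whose basic solution B^{-1} b is assumed
    nonnegative (a starting extreme point, as in Step 0).  Returns (x, status),
    where status is "optimal" or "unbounded".
    """
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B_inv = np.linalg.inv(A[:, basis])        # fine for small teaching examples
        b_bar = B_inv @ b                         # current values of the basic variables
        # Step 1: reduced costs  z_j - c_j = c_B B^{-1} a_j - c_j  for all columns
        z_minus_c = c[basis] @ B_inv @ A - c
        z_minus_c[basis] = 0.0                    # basic columns never enter
        j = int(np.argmax(z_minus_c))
        if z_minus_c[j] <= 1e-9:                  # nonpositive vector: current point optimal
            x = np.zeros(n)
            x[basis] = b_bar
            return x, "optimal"
        y = B_inv @ A[:, j]                       # y_j = B^{-1} a_j
        if np.all(y <= 1e-9):                     # unbounded ray (Step 1)
            return None, "unbounded"
        # Step 2: minimum-ratio test over rows with (y_j)_i > 0
        ratios = np.full(m, np.inf)
        pos = y > 1e-9
        ratios[pos] = b_bar[pos] / y[pos]
        l = int(np.argmin(ratios))
        basis[l] = j                              # a_j replaces the l-th basic column
    raise RuntimeError("iteration limit reached")

# Made-up example: minimize -3*x1 - 5*x2, with slacks x3, x4 already appended.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])
c = np.array([-3.0, -5.0, 0.0, 0.0])
x, status = simplex_iterations(A, b, c, basis=[2, 3])   # slack columns as starting basis
print(status, x)   # expected: optimal [4. 6. 0. 0.]
```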

Discussion: the Initial Extreme Point

Recall that the simplex method starts with an initial extreme point.

Finding an initial extreme point of the set $S = \{x \in \mathbb{R}^n : Ax = b, x \ge 0\}$ amounts to decomposing $A$ into $B$ and $N$ with $B^{-1} b \ge 0$.

An initial extreme point may not be conveniently available; we can overcome this by introducing artificial variables.

There are two methods, and both begin by putting the problem into the standard form $Ax = b$, $x \ge 0$ with $b \ge 0$ (if some $b_i < 0$, replace the $i$-th constraint by $-1$ times the original).

Two-Phase Method: note that $x \in \mathbb{R}^n$ and $x_a \in \mathbb{R}^m$. We add an extra (artificial) vector so that $Ax + x_a = b$, with $x, x_a \ge 0$.

Two-Phase Method

Continuing:

Obviously, $x = 0$ and $x_a = b$ represent an extreme point of the enlarged system. A feasible solution of the original system is obtained only if $x_a = 0$. We can use the simplex method to

Minimize $u^\top x_a$, subject to $Ax + x_a = b$, $x, x_a \ge 0$,

where $u$ is a vector of all 1's. This is the Phase I problem. At the end of this problem, either $x_a \ne 0$ or $x_a = 0$.

If $x_a \ne 0$, we conclude that the original system is inconsistent (the feasible region is empty).

If $x_a = 0$, we obtain an extreme point of the original system. Starting from this extreme point, we run Phase II on the original problem.
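
A minimal sketch of the Phase I construction, reusing the hypothetical simplex_iterations helper from the earlier sketch; the data here are again made up. The artificial columns form an identity block, so $x = 0$, $x_a = b$ gives the obvious starting basis.

```python
import numpy as np

# Hypothetical data: find a feasible point of A x = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([4.0, 5.0])          # b >= 0 is assumed (flip rows by -1 first otherwise)
m, n = A.shape

# Phase I problem: minimize u^T x_a  subject to  A x + x_a = b,  x, x_a >= 0.
A1 = np.hstack([A, np.eye(m)])                   # artificial columns form an identity block
c1 = np.concatenate([np.zeros(n), np.ones(m)])   # cost u = (1, ..., 1) on artificials only
start_basis = list(range(n, n + m))              # x = 0, x_a = b is the starting extreme point

x1, status = simplex_iterations(A1, b, c1, basis=start_basis)
if status == "optimal" and c1 @ x1 < 1e-9:
    print("feasible extreme point of the original system:", x1[:n])
else:
    print("original system is inconsistent")
```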

Big-M Method

We use an artificial vector $x_a$ together with a large positive cost coefficient $M > 0$ (a scalar), so that each artificial variable is driven to zero:

Minimize $c^\top x + M u^\top x_a$, subject to $Ax + x_a = b$, $x, x_a \ge 0$.

$M$ should be picked very large. We can also apply the Two-Phase idea to this new problem without specifying $M$: after executing Phase I we reach $x_a = 0$, and a suitably large $M$ can then be identified. Such an $M$ forces every artificial variable to zero; if this is not possible, the original problem is not feasible.
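
For comparison, a sketch of the Big-M setup on the same made-up data, again reusing the hypothetical simplex_iterations helper from the earlier sketch; the value M = 1e4 is an arbitrary "large enough" choice, which is the usual practical caveat of this method.

```python
import numpy as np

# Same made-up data as in the Phase I sketch; original costs c are also made up.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0])
m, n = A.shape
M = 1e4                                          # arbitrary large penalty

A_bigM = np.hstack([A, np.eye(m)])
c_bigM = np.concatenate([c, M * np.ones(m)])     # objective c^T x + M u^T x_a

x_bigM, status = simplex_iterations(A_bigM, b, c_bigM, basis=list(range(n, n + m)))
print(status, x_bigM[:n])   # the artificial part x_bigM[n:] ends up at zero here
```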

Duality in Linear Programming

Consider the linear program in its standard form:

(P) Minimize $c^\top x$, subject to $Ax = b$, $x \ge 0$.

Let us refer to this as the primal problem P. The following problem is called the dual of the foregoing problem:

(D) Maximize $b^\top y$, subject to $A^\top y \le c$, with $y$ unrestricted.

We will discuss the relationship between P and D.
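
As a small made-up illustration of this construction (not from the slides): take the primal (P) minimize $4x_1 + 3x_2$ subject to $x_1 + x_2 = 5$, $x_1, x_2 \ge 0$, so that $A = [1\ 1]$, $b = 5$, $c = (4, 3)^\top$. The dual (D) is maximize $5y$ subject to $A^\top y \le c$, i.e. $y \le 4$ and $y \le 3$, with $y$ unrestricted. The primal optimum is $x = (0, 5)$ with value 15, and the dual optimum is $y = 3$ with value 15, matching the duality results below.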

Primal and Dual Problems

Theorem. Let the pair of linear programs P and D be as defined before. Then:

Weak duality: $c^\top x \ge b^\top y$ for any feasible solution $x$ to P and any feasible solution $y$ to D.

Unbounded-infeasible relationship: if P is unbounded, then D is infeasible, and vice versa.

Strong duality: if both P and D are feasible, then they both have optimal solutions with the same objective value.

Proof. For any pair $(x, y)$ of feasible solutions to P and D, we have $c^\top x \ge y^\top A x = y^\top b$.

The Proof, cont.

Proof (continued). If P is unbounded, then D must be infeasible; otherwise any feasible solution to D would provide a lower bound on the objective value of P (by the previous part). Similarly, if D is unbounded, then P is infeasible.

Now suppose both P and D are feasible. Neither can be unbounded (by the previous part), so both have optimal solutions. Let $x = \begin{pmatrix} x_B \\ x_N \end{pmatrix}$ be an optimal basic feasible solution to P, where $x_B = B^{-1} b$ and $x_N = 0$.

The Proof, cont.

Proof (continued). Consider $y^\top = c_B B^{-1}$, where $c = \begin{pmatrix} c_B \\ c_N \end{pmatrix}$. We have

$y^\top A = c_B B^{-1} [B, N] = [c_B, \; c_B B^{-1} N] \le [c_B, \; c_N]$,

since $c_B B^{-1} N \le c_N$ by the optimality condition for the given basic feasible solution. Hence $y$ is feasible for D; moreover $y^\top b = c_B B^{-1} b = c^\top x$, so by the previous part, $y$ solves D.
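
A quick numerical check of this construction, as a sketch on the same made-up example used in the simplex code above (not part of the lecture): the vector $y$ solving $B^\top y = c_B$ is dual feasible and matches the primal objective value.

```python
import numpy as np

# Made-up standard-form example (the same one used in the simplex sketch above):
# minimize -3*x1 - 5*x2  subject to  x1 + x3 = 4,  2*x2 + x4 = 12,  x >= 0.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])
c = np.array([-3.0, -5.0, 0.0, 0.0])

basis = [0, 1]                        # optimal basis found by the simplex sketch
B = A[:, basis]
x = np.zeros(4)
x[basis] = np.linalg.solve(B, b)      # x_B = B^{-1} b, x_N = 0

y = np.linalg.solve(B.T, c[basis])    # y^T = c_B B^{-1}, i.e. solve B^T y = c_B

print(np.all(A.T @ y <= c + 1e-9))    # dual feasibility A^T y <= c: True
print(c @ x, b @ y)                   # equal objective values: -42.0 -42.0
```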

Consequences of the Previous Theorem

Corollary. If D is infeasible, then P is unbounded or infeasible, and vice versa.

Corollary. Let $x$ and $y$ be feasible solutions to the primal and dual problems P and D, respectively. Then $x$ and $y$ are optimal to P and D if and only if $v_j x_j = 0$ for $j = 1, \dots, n$, where

$v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = c - A^\top y$.

More on the Second Corollary

$v$ is the vector of slack variables in the dual constraints for the dual solution $y$.

This condition is called the complementary slackness condition. The primal and dual solutions are called complementary slack solutions.

A given feasible solution of P is optimal if and only if there exists a complementary slack dual feasible solution, and vice versa.
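
Continuing the same made-up example as in the previous sketch, the complementary slackness condition can be checked directly:

```python
import numpy as np

# Same made-up example and optimal solutions as in the previous sketch.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0]])
c = np.array([-3.0, -5.0, 0.0, 0.0])
x = np.array([4.0, 6.0, 0.0, 0.0])    # optimal primal solution
y = np.array([-3.0, -2.5])            # optimal dual solution from the previous check

v = c - A.T @ y                       # dual slacks
print(v * x)                          # elementwise products v_j * x_j: all zeros
print(np.allclose(v * x, 0.0))        # True: x and y are complementary slack solutions
```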

The Proof

Proof. Let $x$ and $y$ be primal and dual feasible solutions. We have $Ax = b$, $x \ge 0$, and $A^\top y + v = c$, $v \ge 0$, where $v$ is the vector of slack variables for $y$. Hence

$c^\top x - b^\top y = (A^\top y + v)^\top x - y^\top A x = v^\top x$.

When $x$ and $y$ are both optimal, by the previous theorem $c^\top x = b^\top y$; thus $v^\top x = 0$, and since $v, x \ge 0$ this means $v_j x_j = 0$ for every $j$. Conversely, if $v^\top x = 0$ then $c^\top x = b^\top y$, and weak duality shows that both are optimal.