GETTING STARTED INITIALIZATION

Introduction

Linear programs come in many different forms. Traditionally, one develops the theory for a few special formats. These formats are equivalent to one another and to any other linear program in the sense that simple transformations can be used to pass from one format to another. We will take another approach. It turns out that the ideas we used to develop the simplex method for dictionaries can easily be extended to the most general cases. To connect to the traditional approaches in our text and other books, I introduce the special formats used by Chvátal and those used by Dantzig. We then illustrate the techniques that are used to convert between formats. Then, before we deal with the general case, we recreate Chvátal's elegant way to initialize the simplex method. I then make some general remarks on auxiliary problems. Then we extend the dictionary concept so that it can deal with the most general LPs. We finish up with several examples, to illustrate how easy it is to extend the dictionary method.

1. Special Formats for Linear Programs

1.1 Chvátal's Formats

Chvátal, especially in the first seven chapters of his book, uses two special formats: the dictionary format, which we have already encountered, and his standard format, which is of the form:

    maximize    z = c x
    subject to  A x ≤ b
                x ≥ 0

Here, c is an n-vector, x is an n-vector, A is m × n, and b is an m-vector. This can easily be converted into a dictionary by introducing slack variables that represent the differences between the left- and right-hand sides of A x ≤ b. If b ≥ 0, the dictionary is feasible.
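For concreteness, here is a minimal Python sketch of that conversion; the function name and data layout are my own illustrative choices, not Chvátal's notation. It forms and prints the initial dictionary rows x_{n+i} = b_i - sum_j a_ij x_j and reports whether the dictionary is feasible.

    import numpy as np

    def initial_dictionary(A, b):
        """For constraints  A x <= b, x >= 0  (Chvátal standard form), introduce
        slack variables and print the dictionary rows  x_{n+i} = b_i - sum_j a_ij x_j.
        The dictionary is feasible exactly when b >= 0."""
        m, n = A.shape
        for i in range(m):
            terms = " ".join(f"{-A[i, j]:+g} x{j + 1}" for j in range(n))
            print(f"x{n + i + 1} = {b[i]:g} {terms}")
        return bool(np.all(np.asarray(b) >= 0))   # True if this dictionary is feasible

    # Using the constraint data of the numerical example that appears later in this note:
    # initial_dictionary(np.array([[2, -1, 2], [2, -3, 1], [-1, 1, -2]]), np.array([4, -5, -1]))
    # prints three rows and returns False, since b has negative components.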

1.2 Dantzig's Formats

Dantzig [1963] defines three special formats: the canonical form, Dantzig's standard form (which is different from Chvátal's), and the inequality form. I will generally call the inequality form the symmetric form, for reasons that will become clear when we discuss duality. The three formats in matrix notation are:

The Symmetric Form:

    minimize    z = c x
    subject to  A x ≥ b
                x ≥ 0

The Canonical Form:

    minimize    z = c x
    subject to  A x + I y = b
                x ≥ 0, y ≥ 0

Dantzig's Standard Form:

    minimize    z = c x
    subject to  A x = b
                x ≥ 0

1.3 Transforming Devices

Several devices or tricks allow one to move from one format to another. In addition, sometimes an auxiliary problem has to be solved; we discuss this in the next section. Here are some devices:

    a x = b    <=>   a x ≤ b and −a x ≤ −b          converts equalities to inequalities
    a x ≤ b    <=>   a x + s = b, s ≥ 0             converts inequalities to equalities
    a x ≥ b    <=>   a x − s = b, s ≥ 0             converts inequalities to equalities
    x free     <=>   x = x⁺ − x⁻, x⁺ ≥ 0, x⁻ ≥ 0    converts unsigned (free) variables to signed ones
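To make the devices concrete, here is a minimal Python/NumPy sketch of how one might mechanically convert an LP given with equality constraints and free variables into Chvátal's standard (maximize, ≤, nonnegative) format. The function name and data layout are illustrative assumptions, not part of Chvátal's or Dantzig's notation.

    import numpy as np

    def to_chvatal_standard(c, A_eq, b_eq, free_vars=()):
        """Convert  min c x  s.t.  A_eq x = b_eq,  x_j free for j in free_vars,
        all other x_j >= 0, into Chvátal's standard form
        max c' x'  s.t.  A' x' <= b',  x' >= 0   (illustrative sketch only)."""
        # Device 4: split each free variable x_j into x_j = xp_j - xm_j, xp, xm >= 0.
        # The original column serves for xp_j; a negated column is appended for xm_j.
        extra_cols, extra_costs = [], []
        for j in free_vars:
            extra_cols.append(-A_eq[:, j])
            extra_costs.append(-c[j])
        if extra_cols:
            A_eq = np.hstack([A_eq, np.column_stack(extra_cols)])
            c = np.concatenate([c, extra_costs])
        # Device 1: replace  A x = b  by  A x <= b  and  -A x <= -b.
        A_ub = np.vstack([A_eq, -A_eq])
        b_ub = np.concatenate([b_eq, -b_eq])
        # Minimizing c x is the same as maximizing -c x.
        return -c, A_ub, b_ub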

2. Chvátal's Initialization Method

We follow Chvátal, pp. 39-43, here, except that we use minimization. Suppose we start with a (Chvátal) standard form:

    minimize    z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to  a_i1 x_1 + a_i2 x_2 + ... + a_in x_n ≤ b_i   (i = 1, 2, ..., m)
                x_j ≥ 0                                      (j = 1, 2, ..., n)

Making the usual transformation, we easily obtain a dictionary. If b ≥ 0, the dictionary is feasible and we can start the simplex method. What we examine here is what to do if we do not have b ≥ 0. We have two questions to deal with: is there a feasible solution, and, more particularly, is there a basic feasible solution? Chvátal suggests the following auxiliary problem:

    minimize    x_0
    subject to  a_i1 x_1 + a_i2 x_2 + ... + a_in x_n − x_0 ≤ b_i   (i = 1, 2, ..., m)
                x_j ≥ 0                                            (j = 0, 1, 2, ..., n)

where the original objective z = c_1 x_1 + ... + c_n x_n is carried along as an extra row. This auxiliary problem always has a feasible solution, obtained by taking x_0 sufficiently large. The original problem is feasible if and only if the auxiliary problem has optimal value x_0 = 0.
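As a concrete aid, here is a small Python/NumPy sketch of how the data of this auxiliary problem can be built from the standard-form data (c, A, b). The function name and the variable ordering are my own illustrative conventions, not Chvátal's.

    import numpy as np

    def build_auxiliary(c, A, b):
        """Given  min c x  s.t.  A x <= b, x >= 0  (standard form, minimized),
        return the data of Chvátal's auxiliary problem
            min x0  s.t.  A x - x0 <= b,  x >= 0, x0 >= 0,
        with variables ordered (x0, x1, ..., xn)."""
        m, n = A.shape
        A_aux = np.hstack([-np.ones((m, 1)), A])   # extra first column is the -x0 column
        c_aux = np.zeros(n + 1)
        c_aux[0] = 1.0                             # auxiliary objective w = x0
        c_orig = np.concatenate([[0.0], c])        # original objective, carried along
        return c_aux, c_orig, A_aux, np.array(b, dtype=float)

The original problem is feasible exactly when the auxiliary problem's optimal value is 0; in that case, a final auxiliary dictionary with x0 non-basic gives a feasible dictionary for the original problem.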

To illustrate, let us consider the following numerical example:

    min   z = −x1 + x2 − x3
    s.t.  2x1 −  x2 + 2x3 ≤  4
          2x1 − 3x2 +  x3 ≤ −5
          −x1 +  x2 − 2x3 ≤ −1
          x1, x2, x3 ≥ 0

The corresponding auxiliary problem is:

    min   x0
    s.t.  2x1 −  x2 + 2x3 − x0 ≤  4
          2x1 − 3x2 +  x3 − x0 ≤ −5
          −x1 +  x2 − 2x3 − x0 ≤ −1
          x0, x1, x2, x3 ≥ 0

(with the original objective z = −x1 + x2 − x3 carried along). Introducing the slack variables x4, x5, x6 ≥ 0 leads to the following dictionary:

    x4 =  4 − 2x1 +  x2 − 2x3 + x0
    x5 = −5 − 2x1 + 3x2 −  x3 + x0
    x6 = −1 +  x1 −  x2 + 2x3 + x0
    z  =  0 −  x1 +  x2 −  x3
    w  =         x0

which is infeasible; but if we increase x0 just enough to offset the most negative right-hand side, pivoting x5 out of the basis and x0 in, we get a feasible dictionary:

    x4 =  9       − 2x2 −  x3 + x5
    x0 =  5 + 2x1 − 3x2 +  x3 + x5
    x6 =  4 + 3x1 − 4x2 + 3x3 + x5
    z  =  0 −  x1 +  x2 −  x3
    w  =  5 + 2x1 − 3x2 +  x3 + x5

The only choice we have to decrease w is to increase x2. At x2 = 1, x2 drives x6 out of the basis, leading to the dictionary:

    x4 = 7 − 1.5 x1  + 0.5 x6  − 2.5 x3  + 0.5 x5
    x0 = 2 − 0.25x1 + 0.75x6 − 1.25x3 + 0.25x5
    x2 = 1 + 0.75x1 − 0.25x6 + 0.75x3 + 0.25x5
    z  = 1 − 0.25x1 − 0.25x6 − 0.25x3 + 0.25x5
    w  = 2 − 0.25x1 + 0.75x6 − 1.25x3 + 0.25x5

We have a choice of increasing x1 or x3. We choose x3 because its coefficient is the most negative. x0 is driven out of the basis, yielding the dictionary:

    x4 = 3   −    x1            −    x6 + 2   x0
    x3 = 1.6 − 0.2x1 + 0.6x6 − 0.8x0 + 0.2x5
    x2 = 2.2 + 0.6x1 + 0.2x6 − 0.6x0 + 0.4x5
    z  = 0.6 − 0.2x1 − 0.4x6 + 0.2x0 + 0.2x5
    w  =                          x0

We have a dictionary with w = x0 = 0. So the current dictionary is feasible for the original problem. We simply remove the row for w and all references to x0, and continue by applying the simplex algorithm to minimize z.
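As an independent check on the arithmetic, one can hand the original example to an off-the-shelf solver. The snippet below uses scipy.optimize.linprog purely as a sanity check; it is not part of the dictionary method described in these notes. If the calculations above are continued from the feasible dictionary, they should reach the same optimum.

    from scipy.optimize import linprog

    # Original example:  min -x1 + x2 - x3  s.t.  A x <= b,  x >= 0.
    c = [-1, 1, -1]
    A_ub = [[ 2, -1,  2],
            [ 2, -3,  1],
            [-1,  1, -2]]
    b_ub = [4, -5, -1]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.status, res.fun, res.x)
    # The solver should report success (status 0) with an optimal value of about -0.6,
    # e.g. at x = (0, 2.8, 3.4).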

In the general case, when we first reach the point where w = x0 = 0, there may be other variables besides x0 that we could make non-basic, but we make sure that we make x0 non-basic. In this way we guarantee that x0 will be non-basic at the end, and can just be eliminated from the dictionary when we start to optimize the original objective.

Now we are in a position to state:

Fundamental Theorem of Linear Programming: Every linear program has the following properties:

(i) If it has no optimal solution, then it is either infeasible or unbounded.
(ii) If it has a feasible solution, then it has a basic feasible solution.
(iii) If it has an optimal solution, then it has a basic optimal solution.

Proof: The auxiliary problem results in an optimal solution (note: this assumes no degeneracy; we still need to deal with that) because the auxiliary objective function is bounded below by 0. If the optimal value is > 0, the original problem was infeasible. If the optimal value is 0, we can then apply the simplex algorithm to the original objective. This results either in a family of feasible solutions with the objective unbounded below, or in an optimal solution which is basic. Besides the problem with degeneracy, we also need to show that every linear program can be written in Chvátal's standard form; but basically this is done with the transformations discussed previously.

3. Overview of Auxiliary Problems

Let us consider a linear program in canonical form:

    min   z = c x
    s.t.  A x + I y = b
          x ≥ 0, y ≥ 0

The basic solution y = b is not feasible unless b ≥ 0. The basic solution we have on hand is y = b, x = 0. The problem is that (x, y) need not lie in E_+, the non-negative orthant. So we want to reduce the distance from the basic solution to E_+ by changing the basis. How do we measure the distance? Traditionally, one would use the Euclidean norm (the square root of the sum of squares). But if we try other measures, we can use the simplex method for this problem as well as for the original one.

L_p norms generalize the Euclidean norm: ||x||_p = (Σ_i |x_i|^p)^(1/p). The three most common values of p are 1, 2, and ∞. The L_1 norm is ||x||_1 = Σ_i |x_i|, which would measure the sum of the errors. The L_2 or Euclidean norm is ||x||_2 = (Σ_i x_i^2)^(1/2). The L_∞ norm, the limit of L_p as p → ∞, is ||x||_∞ = max_i |x_i|, which would measure the worst error.

The L_1 and L_∞ norms lead to linear programs; the L_2 norm does not. Traditionally, the L_1 norm is used in auxiliary problems. Chvátal, however, uses the L_∞ approach. We will use the L_1 approach because it is more flexible and, I believe, more effective. But L_∞ is elegant in the simple case. We shall come back to these ideas in Chapter 14 of Chvátal.
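To see why the L_1 and L_∞ choices lead to linear programs, here is one way to write the two phase-one problems for the canonical form. This is a sketch under my own notational assumptions (excess variables t), not necessarily the exact formulation intended above.

    L_1 phase one:    min   t_1 + t_2 + ... + t_m
                      s.t.  A x + I y = b
                            y_i + t_i ≥ 0,  t_i ≥ 0   (i = 1, ..., m)
                            x ≥ 0

    L_∞ phase one:    min   t
                      s.t.  A x + I y = b
                            y_i + t ≥ 0   (i = 1, ..., m)
                            t ≥ 0,  x ≥ 0

At an optimum of the first problem, t_i = max(0, −y_i), so the objective is the L_1 distance from y to the non-negative orthant; in the second, the single variable t bounds the worst violation, which is essentially the role played by the single artificial variable x0 in Chvátal's method above. In both cases, the canonical-form problem is feasible exactly when the optimal value is 0.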

4. Extended Dictionaries

Linear programming problems can be expressed in many equivalent forms. A form that is convenient for our application, and is among the most general, is:

    minimize    z = c x
    subject to  b_l ≤ A x ≤ b_u
                l_j ≤ x_j ≤ u_j   for j = 1, ..., n

Here A is a given m × n matrix and x is an n-vector of decision variables, each with a given lower bound l_j and upper bound u_j. The m-vectors b_l and b_u are given data that define the constraints. With an appropriate implementation, the simplex method takes little more work to solve this extended format than the simpler forms given in textbooks.

Obviously, minimizing z is equivalent to maximizing −z. Thus, maximization problems can be handled using the same methods as minimization problems. The lower bound l_j may take on the value −∞ and the upper bound u_j may take on the value +∞. Similarly, some or all of the components of b_l may be −∞, and some or all of b_u may be +∞. This gives rise to the following terminology for variables:

    l_j                       u_j                         Variable Type
    finite, less than u_j     finite, greater than l_j    bounded
    finite                    +∞                          lower bounded
    −∞                        finite                      upper bounded
    −∞                        +∞                          free
    equals u_j                equals l_j                  fixed

    Table 1: Variable Types

Similar, but somewhat different, terminology is used for the constraints:

    b_i^l                        b_i^u                         Constraint Type
    finite, less than b_i^u      finite, greater than b_i^l    range
    finite                       +∞                            lower bounded
    −∞                           finite                        upper bounded
    −∞                           +∞                            relaxed
    equals b_i^u                 equals b_i^l                  equality

    Table 2: Constraint Types

The simplex algorithm of George Dantzig resolves this linear programming problem in a finite number of iterations, resulting in one of the three following outcomes:

(i) an optimal solution to the problem;
(ii) a linear class of solutions which satisfy the constraints and which give values to z that are unbounded below; or
(iii) a demonstration that the constraints have no solution (independent of z values).

We now replace the constraints b_l ≤ A x ≤ b_u by:

    y = A x,    b_i^l ≤ y_i ≤ b_i^u   for i = 1, ..., m

This leads us to:

    minimize    z = c_1 x_1 + ... + c_n x_n
    subject to  y_i = a_i1 x_1 + ... + a_in x_n   (i = 1, 2, ..., m)
                l_j ≤ x_j ≤ u_j                   for j = 1, ..., n
                b_i^l ≤ y_i ≤ b_i^u               for i = 1, ..., m

This system, together with an assignment of values to the non-basic variables, is a variant of the dictionary representation of Strum and Chvátal [Chvátal, 1983].
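As an illustration of the extended format, here is a small Python sketch of one possible data layout for it, together with a helper that classifies a bound pair in the spirit of Tables 1 and 2. The names and the container are my own choices for illustration, not a prescribed implementation.

    import math
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ExtendedLP:
        """min c x  s.t.  b_l <= A x <= b_u,  l <= x <= u  (illustrative layout)."""
        c: np.ndarray     # n objective coefficients
        A: np.ndarray     # m x n constraint matrix
        b_l: np.ndarray   # m lower limits on A x (entries may be -inf)
        b_u: np.ndarray   # m upper limits on A x (entries may be +inf)
        l: np.ndarray     # n lower bounds on x (entries may be -inf)
        u: np.ndarray     # n upper bounds on x (entries may be +inf)

    def bound_type(lo, hi):
        """Classify a (lower, upper) pair as in Tables 1 and 2."""
        lo_fin, hi_fin = math.isfinite(lo), math.isfinite(hi)
        if lo_fin and hi_fin:
            return "fixed / equality" if lo == hi else "bounded / range"
        if lo_fin:
            return "lower bounded"
        if hi_fin:
            return "upper bounded"
        return "free / relaxed"

For example, bound_type(0.0, math.inf) returns "lower bounded", which is the usual non-negativity restriction x_j ≥ 0.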

The dictionary is said to be feasible, for given values of the independent (non-basic) variables x_1, ..., x_n, if the given values satisfy their bounds and if the resulting values for the dependent (basic) variables y_1, ..., y_m satisfy theirs. If a dictionary, feasible or not, has the property that each non-basic variable is either at its upper bound or at its lower bound, then the dictionary is said to be basic.

Suppose our dictionary, besides being feasible, has the following optimality properties: (i) for every non-basic variable that is strictly above its lower bound we have c_j ≤ 0, and (ii) for every non-basic variable that is strictly below its upper bound we have c_j ≥ 0. Such a dictionary is said to be optimal. It is easy to see that no change in the non-basic variables will decrease z, and hence the current solution is optimal.

Starting with a feasible dictionary, the standard simplex method involves a sequence of feasible dictionaries. Each iteration consists of three steps:

1. Select Column: Choose a non-basic variable, x_s, which violates one of the two optimality properties. Such a non-basic variable is said to be eligible. There may be many such variables; there are also several criteria for choosing among them, some of which we will discuss shortly. If there is no such variable, the current solution is optimal, and we stop with an optimal solution.

2. Select Row: Decrease the non-basic variable if the first optimality condition was violated (increase it if the second optimality condition was violated) until the non-basic variable or one of the basic variables reaches a bound. If there is no limit to the change you can make in x_s, the value of z is unbounded below, and continuing to change x_s will result in ever-decreasing values of z. We then terminate with a class of feasible solutions with the objective unbounded below.

3. Pivot: If the non-basic variable reaches its bound, then the next dictionary is determined by all the non-basic variables remaining the same except for x_s, which is set at the bound it reached; the basic variables are adjusted accordingly. If a basic variable y_r starts to exceed one of its bounds before x_s reaches its bound, then x_s and y_r exchange their roles; that is, y_r becomes non-basic and x_s becomes basic. This is accomplished by a pivot step. The result is the next dictionary.

It can be demonstrated that this process terminates in a finite number of dictionaries; however, there is a technical difficulty. The basic idea in showing finiteness is that each iteration results in a decrease in z, so that dictionaries cannot be repeated. It is fairly easy to see that only a finite number of different dictionaries can result from a given basic, feasible, initial dictionary, and thus there must be a finite number of iterations in the simplex method. The difficulty occurs in Step 2 in the case where some basic variables are already at one of their bounds, so that we cannot change x_s at all. This is called degeneracy. It can be handled (see Chvátal [1983, Chapter 3] for a general discussion, and [Gill et al., 1989] for the specific techniques we use) but would take us too far afield.

Another important observation is that there may be many eligible choices for x_s, but as far as finiteness goes, any one can be chosen. In practical implementations, however, the method (column choice rule) used to choose among the eligible non-basic variables can be critical in determining performance. We will come back to this point later. For now we will assume we use the original method used by Dantzig, which is to take the eligible column with the largest absolute value of c_s.
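As a sketch of how Step 1 and the Dantzig column-choice rule might look in code for the bounded-variable dictionary, consider the following Python fragment. The array names (cbar for the current z-row coefficients of the non-basic variables, x for their current values, l and u for their bounds) are my own illustrative conventions, and a tolerance replaces exact comparisons.

    import numpy as np

    def choose_entering(cbar, x, l, u, tol=1e-9):
        """Dantzig-style column choice for a bounded-variable dictionary (sketch).

        A non-basic variable is eligible if moving it within its bounds would
        decrease z: strictly above its lower bound with cbar > 0 (decrease it),
        or strictly below its upper bound with cbar < 0 (increase it).
        Returns (index, direction) with direction -1 or +1, or None if optimal."""
        can_decrease = (x > l + tol) & (cbar > tol)
        can_increase = (x < u - tol) & (cbar < -tol)
        eligible = np.where(can_decrease | can_increase)[0]
        if eligible.size == 0:
            return None                                       # current dictionary is optimal
        s = eligible[np.argmax(np.abs(cbar[eligible]))]       # largest |c_s| among eligibles
        return s, (-1 if cbar[s] > 0 else +1)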

5. Examples

We will do some examples in class.

References:

Chvátal, Vašek, Linear Programming, W. H. Freeman, 1983, pp. 39-43.
Dantzig, George B., Linear Programming and Extensions, Princeton University Press, 1963.