A Concise Pivoting-Based Algorithm for Linear Programming

International Journal of Advances in Management Science (IJ-AMS), Volume 3, Issue 3, August 2014

Zhongzhen Zhang *1, Huayu Zhang 2
1 School of Management, Wuhan University of Technology, Wuhan, China
2 Institute of Applied Informatics and Formal Description Methods, Karlsruhe Institute of Technology, Germany
*1 zhangzz32@26.com; 2 huayu.zhang@kit.edu

Abstract

This paper presents a pivoting-based method, named the pivoting algorithm, for linear programming. In contrast to the simplex method, it never requires additional variables to be added to the problem. For a problem of m general constraints in n variables, the size of the table used for computation is (m + 1) × (n + 1). Generally speaking, this is much smaller than the table used by the simplex method, even though the two methods are equivalent to each other.

Keywords

Linear Programming; Pivoting Operation; Positive Basic Cone

Introduction

Linear programming (LP) is a fundamental branch of optimization and is widely used in economics, management and engineering. The best-known method for solving LP is the simplex method. However, this method is not entirely satisfactory. When an LP problem is solved by the simplex method, it must first be transformed into a standard form in which every variable is nonnegative and all other constraints are equalities. According to the simplex method, if a problem has a general inequality constraint (containing at least two variables), it must be changed into an equality by introducing a slack or surplus variable; if the lower bound of a variable is not zero, it has to be shifted to zero; and if a variable is free, it is replaced with two nonnegative variables. Moreover, artificial variables are often required before the formal computation can be conducted. These preliminary procedures enlarge the original problem and increase the computational burden. The simplex method is a column-processing method in which the entering and leaving vectors are columns of the coefficient matrix of the equality constraints. It ties all the general constraints together, which makes it difficult to identify redundant constraints during the iteration process.

This paper presents another kind of pivoting-based method for solving LP. This method never requires additional variables. For a problem of m general constraints in n variables, the size of the table used for computation is just (m + 1) × (n + 1). It is a row-processing method in which the entering and leaving vectors are coefficient vectors of constraints. When a problem of k independent equality constraints in n variables is solved by this method, all k equalities are first eliminated by pivoting operations, resulting in a problem that has only inequality constraints in n - k variables. Moreover, some redundant constraints may be easily detected and deleted during the iteration process.

This paper is organized as follows. In Section 1 we introduce a kind of pivoting operation. In Section 2 we present the pivoting algorithm for LP. In Section 3 we discuss the relationship between the pivoting algorithm and the simplex algorithm. In Section 4 we show how to obtain the optimal solution of a linear program by solving its dual. In Section 6 we give some concluding remarks.

Pivoting Operation for Linear Programming

We formulate linear programming in the form

    min z = cx
    s.t. a_i x = b_i, i = 1,...,l,
         a_i x ≥ b_i, i = l+1,...,m,                          (1)

where c is an n-dimensional row vector, x is an unknown n-dimensional column vector, a_i is an n-dimensional row vector, and b_i is a scalar, i = 1,...,m.
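As a concrete illustration of the data of (1) (not part of the original paper), the following Python/NumPy sketch fixes a layout that the later sketches in this paper reuse; the tiny two-variable instance is made up purely for illustration.

    import numpy as np

    # Hypothetical instance of (1):  min x1 + 2*x2
    # s.t. x1 + x2 = 1 (equality),  x1 >= 0,  x2 >= 0 (inequalities).
    c = np.array([1.0, 2.0])          # objective row vector c
    A = np.array([[1.0, 1.0],         # a_1 (equality row)
                  [1.0, 0.0],         # a_2 (inequality row)
                  [0.0, 1.0]])        # a_3 (inequality row)
    b = np.array([1.0, 0.0, 0.0])
    l = 1                             # the first l rows of A are equalities
    m, n = A.shape                    # m general constraints in n variables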
In our method the following concepts are employed. A set of maximal linearly independent vectors among a1,...,am is called a basis of (1). Vectors in the basis are called basic, otherwise nonbasic. The equalities and inequalities associated with basic vectors are called basic, otherwise nonbasic. The system formed by the basic equalities and inequalities is called a basic system, and its solution set is called a basic cone. The solution of the system of equations corresponding to the basic (in)equalities, i.e., the vertex of the basic cone, is called the basic solution.
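For instance (an illustration, not from the paper), the basic solution of a chosen basis is obtained by solving the square system formed by the basic constraints treated as equations; continuing the hypothetical instance above:

    import numpy as np

    # Data of the hypothetical instance above, repeated for completeness.
    A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    b = np.array([1.0, 0.0, 0.0])
    c = np.array([1.0, 2.0])

    basis = [0, 1]                          # a hypothetical choice of basic rows
    A_B, b_B = A[basis], b[basis]
    x_vertex = np.linalg.solve(A_B, b_B)    # basic solution (vertex): [0. 1.]
    w = np.linalg.solve(A_B.T, c)           # weights with c = w[0]*a_1 + w[1]*a_2: [2. -1.]

The weight vector w computed at the end is the quantity whose nonnegativity is tested in the next subsection (for this particular basis it is not nonnegative).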

For a basic cone of (1), let I0 and I1 be the index sets of the basic equalities and basic inequalities respectively. Suppose that c is a linear combination of the basic vectors,

    c = Σ_{j∈I0∪I1} w_0j a_j.

If w_0j ≥ 0 for every j ∈ I1, the basic cone is called a positive basic cone.

Geometrically, our method for solving (1) begins with a positive basic cone whose vertex is denoted by x^(1). If x^(1) is feasible, then it is optimal for (1) by the Karush-Kuhn-Tucker (KKT) conditions. Otherwise there exists a violating constraint, say a_r x ≥ b_r, such that a_r x^(1) < b_r. If the half space a_r x ≥ b_r is inconsistent with the cone, there is no feasible solution. Otherwise x^(1) is projected along a particular edge of the cone onto the boundary a_r x = b_r of the half space a_r x ≥ b_r so as to constitute a new positive basic cone with a value of the objective function not less than cx^(1). Then the same process repeats.

Algebraically, the iteration is accomplished by the following pivoting operation. For a basic cone of (1) with vertex x^(1), let I0, I1, I2 and I3 be the index sets of the basic equalities, basic inequalities, nonbasic inequalities and nonbasic equalities respectively. Suppose that

    c = Σ_{j∈I0∪I1} w_0j a_j,                                 (2)
    a_i = Σ_{j∈I0∪I1} w_ij a_j,  i ∈ I2∪I3.                   (3)

If w_rs ≠ 0 for some r ∈ I2∪I3 and s ∈ I1, we solve for a_s from the r-th expression of (3) to obtain

    a_s = (1/w_rs) a_r + Σ_{j∈I0∪I1\{s}} (-w_rj/w_rs) a_j.    (4)

Substituting it into (2) and the other expressions of (3) gives

    c = (w_0s/w_rs) a_r + Σ_{j∈I0∪I1\{s}} [w_0j - (w_0s/w_rs) w_rj] a_j,              (5)
    a_i = (w_is/w_rs) a_r + Σ_{j∈I0∪I1\{s}} [w_ij - (w_is/w_rs) w_rj] a_j,  i ∈ I2∪I3\{r}.   (6)

Now we have a new basic cone whose index set is I0∪I1∪{r}\{s} and whose vertex is denoted by x^(2). Multiplying (6) through by x^(2) on the right-hand side gives

    a_i x^(2) = (w_is/w_rs) a_r x^(2) + Σ_{j∈I0∪I1\{s}} [w_ij - (w_is/w_rs) w_rj] a_j x^(2)
              = (w_is/w_rs) b_r + Σ_{j∈I0∪I1\{s}} [w_ij - (w_is/w_rs) w_rj] b_j.

Multiplying (3) through by x^(1) on the right-hand side gives

    a_i x^(1) = Σ_{j∈I0∪I1} w_ij a_j x^(1) = Σ_{j∈I0∪I1} w_ij b_j.

Therefore

    a_i x^(2) - a_i x^(1) = (w_is/w_rs) b_r - (w_is/w_rs) Σ_{j∈I0∪I1} w_rj b_j
                          = (w_is/w_rs)(b_r - a_r x^(1)).

Rewrite it as

    a_i x^(2) - b_i = a_i x^(1) - b_i - (w_is/w_rs)(a_r x^(1) - b_r).

Letting σ_i = a_i x^(1) - b_i, σ_r = a_r x^(1) - b_r and σ_i' = a_i x^(2) - b_i, we have

    σ_i' = σ_i - (w_is/w_rs) σ_r,  i ∈ I2∪I3\{r}.

Similarly, from (5) and (2) we have

    c x^(2) - c x^(1) = (w_0s/w_rs)(b_r - a_r x^(1)),

or f_0' = f_0 - (w_0s/w_rs) σ_r, where f_0 = c x^(1) and f_0' = c x^(2). On the other hand, from (4) we obtain

    a_s x^(2) - b_s = -(1/w_rs)(a_r x^(1) - b_r).

Letting σ_s' = a_s x^(2) - b_s, we have σ_s' = -σ_r/w_rs.

The above operational process is called a pivoting (operation), w_rs is called the pivot, the row of w_rs is called the pivot row, and the column of w_rs is called the pivot column. We say a_r enters and a_s leaves the basis. The exchange of these two vectors is denoted by a_r <-> a_s. The pivoting process is represented simply by Tables 1 and 2.

TABLE 1. INITIAL TABLE

           a_s          a_j
    c      w_0s         w_0j          f_0
    a_r    w_rs*        w_rj          σ_r
    a_i    w_is         w_ij          σ_i

TABLE 2. RESULT OF THE PIVOTING

           a_r          a_j
    c      w_0s/w_rs    w_0j'         f_0'
    a_s    1/w_rs       -w_rj/w_rs    -σ_r/w_rs
    a_i    w_is/w_rs    w_ij'         σ_i'

where

    w_0j' = w_0j - (w_0s/w_rs) w_rj,  j ∈ I0∪I1\{s};
    w_ij' = w_ij - (w_is/w_rs) w_rj,  i ∈ I2∪I3\{r}, j ∈ I0∪I1\{s};
    σ_i'  = σ_i - (w_is/w_rs) σ_r,    i ∈ I2∪I3\{r};
    f_0'  = f_0 - (w_0s/w_rs) σ_r.
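The passage from Table 1 to Table 2 is a mechanical update of the working table. The following NumPy sketch (an illustration, not code from the paper) applies the formulas above; the caller is assumed to keep track separately of the row/column labels, i.e., of which vectors are currently basic.

    import numpy as np

    def pivot(W, w0, sigma, f0, r, s):
        """One pivoting operation a_r <-> a_s on the working table.

        W[i, j]  : w_ij, coefficients of nonbasic row i over basic column j
        w0[j]    : w_0j, costs of the basic vectors
        sigma[i] : deviation of nonbasic row i at the current vertex
        f0       : objective value at the current vertex
        (r, s)   : pivot row and pivot column, with pivot W[r, s] != 0
        """
        p = W[r, s]
        Wn, w0n, sn = W.copy(), w0.copy(), sigma.copy()
        # rows i != r and the cost row (third and first rows of Table 2)
        Wn -= np.outer(W[:, s], W[r, :]) / p
        sn -= W[:, s] * sigma[r] / p
        w0n -= w0[s] * W[r, :] / p
        f0n = f0 - w0[s] * sigma[r] / p
        # the leaving vector a_s becomes the nonbasic row in position r
        Wn[r, :] = -W[r, :] / p
        sn[r] = -sigma[r] / p
        # the entering vector a_r becomes the basic column in position s
        Wn[:, s] = W[:, s] / p
        Wn[r, s] = 1.0 / p
        w0n[s] = w0[s] / p
        return Wn, w0n, sn, f0n

Up to the bookkeeping of labels, every iteration of the algorithm in the next section performs exactly this update.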

The matrix (w_ij') for i ∈ I2∪I3\{r} and j ∈ I0∪I1\{s} is called the residual matrix with respect to w_rs. In Table 1, σ_i = a_i x^(1) - b_i is called the deviation of the nonbasic vector a_i (or of the associated constraint) with respect to x^(1), and w_0j is called the (reduced) cost of the basic vector a_j (or of the associated constraint). If

    w_0j ≥ 0, j ∈ I1,
    σ_i = a_i x^(1) - b_i = 0, i ∈ I3,
    σ_i = a_i x^(1) - b_i ≥ 0, i ∈ I2,

then x^(1) is the optimal solution. For an r ∈ I2, if σ_r = a_r x^(1) - b_r < 0, we say a_r x ≥ b_r is a violating inequality against x^(1), and we call |a_r x^(1) - b_r| / ||a_r||_2 the distance from x^(1) to a_r x ≥ b_r.

Pivoting Algorithm for Linear Programming

Principle of Pivoting Algorithm

As mentioned above, the algorithm for solving (1) begins with a positive basic cone whose vertex is denoted by x^(1), and the index sets of basic equalities, basic inequalities, nonbasic inequalities and nonbasic equalities are I0, I1, I2 and I3 respectively.

For a nonbasic inequality a_r x ≥ b_r (r ∈ I2), if a_r x^(1) < b_r, i.e. the deviation σ_r = a_r x^(1) - b_r < 0, then a_r is a candidate to enter the basis. Suppose a_r = Σ_{j∈I0∪I1} w_rj a_j as before. If w_rj ≤ 0 for every j ∈ I1, then (1) has no feasible solution. The reason is as follows. Since a_r = Σ_{j∈I0∪I1} w_rj a_j with w_rj ≤ 0 for every j ∈ I1, by the Farkas lemma (Bazarra & Shetty, 1979) any solution x of the system a_j x = b_j, j ∈ I0; a_j x ≥ b_j, j ∈ I1 satisfies a_r x ≤ a_r x^(1), therefore satisfies a_r x ≤ a_r x^(1) < b_r and does not satisfy a_r x ≥ b_r. Hence (1) has no feasible solution.

Suppose that c = Σ_{j∈I0∪I1} w_0j a_j, where w_0j ≥ 0 for every j ∈ I1. If σ_r = a_r x^(1) - b_r < 0 and a_r is entering the basis, we determine a pivot by the minimal ratio

    w_0s/w_rs = min{w_0j/w_rj : w_rj > 0, j ∈ I1},

which ensures that the new basic cone is positive as well, because a pivoting on w_rs yields

    c = (w_0s/w_rs) a_r + Σ_{j∈I0∪I1\{s}} [w_0j - (w_0s/w_rs) w_rj] a_j,

in which all the coefficients associated with the index set I1∪{r}\{s} are nonnegative. Also we have

    cx^(2) - cx^(1) = (w_0s/w_rs)(b_r - a_r x^(1)) ≥ 0,

where x^(2) is the vertex of the new positive basic cone. This implies that the value of the objective function is non-decreasing, and is increasing if the cost w_0s of the leaving vector is positive.

Now suppose that a nonbasic equality a_r x = b_r (r ∈ I3) is entering the positive basic cone with vertex x^(1). Since a_r x = b_r is equivalent to the two inequalities a_r x ≥ b_r and a_r x ≤ b_r, if a_r x^(1) < b_r and w_rj ≤ 0 for every j ∈ I1, or a_r x^(1) > b_r and w_rj ≥ 0 for every j ∈ I1, then (1) has no feasible solution. Otherwise we determine the pivot w_rs as follows. If σ_r = a_r x^(1) - b_r < 0, w_rs satisfies

    w_0s/w_rs = min{w_0j/w_rj : w_rj > 0, j ∈ I1},

and if σ_r = a_r x^(1) - b_r > 0, w_rs satisfies

    w_0s/w_rs = max{w_0j/w_rj : w_rj < 0, j ∈ I1}.

If σ_r = a_r x^(1) - b_r = 0 and w_rj = 0 for every j ∈ I1, then a_r x = b_r is a redundant equality constraint, because in this case a_r is a linear combination of the coefficient vectors of the system of equations a_j x = b_j (j ∈ I0) and b_r is the same linear combination of the b_j (j ∈ I0).

Computational Steps of Pivoting Algorithm

With the above explanation we formally state the algorithm for (1) as follows.

Algorithm 1. Pivoting algorithm for (1).
Step 1. Initial step. Let c = (c_1,...,c_n), x = (x_1,...,x_n)^T, let e_j be the j-th row of the identity matrix of order n, and let M be a number large enough.

For j = 1,...,n: if c_j ≥ 0 and there is a constraint of the form x_j ≥ l_j, let x_j ≥ l_j be a basic inequality, otherwise let x_j ≥ -M be a basic inequality; that is, e_j is a basic vector and x_j^(0) = l_j or -M. If c_j < 0 and there is a constraint of the form x_j ≤ u_j, let x_j ≤ u_j be a basic inequality, otherwise let x_j ≤ M be a basic inequality; that is, e_j is a basic vector and x_j^(0) = u_j or M. Here x_j ≥ -M and x_j ≤ M are called artificial inequalities. The other constraints are nonbasic, with deviations σ_i = a_i x^(0) - b_i, i = 1,...,m, where x^(0) = (x_1^(0),...,x_n^(0))^T. Thus the initial table is constructed as shown by Table 1, where the index sets of basic equalities, basic inequalities, nonbasic inequalities and nonbasic equalities are I0, I1, I2 and I3 respectively.
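A minimal sketch of this initialization (illustrative only; a symbolic treatment of M would avoid the numerical issues of a literal "big M"):

    import numpy as np

    BIG_M = 1e6   # stands in for the paper's "number large enough"

    def initial_point(c, lower, upper):
        """Step 1 sketch: choose the initial basic bound for every variable.

        lower[j] / upper[j] hold the stated bounds l_j / u_j, with np.nan
        when a bound is absent (an artificial bound +/-BIG_M is used instead).
        Returns x^(0); the deviations are then sigma_i = a_i @ x0 - b_i.
        """
        x0 = np.empty(len(c))
        for j, cj in enumerate(c):
            if cj >= 0:      # basic inequality x_j >= l_j (or x_j >= -M)
                x0[j] = lower[j] if not np.isnan(lower[j]) else -BIG_M
            else:            # basic inequality x_j <= u_j (or x_j <= M)
                x0[j] = upper[j] if not np.isnan(upper[j]) else BIG_M
        return x0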

Step 2. Preprocessing: put nonbasic equalities into the basic system, as many as possible.
(i) If I3 = Ø, go to Step 3. Otherwise, for an r ∈ I3, when the deviation of a_r is negative, positive or zero, go to (ii), (iii) or (iv) respectively.
(ii) (a) If w_rj ≤ 0 for every j ∈ I1, (1) has no feasible solution; stop. Otherwise (b) select a basic inequality a_s x ≥ b_s such that w_0s/w_rs = min{w_0j/w_rj : w_rj > 0, j ∈ I1}. Carry out a pivoting on w_rs, let I3 := I3\{r}, I0 := I0∪{r}, I1 := I1\{s}, I2 := I2∪{s}, and return to (i).
(iii) (a) If w_rj ≥ 0 for every j ∈ I1, (1) has no feasible solution; stop. Otherwise (b) select a basic inequality a_s x ≥ b_s such that w_0s/w_rs = max{w_0j/w_rj : w_rj < 0, j ∈ I1}. Carry out a pivoting on w_rs, let I3 := I3\{r}, I0 := I0∪{r}, I1 := I1\{s}, I2 := I2∪{s}, and return to (i).
(iv) If there is a w_rj > 0 for some j ∈ I1, go to (ii)(b); otherwise, if there is a w_rj < 0 for some j ∈ I1, go to (iii)(b); otherwise let I3 := I3\{r} and return to (i).

Step 3. Main iterations: exchange nonbasic inequalities and basic inequalities (a code sketch of this step is given after Remark 2 below).
(i) If all the deviations of nonbasic vectors are nonnegative, the current basic solution is optimal; stop. Otherwise
(ii) select a nonbasic vector a_r (r ∈ I2) with a negative deviation to enter the basis. If w_rj ≤ 0 for every j ∈ I1, there is no feasible solution; stop. Otherwise let a basic inequality a_s x ≥ b_s leave the basis that satisfies w_0s/w_rs = min{w_0j/w_rj : w_rj > 0, j ∈ I1}. Carry out a pivoting on w_rs, let I2 := I2\{r}∪{s} and I1 := I1\{s}∪{r}, and return to (i).

Some remarks are given as follows.

Remark 1. Simplification of the table. Sometimes a linear program contains constraints of the form b_i ≤ a_i x ≤ b̄_i, especially of the form l_i ≤ x_i ≤ u_i, which in our method are written as a_i x ≥ b_i and -a_i x ≥ -b̄_i. Suppose that the basic solution is x̄; then the deviations of a_i and -a_i are a_i x̄ - b_i and -a_i x̄ + b̄_i respectively. The sum of these two deviations is b̄_i - b_i. If one of a_i and -a_i is basic, the deviation of the other one is b̄_i - b_i > 0 and can therefore be ignored. If both a_i and -a_i are nonbasic, then, since the coefficients in their expressions in terms of basic vectors are opposite in sign, these two vectors can share one row of the table. The purpose of transferring nonbasic equalities into the basis in the preprocessing stage is to make the basic solution satisfy all the equality constraints. Once an equality enters the basic system it never leaves the basis; accordingly, we never choose a pivot in a column of a basic equality, and the columns of basic equalities can be eliminated.

Remark 2. Identification of redundant inequality constraints. If the deviation of a nonbasic vector a_r is nonnegative and w_rj ≥ 0 for every j ∈ I1, then a_r x ≥ b_r is redundant by the Farkas lemma, where I1 is the index set of basic inequalities. Similarly, if the deviation of a_r is nonpositive and there is only one s ∈ I1 such that w_rs > 0, then a_s x ≥ b_s is redundant, because a pivoting on w_rs reduces it to the above case.
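The main loop of Step 3 can be sketched as follows (illustrative only). It reuses the pivot routine given earlier and assumes, as in Remark 1, that the columns of basic equalities have already been eliminated, so that every remaining column belongs to a basic inequality; I1 and I2 hold the current column and row labels.

    def main_iterations(W, w0, sigma, f0, I1, I2, tol=1e-9):
        """Step 3 sketch: bring violated nonbasic inequalities into the basis."""
        while True:
            # (i) optimality test: every nonbasic deviation nonnegative
            violated = [i for i, s in enumerate(sigma) if s < -tol]
            if not violated:
                return W, w0, sigma, f0, I1, I2        # current basic solution optimal
            r = min(violated, key=lambda i: sigma[i])  # most negative deviation (Rule 1 below)
            # (ii) ratio test over the positive entries of the entering row
            cols = [j for j in range(W.shape[1]) if W[r, j] > tol]
            if not cols:
                raise ValueError("problem (1) has no feasible solution")
            s = min(cols, key=lambda j: w0[j] / W[r, j])
            W, w0, sigma, f0 = pivot(W, w0, sigma, f0, r, s)
            I1[s], I2[r] = I2[r], I1[s]                # a_r enters, a_s leaves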
Remark 3. Reading off the basic solution. Suppose x̄ = (x̄_1,...,x̄_n)^T is the basic solution. If x_j ≥ l_j (or x_j ≥ -M) is basic, then x̄_j = l_j (or -M), i.e., x̄_j equals the lower bound of x_j; if x_j ≤ u_j (or x_j ≤ M) is basic, then x̄_j = u_j (or M), i.e., x̄_j equals the upper bound of x_j. When x_j ≥ l_j (or x_j ≥ -M) is nonbasic, its deviation is σ_j = e_j x̄ - l_j = x̄_j - l_j (or x̄_j + M), hence x̄_j = σ_j + l_j (or σ_j - M), i.e., x̄_j equals the deviation of e_j plus the lower bound of x_j; similarly, if x_j ≤ u_j (or x_j ≤ M) is nonbasic, x̄_j equals the upper bound of x_j minus its deviation. In the final table, only when every component of the basic solution x̄ is finite, i.e. equal to neither M nor -M, is x̄ the optimal solution; otherwise (1) has no optimal solution.

Remark 4. Rules for the iteration. As in the simplex method, there are many rules for selecting the entering and leaving vectors. Suppose I1 and I2 are the index sets of basic and nonbasic inequalities respectively, and σ_i is the deviation of a_i x ≥ b_i with respect to the current basic solution. Three rules are given below; code sketches of the second and third follow this remark.

Rule 1. (The smallest deviation rule) Among the nonbasic inequalities with negative deviations, select one with the most negative deviation to enter the basic system, i.e., if σ_r = min{σ_i : σ_i < 0, i ∈ I2}, then a_r enters the basis.

Rule 2. (The largest distance rule) Among the nonbasic inequalities with negative deviations, select one farthest from the current basic solution to enter the basic system, i.e., if σ_r/||a_r||_2 = min{σ_i/||a_i||_2 : σ_i < 0, i ∈ I2}, then a_r enters the basis.

Rule 3. (The smallest index rule) Among the nonbasic inequalities with negative deviations, select the one with the smallest index to enter the basic system; among the basic inequalities attaining the minimal ratio, select the one with the smallest index to leave the basic system. That is, if r = min{i ∈ I2 : σ_i < 0}, then a_r enters the basis; if s = min{k ∈ I1 : w_0k/w_rk = θ}, where θ = min{w_0j/w_rj : w_rj > 0, j ∈ I1}, then a_s leaves the basis.

Like Bland's anti-cycling rule (Chvatal, 1983), it can be proved (Zhang, 2004) that cycling is avoided when the smallest index rule is applied.
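The rules translate into short selection routines; the sketches below are illustrative only and assume that the rows of A_rows and the entries of sigma are ordered by constraint index, and the columns of W by basic-inequality index.

    import numpy as np

    def enter_largest_distance(A_rows, sigma, tol=1e-9):
        """Rule 2 sketch: the violated row farthest from the current vertex."""
        best, r = 0.0, None
        for i, s in enumerate(sigma):
            if s < -tol and s / np.linalg.norm(A_rows[i]) < best:
                best, r = s / np.linalg.norm(A_rows[i]), i
        return r                                 # None: no violated inequality

    def enter_and_leave_smallest_index(W, w0, sigma, tol=1e-9):
        """Rule 3 sketch (anti-cycling): smallest entering index, and the
        smallest leaving index among the columns attaining the minimal ratio."""
        r = next((i for i, s in enumerate(sigma) if s < -tol), None)
        if r is None:
            return None, None                    # current basic solution optimal
        cols = [j for j in range(W.shape[1]) if W[r, j] > tol]
        if not cols:
            return r, None                       # infeasible case, handled by the caller
        theta = min(w0[j] / W[r, j] for j in cols)
        s = min(j for j in cols if w0[j] / W[r, j] <= theta + tol)
        return r, s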

A Numerical Example

Let us show how to solve a linear program by Algorithm 1.

Example 1. Solve the linear program

    min z = 7x1 + 7x2 - 3x3 + x4
    s.t. x1 - 2x2 - x3 + x4 = 7,
         2x1 + x2 - 3x3 - x4 = 12,
         -x1 - 4x2 - 2x3 - x4 ≥ 4,
         2x1 + x2 + x3 + 2x4 ≥ 2,
         x1 ≥ 0, x2 ≥ -1, -5 ≤ x3 ≤ 1, x4 free.

Solution. Let

    a1 = (1, -2, -1, 1),   b1 = 7;    a2 = (2, 1, -3, -1),  b2 = 12;
    a3 = (-1, -4, -2, -1), b3 = 4;    a4 = (2, 1, 1, 2),    b4 = 2;
    e1 = (1, 0, 0, 0), l1 = 0;   e2 = (0, 1, 0, 0), l2 = -1;
    e3 = (0, 0, 1, 0), l3 = -5, u3 = 1.

Since x4 is free and its coefficient in the objective function is positive, we introduce the artificial inequality x4 ≥ -M into the problem and let e4 = (0, 0, 0, 1), l4 = -M. In order to represent the coefficient vector of the objective function as a nonnegative linear combination of basic vectors, we let e1, e2, e3, e4 be the initial basic vectors (the column of e3 is associated with the basic inequality x3 ≤ 1). Correspondingly, the initial basic system is x1 ≥ 0, x2 ≥ -1, x3 ≤ 1, x4 ≥ -M, and the initial basic solution is x^(0) = (0, -1, 1, -M)^T with objective value -M - 10. The vectors a1, a2, a3, a4 and e3 (for x3 ≥ -5) are nonbasic, with deviations σ1 = a1 x^(0) - b1 = -M - 6, σ2 = M - 16, σ3 = M - 2, σ4 = -2M - 2 and 1 - (-5) = 6 respectively. Table 3 is the initial table, in which the inequality constraint x3 ≥ -5 is temporarily ignored since the companion inequality x3 ≤ 1 is basic.

TABLE 3. INITIAL TABLE

           e1     e2     e3     e4       σ_i
    c       7      7      3      1     -M-10
    a1      1     -2      1      1*    -M-6
    a2      2      1      3     -1      M-16
    a3     -1     -4      2     -1      M-2
    a4      2      1     -1      2     -2M-2

Since a1 is the coefficient vector of an equality constraint, a1 enters the basis first. Since the deviation of a1 is σ1 = -M - 6 < 0 and min{w_0j/w_1j : w_1j > 0, j ∈ I1} = min{7/1, 3/1, 1/1} = 1, e4 leaves the basis, and the entry marked with an asterisk is the pivot. Table 4 gives the result of the pivoting; the column of a1 is eliminated.

TABLE 4. RESULT OF THE FIRST PIVOTING

           e1     e2     e3       σ_i
    c       6      9      2       -4
    e4     -1      2     -1       M+6
    a2      3     -1      4*     -22
    a3      0     -6      3       -8
    a4      0      5     -3       10

In Table 4, let a2 enter and e3 leave the basis to yield Table 5, in which the coefficient vector e3 of x3 has become nonbasic. It is now time to consider x3 ≥ -5. The deviation of x3 ≥ -5 equals (u3 - l3) - 11/2 = 6 - 11/2 = 1/2, which is listed in the last column of Table 5.

TABLE 5. RESULT OF THE SECOND PIVOTING

           e1       e2        σ_i       σ̄_i
    c      9/2     19/2        7         -
    e4    -1/4      7/4       M+1/2      -
    e3    -3/4      1/4       11/2      1/2
    a3    -9/4    -21/4       17/2       -
    a4     9/4*    17/4      -13/2       -

The unique negative deviation in Table 5 is -13/2, attained by a4, so a4 enters the basis. Since min{(9/2)/(9/4), (19/2)/(17/4)} = 2 is attained by e1, e1 leaves the basis. A pivoting on 9/4 gives Table 6.

TABLE 6. RESULT OF THE THIRD PIVOTING

           a4       e2        σ_i       σ̄_i
    c       2        1         20        -
    e4    -1/9     20/9       M-2/9      -
    e3    -1/3      5/3       10/3      8/3
    a3     -1       -1          2        -
    e1     4/9    -17/9       26/9       -

Now all the deviations of nonbasic vectors are nonnegative, so the iteration is finished. Since e2 is basic, x2 = l2 = -1. Since e1, e3 and e4 are nonbasic, x1 = 26/9 + l1 = 26/9 + 0 = 26/9, x3 = u3 - 10/3 = 1 - 10/3 = -7/3, and x4 = (M - 2/9) + l4 = (M - 2/9) + (-M) = -2/9. Therefore the optimal solution is x = (26/9, -1, -7/3, -2/9)^T, with objective value 20.
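As a quick cross-check of Example 1 (scipy assumed available), an off-the-shelf solver applied to the data above returns the same point and objective value; the ≥-inequalities are negated because linprog expects ≤-constraints.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([7.0, 7.0, -3.0, 1.0])
    A_eq = np.array([[1.0, -2.0, -1.0,  1.0],
                     [2.0,  1.0, -3.0, -1.0]])
    b_eq = np.array([7.0, 12.0])
    A_ub = -np.array([[-1.0, -4.0, -2.0, -1.0],    # a_3 x >= 4
                      [ 2.0,  1.0,  1.0,  2.0]])   # a_4 x >= 2
    b_ub = -np.array([4.0, 2.0])
    bounds = [(0, None), (-1, None), (-5, 1), (None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x)    # approx. [ 2.889 -1.    -2.333 -0.222], i.e. (26/9, -1, -7/3, -2/9)
    print(res.fun)  # 20.0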
Relationship between Pivoting Algorithm and Simplex Algorithm

The Simplex Algorithm

The simplex method was proposed by G. B. Dantzig in 1947. In the past decades it has been the most frequently used method for solving linear programs. Before a linear program is solved by the simplex algorithm, the following preliminary procedures have to be carried out.
(i) Whenever the problem contains general inequality constraints, these constraints are transformed into equalities by introducing slack or surplus variables.
(ii) Whenever the problem contains free variables, each of them is replaced with two nonnegative variables.
(iii) Whenever the lower bound of a variable is not zero, this variable is replaced by a variable whose lower bound is zero.
In this way the standard form

    max z = cx
    s.t. Ax = b, x ≥ 0                                        (7)

is constructed, where c is an n-dimensional row vector, x is an unknown n-dimensional column vector, A is an m × n matrix, b is an m-dimensional column vector, and, as usual, m < n.

The fundamental concepts of the simplex method are defined on the standard form as follows, where the rank of A is supposed to be m. A set of m linearly independent columns of A is called a basis of (7). Columns of A in the basis are called basic, otherwise nonbasic. The full-rank matrix formed by the basic vectors is called a basis matrix. Variables associated with basic vectors are called basic, otherwise nonbasic. The solution of Ax = b obtained by setting the nonbasic variables to zero is called a basic solution. If every component of the basic solution is nonnegative, it is called a basic feasible solution and the basis is called a feasible basis.

In order to initiate the formal calculation, the standard form (7) needs to be transformed further into the canonical form

    max z = z_0 + σ_D x_D
    s.t. x_B + W x_D = w_0, x_B ≥ 0, x_D ≥ 0,                 (8)

where x_B is a column vector formed by m components of x, x_D is formed by the other components of x, σ_D is an (n - m)-dimensional row vector, W is an m × (n - m) matrix and w_0 is an m-dimensional column vector. The canonical form explicitly determines a basic solution x_B = w_0, x_D = 0, called the current basic solution. When (8) is processed by the simplex algorithm, all the components of the right-hand side of the equality constraints remain nonnegative, while some coefficients of nonbasic variables in the objective function may be positive. As soon as all these coefficients are nonpositive, the current basic solution is optimal.

Let I1 and I2 be the index sets of basic and nonbasic variables respectively. The component of σ_D associated with the nonbasic variable x_i is denoted by σ_i, and the column of W associated with x_i is denoted by w_i. The components of w_0 and w_i associated with a basic variable x_j are denoted by w_0j and w_ij respectively. The simplex algorithm for the canonical form (8) is carried out as follows, where w_0 ≥ 0.

Step 1. If σ_D ≤ 0, the current basic solution is optimal; stop. Otherwise select a nonbasic variable x_r (r ∈ I2) associated with the largest component of σ_D as the entering variable.

Step 2. If the column w_r associated with x_r satisfies w_r ≤ 0, then (8) is unbounded from above; stop. Otherwise, if w_0s/w_rs = min{w_0j/w_rj : w_rj > 0, j ∈ I1}, then x_s is the leaving variable.

Step 3. Carry out a pivoting on w_rs, that is, carry out elementary row operations to convert the current canonical form into a new canonical form in which the index sets of basic and nonbasic variables are I1∪{r}\{s} and I2∪{s}\{r} respectively. Then return to Step 1.
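The quantities w_0, W and σ_D manipulated in Steps 1-3 are determined by the choice of basis; the NumPy sketch below illustrates this computation (a practical code would factorize B rather than invert it).

    import numpy as np

    def canonical_form(A, b, c, basis):
        """Canonical form (8) of the standard form (7) for a given basis.

        `basis` lists the m basic column indices; returns w0 = B^{-1} b,
        W = B^{-1} D, the reduced costs sigma_D = c_D - c_B B^{-1} D and z0.
        """
        nonbasis = [j for j in range(A.shape[1]) if j not in basis]
        B, D = A[:, basis], A[:, nonbasis]
        Binv = np.linalg.inv(B)
        w0 = Binv @ b                          # current basic solution x_B
        W = Binv @ D
        sigma_D = c[nonbasis] - c[basis] @ W   # all <= 0  =>  optimal (maximization)
        z0 = c[basis] @ w0
        return w0, W, sigma_D, z0, nonbasis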

A variation of the simplex algorithm is the dual simplex algorithm. When this method is applied to (8), all the coefficients of nonbasic variables in the objective function remain nonpositive, while some components of the right-hand side of the equality constraints may be negative. Once every component of the right-hand side becomes nonnegative, i.e., the basic solution becomes feasible, it is the optimal solution.

Correspondence between Two Algorithms

We shall show that solving the standard form by the simplex algorithm corresponds to solving its dual by the pivoting algorithm. Let A = (p_1,...,p_n) in the standard form (7). Then the dual of (7) is

    min f = yb
    s.t. y p_i ≥ c_i, i = 1,...,n,                            (9)

where y = (y_1,...,y_m).

Without loss of generality, suppose that the matrix B = (p_1,...,p_m) formed by the first m columns of A is a basis, in line with the simplex method. The matrix formed by the nonbasic vectors is denoted by D. Correspondingly, partition c into two vectors c_B and c_D and partition x into two vectors x_B and x_D. Then (7) is written as

    max z = c_B x_B + c_D x_D
    s.t. B x_B + D x_D = b, x_B ≥ 0, x_D ≥ 0.

By using B^{-1}, it is further written as

    max z = c_B B^{-1} b + (c_D - c_B B^{-1} D) x_D
    s.t. x_B + B^{-1} D x_D = B^{-1} b, x_B ≥ 0, x_D ≥ 0.     (10)

Denote B^{-1} b = w_0 = (w_01,...,w_0m)^T; then b = B(B^{-1}b) = B w_0 = w_01 p_1 + ... + w_0m p_m. If B^{-1} b = w_0 ≥ 0, then B is a feasible basis of (7), and x_B = B^{-1}b, x_D = 0 is a basic feasible solution. On the other hand, the system y p_i ≥ c_i, i = 1,...,m, determines a positive basic cone of (9) whose vertex y = c_B B^{-1} is the solution of the system yB = c_B, i.e., of the system y p_i = c_i, i = 1,...,m. Therefore a basic feasible solution of (7) corresponds to the vertex of a positive basic cone of (9). If in addition c_D - c_B B^{-1} D = c_D - yD ≤ 0, then x_B = B^{-1}b, x_D = 0 is the optimal solution of (7); on the other hand, y is then feasible for (9) and hence optimal. Therefore the optimality criteria of (7) and (9) are consistent.

The coefficient of a nonbasic variable x_i in the objective function of (10) is σ_i = c_i - c_B B^{-1} p_i = c_i - y p_i, i = m+1,...,n. Let B^{-1} p_i = w_i = (w_i1,...,w_im)^T, i = m+1,...,n; then the i-th nonbasic vector of (9) can be represented as p_i = B(B^{-1}p_i) = B w_i = w_i1 p_1 + ... + w_im p_m, i = m+1,...,n. On the other hand, the coefficient matrix of the nonbasic variables x_D in the equality constraints of (10) is B^{-1} D = (w_{m+1},...,w_n).

Suppose B^{-1}b = (w_01,...,w_0m)^T ≥ 0. For an r ∈ {m+1,...,n}, if the coefficient σ_r = c_r - y p_r of the nonbasic variable x_r is positive, then x_r is a possible entering variable for (10); at the same time the deviation σ_r = y p_r - c_r of y p_r ≥ c_r with respect to y is negative, hence y p_r ≥ c_r is a possible entering inequality for (9). If in addition w_r = (w_r1,...,w_rm)^T ≤ 0, then (10) has an unbounded solution, while (9) has no feasible solution. Otherwise we calculate the minimal ratio min{w_0j/w_rj : w_rj > 0, j ∈ {1,...,m}} to determine a leaving variable for (10) or a leaving inequality for (9). Thus the steps of solving (7) by the simplex algorithm correspond one by one to the steps of solving (9) by the pivoting algorithm.
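The correspondence can also be observed numerically: solving a standard-form problem (7) and its dual (9) with an off-the-shelf solver yields the same optimal value. The sketch below uses scipy (assumed available) on a small made-up instance.

    import numpy as np
    from scipy.optimize import linprog

    # A made-up standard-form pair (7)/(9).
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 3.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    c = np.array([3.0, 5.0, 0.0, 0.0])

    # primal (7): max cx, Ax = b, x >= 0      (linprog minimizes, hence -c)
    primal = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * A.shape[1])
    # dual (9): min yb, y p_i >= c_i, y free  (rewritten as -A^T y <= -c)
    dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(None, None)] * A.shape[0])

    print(-primal.fun, dual.fun)   # both 14.0: the optimal values coincide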
Dual Solution of Linear Programming

For some problems it is more convenient to obtain the optimal solution by solving the dual. Let us describe such a method. Consider the linear program

    max z = cx
    s.t. a_i x = b_i, i = 1,...,l,
         a_i x ≤ b_i, i = l+1,...,m,
         x_j ≥ 0, j ∈ J,                                      (11)

where c = (c_1,...,c_n), x = (x_1,...,x_n)^T, a_i = (a_i1,...,a_in), b_i is a real number, i = 1,...,m, and J ⊆ {1,...,n}. The dual of (11) is defined by

    min w = yb
    s.t. y p_j = c_j, j ∈ {1,...,n}\J,
         y p_j ≥ c_j, j ∈ J,
         y_{l+1},...,y_m ≥ 0,                                 (12)

where y = (y_1,...,y_m), b = (b_1,...,b_m)^T and p_j = (a_1j,...,a_mj)^T, j = 1,...,n.
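Spelled out in code, the passage from (11) to (12) is pure bookkeeping over the index sets. The sketch below (0-based indices, illustrative only) returns the dual data in a form that could be handed to any LP solver.

    import numpy as np

    def dual_of_11(A, b, c, l, J):
        """Build the data of the dual (12) from the data of (11).

        Primal (11): max cx with a_i x = b_i for i < l, a_i x <= b_i for i >= l,
        and x_j >= 0 exactly for j in J. Rows of the returned constraint
        matrices are the columns p_j of A.
        """
        m, n = A.shape
        eq_cols = [j for j in range(n) if j not in J]   # y p_j  = c_j (x_j free)
        ge_cols = [j for j in range(n) if j in J]       # y p_j >= c_j (x_j >= 0)
        dual_obj = b                                    # minimize y b
        A_eq, c_eq = A[:, eq_cols].T, c[eq_cols]
        A_ge, c_ge = A[:, ge_cols].T, c[ge_cols]
        y_sign = [(0, None) if i >= l else (None, None) for i in range(m)]
        return dual_obj, (A_eq, c_eq), (A_ge, c_ge), y_sign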

Theorem 1. Suppose that x* = (x_1*,...,x_n*)^T is an optimal solution of the primal problem (11) and

    c = Σ_{i∈E} w_0i a_i + Σ_{i∈I} w_0i a_i + Σ_{j∈Z} w_0j (-e_j),                    (13)

where w_0i (i ∈ I) and w_0j (j ∈ Z) are nonnegative, and E, I, Z are the active sets with respect to x*, i.e.,

    E = {i ∈ {1,...,l} : a_i x* = b_i},
    I = {i ∈ {l+1,...,m} : a_i x* = b_i},
    Z = {j ∈ {1,...,n} : e_j x* = 0},

where e_j is the j-th row of the identity matrix of order n. Then

    y_i* = w_0i, i ∈ E∪I;  y_i* = 0, i ∈ {1,...,m}\E\I                                (14)

is an optimal solution of the dual problem (12). Conversely, if the dual problem (12) has an optimal solution y* and

    b = Σ_{j∈E'} w_0j' p_j + Σ_{j∈I'} w_0j' p_j + Σ_{i∈Z'} w_0i' e_i,                 (15)

where w_0j' (j ∈ I') and w_0i' (i ∈ Z') are nonnegative and E', I', Z' are the active sets with respect to y* associated with the equality constraints, the inequality constraints and the nonnegativity restrictions of (12) respectively, then

    x_j* = w_0j', j ∈ E'∪I';  x_j* = 0, j ∈ {1,...,n}\E'\I'                           (16)

is an optimal solution of the primal problem (11).

Proof. The j-th component of (13) is

    c_j = Σ_{i∈E} w_0i a_ij + Σ_{i∈I} w_0i a_ij = Σ_{i=1}^{m} y_i* a_ij = y* p_j,  j ∈ {1,...,n}\Z,

or

    c_j = Σ_{i∈E} w_0i a_ij + Σ_{i∈I} w_0i a_ij - w_0j = y* p_j - w_0j,  j ∈ Z.

Since w_0j ≥ 0 for j ∈ Z, the latter can be written as c_j ≤ y* p_j, j ∈ Z. Besides,

    y_i* = w_0i ≥ 0, i ∈ I;  y_i* = 0, i ∈ {l+1,...,m}\I.

Therefore (14) is a feasible solution of problem (12). Since a_i x* = b_i for i ∈ E∪I and e_j x* = 0 for j ∈ Z, multiplying (13) through by x* on the right yields

    cx* = Σ_{i∈E} w_0i b_i + Σ_{i∈I} w_0i b_i + Σ_{j∈Z} w_0j·0 = Σ_{i=1}^{m} y_i* b_i = y* b.

Therefore (14) is an optimal solution of problem (12). Similarly, (16) can be proved to be an optimal solution of problem (11).

Example 2. Find the optimal solution of the linear program

    max z = 5x1 + 4x2 - 6x3 - x4
    s.t. x1 + 2x2 + x3 + x4 ≤ 8,
         2x1 + x2 - 3x3 - x4 ≤ 1,
         -x1 + 4x2 + x3 ≤ 2,
         x2, x3, x4 ≥ 0.

Solution. The dual of this problem is

    min w = 8y1 + y2 + 2y3
    s.t. y1 + 2y2 - y3 = 5,
         2y1 + y2 + 4y3 ≥ 4,
         y1 - 3y2 + y3 ≥ -6,
         y1 - y2 ≥ -1,
         y1, y2, y3 ≥ 0.

Denote the coefficient vectors of the four general constraints by a1, a2, a3, a4, and the rows of the identity matrix of order 3 by e1, e2, e3. The initial table is given by Table 7.

TABLE 7. INITIAL TABLE

           e1     e2     e3      σ_i
    c       8      1      2       0
    a1      1      2*    -1      -5
    a2      2      1      4      -4
    a3      1     -3      1       6
    a4      1     -1      0       1

Since a1 is the coefficient vector of the equality constraint, a1 enters the basis first. The deviation of a1 is -5 < 0 and min{w_0j/w_1j : w_1j > 0, j ∈ I1} = min{8/1, 1/2} = 1/2, so e2 leaves the basis. A pivoting on the entry marked with an asterisk gives Table 8.

TABLE 8. RESULT OF THE FIRST PIVOTING

           e1       a1       e3       σ_i
    c     15/2     1/2      5/2       5/2
    e2    -1/2     1/2      1/2       5/2
    a2     3/2     1/2      9/2      -3/2
    a3     5/2    -3/2     -1/2      -3/2
    a4     3/2*   -1/2     -1/2      -3/2

In Table 8 the deviations of the nonbasic vectors a2, a3, a4 are negative. Let us apply the largest distance rule to choose the entering vector. Since

    min{-1.5/√21, -1.5/√11, -1.5/√2} = -1.5/√2

is attained by a4, a4 enters the basis. In Table 8 the unique positive entry in the row of a4 is 3/2, so e1 leaves the basis and the constraint y1 ≥ 0 associated with e1 is redundant. A pivoting on 3/2 gives Table 9, in which the row of e1 is eliminated.

TABLE 9. RESULT OF THE SECOND PIVOTING

           a4       a1       e3       σ_i
    c       5        3        5        10
    e2    -1/3      1/3      1/3        2
    a2      1        1        5         0
    a3     5/3     -2/3      1/3        1

Now all the deviations of nonbasic vectors are nonnegative. Since c = 5a4 + 3a1 + 5e3, the optimal solution of the original problem is x1 = 3, x4 = 5, x2 = x3 = 0, and the value of the objective function is 10.

Conclusions

This paper has presented a pivoting-based method for solving linear programming. The method is novel in that it never requires additional variables to change the form of the original problem, and it is very easy to use. Its advantages are threefold. First, it can easily identify redundant constraints and eliminate them during the iteration process. Second, it saves computation in the tableau-form pivoting operations, even for integer programming, where branching variables or cutting inequalities are frequently added to the problem during the computation. Third, it can easily be converted into graphic algorithms (Zhang, 2004) for many network optimization problems, such as the shortest path problem and the minimum cost flow problem, because the pivot row and column can be obtained directly from the underlying graph with a forest or spanning tree, so that a pivoting is carried out with a very small amount of computation.

REFERENCES

Bazarra, M. S., and Shetty, C. M. Nonlinear Programming: Theory and Algorithms. New York: John Wiley & Sons, 1979.

Chvatal, V. Linear Programming. W. H. Freeman and Company, 1983.

Daili, N. "Men and progress in linear programming." Journal of Interdisciplinary Mathematics, Volume 14, Issue 2, 2011.

Bixby, Robert E. "Solving Real-World Linear Programs: A Decade and More of Progress." Operations Research, Volume 50, Issue 1, January-February 2002.

Zhang, Zhongzhen. Convex Programming: Pivoting Algorithms for Portfolio Selection and Network Optimization. Wuhan University Press, 2004 (in Chinese).


More information

Fundamental Theorems of Optimization

Fundamental Theorems of Optimization Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave

More information

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150

More information

The Strong Duality Theorem 1

The Strong Duality Theorem 1 1/39 The Strong Duality Theorem 1 Adrian Vetta 1 This presentation is based upon the book Linear Programming by Vasek Chvatal 2/39 Part I Weak Duality 3/39 Primal and Dual Recall we have a primal linear

More information

Simplex Algorithm Using Canonical Tableaus

Simplex Algorithm Using Canonical Tableaus 41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau

More information

Simplex method(s) for solving LPs in standard form

Simplex method(s) for solving LPs in standard form Simplex method: outline I The Simplex Method is a family of algorithms for solving LPs in standard form (and their duals) I Goal: identify an optimal basis, as in Definition 3.3 I Versions we will consider:

More information

Introduction to optimization

Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

More information

0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small.

0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small. 0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture Operations Research is the branch of science dealing with techniques for optimizing the performance of systems. System is any organization,

More information

6.2: The Simplex Method: Maximization (with problem constraints of the form )

6.2: The Simplex Method: Maximization (with problem constraints of the form ) 6.2: The Simplex Method: Maximization (with problem constraints of the form ) 6.2.1 The graphical method works well for solving optimization problems with only two decision variables and relatively few

More information

Introduction. Very efficient solution procedure: simplex method.

Introduction. Very efficient solution procedure: simplex method. LINEAR PROGRAMMING Introduction Development of linear programming was among the most important scientific advances of mid 20th cent. Most common type of applications: allocate limited resources to competing

More information

BBM402-Lecture 20: LP Duality

BBM402-Lecture 20: LP Duality BBM402-Lecture 20: LP Duality Lecturer: Lale Özkahya Resources for the presentation: https://courses.engr.illinois.edu/cs473/fa2016/lectures.html An easy LP? which is compact form for max cx subject to

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

Math 273a: Optimization The Simplex method

Math 273a: Optimization The Simplex method Math 273a: Optimization The Simplex method Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 material taken from the textbook Chong-Zak, 4th Ed. Overview: idea and approach If a standard-form

More information

D1 D2 D3 - 50

D1 D2 D3 - 50 CSE 8374 QM 721N Network Flows: Transportation Problem 1 Slide 1 Slide 2 The Transportation Problem The uncapacitated transportation problem is one of the simplest of the pure network models, provides

More information

Chapter 7 Network Flow Problems, I

Chapter 7 Network Flow Problems, I Chapter 7 Network Flow Problems, I Network flow problems are the most frequently solved linear programming problems. They include as special cases, the assignment, transportation, maximum flow, and shortest

More information

Termination, Cycling, and Degeneracy

Termination, Cycling, and Degeneracy Chapter 4 Termination, Cycling, and Degeneracy We now deal first with the question, whether the simplex method terminates. The quick answer is no, if it is implemented in a careless way. Notice that we

More information

MATH 445/545 Homework 2: Due March 3rd, 2016

MATH 445/545 Homework 2: Due March 3rd, 2016 MATH 445/545 Homework 2: Due March 3rd, 216 Answer the following questions. Please include the question with the solution (write or type them out doing this will help you digest the problem). I do not

More information