A GENERALIZATION OF THE REVISED SIMPLEX ALGORITHM FOR LINEAR PROGRAMMING

PING-QI PAN

Abstract. Recently the concept of basis, which plays a fundamental role in the simplex methodology, was generalized to include a deficient case by taking advantage of degeneracy. This effort yielded very favorable numerical results with dense implementations in computational experiments. Using such a new basis, in this paper we generalize the revised simplex algorithm for solving large-scale sparse linear programming problems. The new algorithm solves smaller systems and demonstrates better numerical stability than conventional simplex algorithms. Encouraging computational results are reported for a sparse implementation of the algorithm: a code based on it significantly outperformed MINOS 5.3 on a set of 50 Netlib test problems as well as on a set of 15 much larger real-world problems, including 8 Kennington and 5 BPMPD problems. In particular, it is shown that there is no inevitable correlation between an algorithm's inefficiency and degeneracy, contradicting common belief.

Key words. large-scale linear programming, revised simplex algorithm, pivot, deficient basis, LU factorization

AMS subject classifications. 65K05, 90C05

1. Introduction. We are concerned with the linear programming (LP) problem in the standard form

minimize c^T x subject to Ax = b, x ≥ 0, (1.1)

where the columns and rows of A ∈ R^{m×n} (m < n), the right-hand side b ∈ R^m, and the cost c ∈ R^n are all nonzero, and Ax = b is consistent. We stress that no assumption is made on the rank of A, except for 1 ≤ rank(A) ≤ m. Theoretically, the problem related to (primal) degeneracy in the simplex algorithm for solving LP problems is that basic variables bearing value zero can lead to zero-length steps, and hence even to cycling. A number of finite (cycling-preventing) approaches have been proposed since the beginning of the simplex algorithm (e.g., [4, 6, 3, 30, 17]). It is well-known, however, that their computational performance is unsatisfactory.
For infinite simplex variants, a nondegeneracy assumption is always made in the theory, while degeneracy occurs all the time in practice (it is not unusual for a large proportion of basic variables to bear value zero). Even so, cycling rarely occurs, and such algorithms have worked very well in practice. The real obstacle to further progress has long been thought to be stalling, that is, the solution process can be stuck at a degenerate vertex for too long a time before exiting it. This has motivated various remedies for stalling from time to time (e.g., [14, 2, 15, 13]). In particular, Leichner, Dantzig and Davis suggested a strictly improving linear programming Phase-1 algorithm which proceeds without any stalling at all [7]. Recently, on the other hand, efforts were made to take advantage of degeneracy. As degeneracy results from the right-hand side b being included in a proper subspace of the range of the constraint matrix, it is possible to extend the classical square basis, which plays a fundamental role in the simplex methodology [5], to include a

Department of Mathematics, Southeast University, Nanjing, People's Republic of China (panpq@seu.edu.cn). Project supported by the National Natural Science Foundation of China. Submitted to ICOTA, Dec 15; this draft, March 17.

rectangular deficient case having fewer columns than rows. It appears that the use of such a basis is beneficial not only to primal algorithms [20, 21, 22, 26] but also to dual ones [23, 24, 25]. In fact, an alternative setting of the dual simplex algorithm described in an earlier paper was pregnant with such an idea [19]. In particular, this idea recently achieved encouraging progress in large-scale sparse computations in a dual context [25]. The concept of basis deficiency was first introduced in [20]. Using the QR factorization, two basis-deficiency-allowing (primal) simplex algorithms were described there, one a tableau version and the other its revision. Amenable to LP problems with n − m small relative to m, a new type of primal algorithm using the QR factorization was presented in [21], and then modified to use the LU factorization instead [22]. In addition, a novel Phase-1 procedure was described based on a basis-deficiency-allowing simplex algorithm in tableau form using the LU factorization [26]. Dense implementations of tableau versions of such algorithms performed very favorably in computational experiments.

In order to solve large-scale sparse linear programming problems, in this paper we revise the basis-deficiency-allowing simplex algorithm in compact form so that it proceeds without storing a full array at all. The proposed algorithm may be regarded as a direct generalization of the revised simplex algorithm. Allowing basis deficiency, it solves smaller systems and demonstrates better numerical stability than conventional simplex algorithms. A code based on a sparse implementation of it significantly outperformed MINOS 5.3 on a set of 50 Netlib test problems as well as on a set of 15 much larger real-world problems, including 8 Kennington and 5 BPMPD problems. In particular, it is shown that there is no inevitable correlation between an algorithm's inefficiency and degeneracy, contradicting common belief.
Preliminaries are offered in the next section. In section 3, an optimality condition is given. Then, in section 4, a search direction is derived and discussed. In section 5, the algorithm is described. Finally, in section 6, computational results are reported, demonstrating the remarkable behavior of the proposed algorithm.

2. Preliminaries. In this section, we briefly present the associated concepts and ideas, and make some assumptions. In the simplex algorithm, a basis is defined as a nonsingular m × m submatrix of A. This definition runs up against difficulties. First of all, A from a practical model may not have full row rank, and hence the basis is not well-defined. In this case, a common remedy is to introduce logical variables; however, doing so increases the size of the problem. Moreover, even if A has full row rank, the right-hand side b can fall into a proper subspace of the range of A, leading to degeneracy, as the associated basic solution then has zero basic components. One of the main features of the proposed algorithm is its use of a basis having fewer columns than rows. It is necessary to generalize the definition of the classical basis as follows.

Definition 2.1 (Basis). A basis is a submatrix consisting of any linearly independent set of columns of A whose range space includes b; a nonbasis is one consisting of the remaining columns.

As a result, all bases fall into two categories, defined as follows.

Definition 2.2 (Deficient basis). If the number of columns of a basis is less than the number of its rows, it is a deficient basis; a square basis is a normal basis.

The simplex algorithm and its conventional variants work only with normal bases. In contrast, the proposed algorithm proceeds with either deficient or normal bases.
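As a concrete illustration of Definitions 2.1 and 2.2, the following sketch (Python with NumPy; the matrix and right-hand side are made-up data for illustration only) checks that a single column of a rank-deficient A already qualifies as a deficient basis.

```python
import numpy as np

# A has a redundant row (row 3 = row 1 + row 2), so no nonsingular
# m x m submatrix exists and the classical basis is not well-defined.
A = np.array([[1., 0., 1., 2.],
              [0., 1., 1., 1.],
              [1., 1., 2., 3.]])
b = np.array([1., 0., 1.])         # b happens to equal column 1 of A

B = A[:, [0]]                      # candidate basis with s = 1 column
assert np.linalg.matrix_rank(B) == 1                        # columns independent
assert np.linalg.matrix_rank(np.column_stack([B, b])) == 1  # b in range(B)
# s = 1 < m = 3, so B is a deficient basis (Definition 2.2).
```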

The standard but unnatural full-row-rank assumption on A is therefore no longer needed in our model (1.1). As real-world LP problems are almost always degenerate, or even highly degenerate, it can be expected that a great majority of bases encountered in practice are deficient. Consequently, the proposed algorithm solves smaller linear systems than those solved in conventional simplex algorithms. Moreover, a deficient basis should be better conditioned than a normal basis.

Let B be a basis with s (1 ≤ s ≤ m) columns and let N be the associated nonbasis. The corresponding components of vectors and columns of matrices are termed basic and nonbasic, respectively. Denote by j_i the index of the ith column of B, and by k_j the index of the jth column of N. Without confusion, denote the basic and nonbasic ordered index sets again by B and N, respectively, i.e.,

B = {j_1, ..., j_s}, N = {k_1, ..., k_{n−s}}. (2.1)

For simplicity of expression, components of vectors and columns of matrices will always be rearranged conformably to (B, N). The proposed algorithm will be developed with s < m, if not indicated otherwise. Let an LU factorization of B with row and column exchanges be as follows:

P B Q = L U, L = [ L_1 0 ; L_2 I ], U = [ U_1 ; 0 ], (2.2)

where P ∈ R^{m×m} and Q ∈ R^{s×s} are permutations that balance stability and sparsity, L_1 ∈ R^{s×s} is unit lower triangular, and U_1 ∈ R^{s×s} is upper triangular with nonzero diagonals. For simplicity of exposition, in the sequel we assume that P and Q are the identity permutations. Define the transformed right-hand side b̄ by

b̄ = L^{-1} b, b̄ = [ b̄_1 ; 0 ], (2.3)

where b̄_1 has s components and the last m − s components of b̄ are zero, since the compatible system B x_B = b is equivalent to

U x_B = b̄. (2.4)

If x̄_B is the unique solution to (2.4), therefore, then x̄ = [x̄_B^T, 0^T]^T is a solution to Ax = b.

Definition 2.3 (Basic solution).
The solution to Ax = b resulting from setting x_N = 0 is the basic solution associated with B.

Although the reduced space is now of dimension n − s rather than n − m, it is clear that all of the geometry of the underlying polyhedron associated with the conventional LP model carries over to our case; for instance, a basic feasible solution corresponds to a vertex, and vice versa. In the sequel, it is assumed that the basic solution x̄ associated with B is feasible, satisfying

B x̄_B = b, x̄_N = 0, x̄_B ≥ 0. (2.5)

The (primal) feasible solution x̄ and the LU factors will be updated, iteration by iteration, until optimality is achieved. It is possible to keep L well-conditioned during the initial factorization and subsequent Bartels-Golub-type updates. The package LUSOL is suitable for handling rectangular basis factorizations and updates of this kind [11].
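A worked instance of (2.2)-(2.4) may help. The factors L and U_1 below are made-up data with the structure of (2.2); with B = L [U_1; 0], the transformed right-hand side has a zero tail, and the basic solution comes from one small s × s solve. Here `np.linalg.solve` stands in for the dedicated triangular solves a real code would use.

```python
import numpy as np

L = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [0., 3., 1.]])                    # [[L1, 0], [L2, I]] with s = 2
U1 = np.array([[2., 1.],
               [0., 1.]])
B = L @ np.vstack([U1, np.zeros((1, 2))])       # 3 x 2 deficient basis
b = B @ np.array([1., 1.])                      # b in range(B) by construction

b_bar = np.linalg.solve(L, b)                   # (2.3): b_bar = L^{-1} b
assert np.isclose(b_bar[-1], 0.0)               # last m - s components vanish
x_B = np.linalg.solve(U1, b_bar[:2])            # (2.4): U_1 x_B = b_bar_1
assert np.allclose(B @ x_B, b)                  # basic solution recovered
```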

3. Optimality condition. In order to find a dual solution exhibiting complementary slackness with x̄, consider the following system:

B^T y = c_B, (3.1)
N^T y + z_N = c_N, z_B = 0. (3.2)

It is clear that there is a unique solution to the preceding system if s = m (when B is a normal basis). In general there are infinitely many solutions to it in case s < m; in particular, we have the following one.

Lemma 3.1. There exists the dual solution

ȳ = L^{-T} [ U_1^{-T} c_B ; 0 ], z̄_N = c_N − N^T ȳ, z̄_B = 0. (3.3)

Proof. The validity can be easily verified using (3.1)-(3.2) along with (2.2).

It is then clear that computing ȳ involves the solution of two triangular systems, namely,

U_1^T u = c_B, and L^T ȳ = [u^T, 0^T]^T. (3.4)

Theorem 3.2. The primal objective value at x̄ is equal to the dual objective value at (ȳ, z̄). If z̄_N ≥ 0, moreover, these are a pair of primal and dual optimal solutions.

Proof. By Lemma 3.1 and (2.3), the dual solution (ȳ, z̄) corresponds to the dual objective value

ḡ = b^T ȳ = b̄_1^T U_1^{-T} c_B. (3.5)

By (2.5), (2.2) and (2.3), on the other hand, the primal objective value associated with x̄ is

f̄ = c_B^T x̄_B = c_B^T U_1^{-1} b̄_1. (3.6)

Thus ḡ and f̄ are equal. Further, x̄ and z̄ clearly exhibit complementary slackness. Therefore, if z̄_N ≥ 0 (hence (ȳ, z̄) is dual feasible), then x̄ and (ȳ, z̄) are primal and dual optimal solutions, respectively, as x̄ is assumed to be primal feasible.

According to Theorem 3.2, z̄_N ≥ 0 is a sufficient condition for the primal feasible solution x̄ to be optimal. If this is not the case, we determine an entering index k_q such that

z̄_{k_q} = min{ z̄_{k_j} | z̄_{k_j} < 0, j = 1, ..., n−s } < 0. (3.7)

Then, we compute a vector ā_{k_q} satisfying

L ā_{k_q} = a_{k_q}, ā_{k_q} = [ v_1 ; v_2 ], (3.8)

where a_{k_q} is the nonbasic column indexed by k_q, and v_1 and v_2 have s and m − s components, respectively. The following development depends on whether v_2 is zero or not. If it is nonzero, the index k_q defined by (3.7) enters the basis immediately, as will be described in section 5.1.
Otherwise, the following section is relevant.
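To make (3.3)-(3.4) concrete, the following sketch computes the particular dual solution by two successive solves on made-up factors L and U_1 with the structure of (2.2). Again, `np.linalg.solve` is a stand-in for true triangular solves.

```python
import numpy as np

L = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [0., 3., 1.]])            # [[L1, 0], [L2, I]] with s = 2, m = 3
U1 = np.array([[2., 1.],
               [0., 1.]])
cB = np.array([1., 2.])                 # basic costs (made up)

u = np.linalg.solve(U1.T, cB)                      # first solve:  U_1^T u = c_B
y = np.linalg.solve(L.T, np.r_[u, np.zeros(1)])    # second solve: L^T y = [u; 0]

# y satisfies (3.1) for the deficient basis B = L [U_1; 0]:
B = L @ np.vstack([U1, np.zeros((1, 2))])
assert np.allclose(B.T @ y, cB)         # y = (-2.5, 1.5, 0) here
```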

4. Search direction. An assumption holds throughout this section (including the case s = m):

v_2 = 0, (4.1)

where v_2 is defined by (3.8). In this case, we determine a downhill search direction Δx, with respect to the primal objective, in the null space of A; more precisely,

A Δx = 0, c^T Δx < 0. (4.2)

Lemma 4.1. The vector Δx defined below satisfies (4.2):

Δx = [ Δx_B ; Δx_N ] = [ −U_1^{-1} v_1 ; e_q ], (4.3)

where Δx_B and Δx_N have s and n − s components, respectively, and e_q is the unit (n−s)-vector with its qth component 1.

Proof. From (2.2), (4.3), (3.8) and (4.1), it follows that

A Δx = [LU, N] Δx = L U (−U_1^{-1} v_1) + N e_q = −L [ v_1 ; 0 ] + L ā_{k_q} = L ( −[ v_1 ; 0 ] + [ v_1 ; v_2 ] ) = L [ 0 ; v_2 ] = 0. (4.4)

So, Δx is in the null space of A. On the other hand, from (3.7), (3.2) and (3.8), it follows that

z̄_{k_q} = c_{k_q} − a_{k_q}^T ȳ = c_{k_q} − ā_{k_q}^T L^T ȳ = c_{k_q} − [v_1^T, v_2^T] [ U_1^{-T} c_B ; 0 ] = c_{k_q} − v_1^T U_1^{-T} c_B. (4.5)

In addition, (4.3) implies

c^T Δx = [c_B^T, c_N^T] [ −U_1^{-1} v_1 ; e_q ] = c_{k_q} − c_B^T U_1^{-1} v_1, (4.6)

which with (4.5) and (3.7) leads to c^T Δx = z̄_{k_q} < 0. This completes the proof of Lemma 4.1.
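Lemma 4.1 can be checked numerically. The sketch below builds made-up data with the structure of (2.2) and v_2 = 0, forms the direction (4.3), and verifies that it lies in the null space of A and decreases the objective by exactly z̄_{k_q} per unit step.

```python
import numpy as np

L = np.array([[1., 0., 0.], [2., 1., 0.], [0., 3., 1.]])
U1 = np.array([[2., 1.], [0., 1.]])
B = L @ np.vstack([U1, np.zeros((1, 2))])     # deficient basis, s = 2, m = 3
v1 = np.array([1., 1.])
a_q = L @ np.r_[v1, 0.]                       # entering column with v_2 = 0 in (3.8)
A = np.column_stack([B, a_q])                 # N consists of the single column a_q
c = np.array([1., 2., 1.])                    # c_B = (1, 2), c_kq = 1 (made up)

dx_B = -np.linalg.solve(U1, v1)               # basic part of (4.3)
dx = np.r_[dx_B, 1.]                          # e_q contributes the final 1

u = np.linalg.solve(U1.T, c[:2])              # (3.4)
z_q = c[2] - v1 @ u                           # reduced cost via (4.5)

assert np.allclose(A @ dx, 0)                 # (4.2): direction in null(A)
assert np.isclose(c @ dx, z_q) and z_q < 0    # (4.6): c^T dx = z_kq < 0
```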

According to Lemma 4.1, the vector Δx is eligible to be a search direction at x̄. Noting (2.5) and (4.3), we consequently have the following line search scheme:

x̂_B = x̄_B + α Δx_B, (4.7)
x̂_{k_q} = α, (4.8)
x̂_{k_j} = 0, j = 1, ..., n−s, j ≠ q, (4.9)

where Δx_B is defined by (4.3), and α is a step length to be determined.

Lemma 4.2. x̂, defined by (4.7)-(4.9), satisfies A x̂ = b for any α.

Proof. The validity can be easily derived from (4.3), (2.5) and Lemma 4.1.

Theorem 4.3. If Δx_B ≥ 0, then x̂ defined by (4.7)-(4.9) is a primal feasible solution for any α > 0; moreover, program (1.1) is unbounded below.

Proof. By Lemma 4.2, x̂ satisfies A x̂ = b for any α. In case α > 0, moreover, it follows from (4.7)-(4.9) that x̂ ≥ 0, since x̄_B ≥ 0 and Δx_B ≥ 0 hold. Thus, x̂ is a primal feasible solution for any α > 0. Further, the associated objective value

c^T x̂ = c^T x̄ + α c^T Δx (4.10)

goes to minus infinity as α → ∞, since c^T Δx < 0 by Lemma 4.1.

In the case Δx_B ≱ 0, we maximize the step length α subject to x̂ ≥ 0 to achieve the lowest possible objective value. This leads us to determine a step length α and leaving index j_p such that

α = −x̄_{j_p} / Δx_{j_p} = min{ −x̄_{j_i} / Δx_{j_i} | Δx_{j_i} < 0, i = 1, ..., s } ≥ 0. (4.11)

As in the conventional context, a primal feasible solution x̄ is said to be (primal) degenerate whenever some components of x̄_B are zero. Consequently, the α determined by (4.11) can vanish, an undesirable case, because the solution given by (4.7)-(4.9) then coincides with its predecessor. If this is not the case, however, further progress can be guaranteed.

Theorem 4.4. If Δx_B ≱ 0, then x̂ defined by (4.7)-(4.9) along with (4.11) is a feasible solution; moreover, it corresponds to an objective value strictly less than the old one if nondegeneracy is assumed.

Proof. Note that (4.11) is well defined in case Δx_B ≱ 0. In a similar manner to the proof of Theorem 4.3, it can be shown that x̂ is a primal feasible solution.
Under the nondegeneracy assumption, moreover, (4.11) determines a positive α. Consequently, from (4.10) and c^T Δx < 0 (Lemma 4.1), it follows that c^T x̂ < c^T x̄.

If we introduce an extra variable x_{n+1} and append c^T x − x_{n+1} = 0 to the equality constraints, then the corresponding component of the augmented search direction is Δx_{n+1} = c^T Δx. Thus, it follows from (4.10) that x̂_{n+1} = x̄_{n+1} + α Δx_{n+1}. Therefore, the value of x_{n+1} gives the associated (primal) objective value in this case. Such an introduced variable may be termed the (primal) objective variable; using it simplifies the implementation.

5. Formulation of the algorithm. Basis changes must be carried out once the entering and/or leaving indices have been determined. For this purpose, LUSOL appears to be the only package suitable for updating LU factors of such a rectangular basis [11, 27]. In this section, we briefly address this issue before formally describing the algorithm.
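The ratio test (4.11) is simple enough to state directly in code. A minimal sketch (`ratio_test` is a hypothetical helper name; exact zero comparisons are used for clarity, where a real code would use tolerances):

```python
import numpy as np

def ratio_test(x_B, dx_B):
    """Step length and leaving position per (4.11): the largest alpha with
    x_B + alpha * dx_B >= 0.  Returns (None, None) when dx_B >= 0, the
    unbounded case of Theorem 4.3."""
    candidates = [(-x_B[i] / dx_B[i], i) for i in range(len(x_B)) if dx_B[i] < 0]
    if not candidates:
        return None, None
    return min(candidates)              # smallest ratio wins; p is its position

alpha, p = ratio_test(np.array([4., 3., 5.]), np.array([0., -1., -1.]))
assert (alpha, p) == (3.0, 1)           # the second basic variable hits its bound first

# A zero-valued basic component with a negative direction entry forces
# alpha = 0: the degenerate step discussed above.
assert ratio_test(np.array([0., 2.]), np.array([-1., -1.])) == (0.0, 0)
```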

5.1. Updating iteration. Assume that

v_2 ≠ 0, (5.1)

where v_2 is defined by (3.8). In this case, the index k_q defined by (3.7) enters the basis immediately. To do so, consider the matrix resulting from appending a_{k_q} to the end of B, i.e.,

B̄ = [B, a_{k_q}]. (5.2)

Proposition 5.1. B̄ is a basis, associated with the basic feasible solution x̄.

Proof. By (5.2), (2.2) and (3.8), it holds that

B̄ = L [U, ā_{k_q}] = L [ U_1 v_1 ; 0 v_2 ], (5.3)

which implies that B̄ has column rank s + 1, since U_1 ∈ R^{s×s} is nonsingular upper triangular and v_2 is nonzero by (5.1). Moreover, the right-hand side b is included in the range of B̄, since it is in the range of B, a submatrix of B̄. Therefore, B̄ is a basis.

It is then clear that x̄ is also associated with the basis B̄; no new feasible solution is produced by an updating iteration. It is seen from (5.3) that [U, ā_{k_q}] is upper triangular except for its last column. It can be upper-triangularized by moving the entry of largest magnitude among the entries of v_2 to the diagonal position, and then eliminating all entries below the diagonal using a Gaussian transformation. This leads directly to the LU factors of B̄. Then, we move k_q from the nonbasic index set to the end of the basic index set, and set s := s + 1. As one column enters the basis, the associated operations are called an updating iteration.

5.2. Full iteration. In section 4, we described how to form a search direction, update the current basic feasible solution, and determine a leaving index. What remains to be done is to compute LU factors of the new basis associated with the resulting solution. Assume now that an entering index k_q and a leaving index j_p have been chosen by (3.7) and (4.11), respectively. The resulting matrix, say B̂, is just B with its pth column a_{j_p} replaced by a_{k_q}.

Proposition 5.2. B̂ is a basis, associated with the new basic feasible solution x̂ defined by (4.7)-(4.9).

Proof. Note that (4.1) holds in this case.
Without loss of generality, assume that p = s, and that B has the LU factorization (2.2) (with P and Q the identity permutations), partitioned as

B = [B̃, a_{j_s}] = L [Ũ, u_{j_s}], (5.4)

where B̃ = [a_{j_1}, ..., a_{j_{s−1}}] and [Ũ, u_{j_s}] ≡ U. The matrix concerned is then

B̂ = [B̃, a_{k_q}]. (5.5)

By (3.8) and (4.1), we have ā_{k_q} = [v_1^T, 0^T]^T; in addition, by (4.11) we have Δx_{j_s} < 0, which with (4.3) implies that the sth component of v_1 is nonzero (the diagonals of U_1 are all nonzero). Therefore, [Ũ, ā_{k_q}] is upper triangular with nonzero diagonals, and hence there exists a unique s-vector w such that

u_{j_s} = [Ũ, ā_{k_q}] w. (5.6)

On the other hand, from (5.5), (5.4) and (3.8), it follows that

B̂ = [B̃, a_{k_q}] = [L Ũ, L ā_{k_q}] = L [Ũ, ā_{k_q}], (5.7)

which implies that B̂ has rank s, since [Ũ, ā_{k_q}] has rank s and L has rank m. Postmultiplying both sides of (5.7) by w, and using (5.6) and (5.4), gives

B̂ w = L [Ũ, ā_{k_q}] w = L u_{j_s} = a_{j_s}. (5.8)

By (5.4), (5.5) and (5.8), it can be asserted that the range of B̂ includes the range of B, and hence b (since B is a basis). It is easy to verify that the x̂ defined by (4.7)-(4.9) is a basic solution associated with B̂.

Practically, LU factors of the new basis B̂ can be obtained in two successive steps: first compute LU factors of the matrix resulting from deleting a_{j_p} from B, and then compute the factors of B̂ by an updating, as described in section 5.1. All the associated operations constitute a so-called full iteration. The first step, termed downdating, is described next.

5.3. Downdating. Assume a leaving index j_p has been determined. Denote by B̃ the matrix resulting from dropping a_{j_p} from B. It is clear that B̃ may not be a basis, though it is of full column rank s − 1. Nevertheless, it is simple to compute LU factors of B̃ from those of B. Consider B̃ = L H, where the upper Hessenberg matrix H is U with its pth column removed. Interchange the pth and sth rows of H; then eliminating the pth through (s−1)th entries of the sth row in turn by a sequence of Gauss transformations turns it into the upper triangular factor of B̃. The lower triangular factor is then easily updated. To complete the downdating, we move j_p from the basic index set to the end of the nonbasic index set and set s := s − 1.

5.4. The algorithm. The steps of the overall procedure can be summarized in the following algorithm.

Algorithm 1 (Generalized Revised Simplex Algorithm). Let (2.1) be the initial basic and nonbasic index sets. Given the LU factorization (2.2) of the associated basis B and the associated basic feasible solution x̄ ≥ 0 (x̄_N = 0), this algorithm solves (1.1).
1. Compute z̄_N by (3.3).
2. Stop if z̄_N ≥ 0.
3. Determine an entering index k_q by (3.7): z̄_{k_q} = min{ z̄_{k_j} | z̄_{k_j} < 0, j = 1, ..., n−s } < 0.
4. Solve the lower triangular system L ā_{k_q} = a_{k_q} for ā_{k_q} = [v_1^T, v_2^T]^T (3.8).
5. Go to step 11 if v_2 ≠ 0.
6. Solve the upper triangular system U_1 Δx_B = −v_1 for Δx_B (4.3).
7. Stop if Δx_B ≥ 0.

8. Determine a leaving index j_p and a step length α by (4.11): α = −x̄_{j_p}/Δx_{j_p} = min{ −x̄_{j_i}/Δx_{j_i} | Δx_{j_i} < 0, i = 1, ..., s } ≥ 0.
9. Update x̄ by (4.7)-(4.9).
10. Update the LU factors by the downdating associated with j_p.
11. Update the LU factors by the updating associated with k_q.
12. Go to step 1.

Note 1: steps 1-11 perform a full iteration, while steps 1-4 and 11 are related to an updating iteration. All iterations fall into these two categories.

Note 2: for large-scale sparse computations, the basis should be refactorized periodically from scratch, as in the conventional context.

It is noted that an updating iteration involves two triangular solves in step 1 (see (3.4)) and another in step 4. As an additional triangular system is solved in step 6, a full iteration involves four triangular solves. Although a full iteration involves four triangular solves, as in simplex algorithms, the computational cost of the new algorithm is reduced, since the s × s systems U_1^T u = c_B (3.4) and U_1 Δx_B = −v_1 (4.3) are small compared with the m × m systems solved in simplex algorithms (when s < m). Therefore, Algorithm 1 appears particularly suitable for practical applications, as real-world LP problems are often degenerate or even highly degenerate. If s reaches m, on the other hand, these advantages disappear, because the algorithm then performs just like the revised simplex algorithm. A small initial basis therefore seems attractive.

We give the following result associated with the proposed algorithm.

Theorem 5.3. Under nondegeneracy for full iterations, Algorithm 1 terminates either at (1) step 2, producing a pair of primal and dual optimal solutions; or (2) step 7, detecting lower unboundedness of (1.1).

Proof. Since there are only finitely many bases, Algorithm 1 fails to terminate if and only if cycling occurs.
Note that the number of columns of the basis either increases by one in an updating iteration or remains unchanged in a full iteration, so cycling cannot involve any updating iterations. If nondegeneracy is assumed for full iterations, therefore, there is no chance of cycling at all, since the objective value decreases strictly. Then, by Theorem 3.2, termination at step 2 produces a pair of primal and dual optimal solutions; and by Theorem 4.3, termination at step 7 indicates lower unboundedness.

In Theorem 5.3, nondegeneracy is assumed to guarantee termination of the proposed algorithm. Of course, such an assumption is entirely unrealistic; in fact, degeneracy occurs all the time. Even so, it turns out that cycling rarely, if ever, happens (see the next section). Algorithm 1 should be regarded as finite in practice, just like the standard simplex algorithm.

6. Computational experiments. In this section, we report numerical results obtained in our computational experiments and make final remarks.

6.1. Test codes. Coded in FORTRAN 77, our code, named GRSA 1.0, consists of two phases: Phase-2 is based on Algorithm 1; Phase-1 solves an auxiliary problem with a piecewise-linear sum of infeasibilities as its objective, as is done commonly in standard codes such as MINOS [28].
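The logic of Algorithm 1 can be checked on a toy instance. The following is a dense sketch, not the paper's implementation: least-squares solves and a residual test stand in for the maintained LU factors of (2.2), the minimum-norm dual solution stands in for the particular solution (3.3), and the test problem (a small LP with one redundant constraint row) is made up for illustration.

```python
import numpy as np

def deficient_simplex(A, b, c, basic, tol=1e-9, max_iter=100):
    """Dense sketch of Algorithm 1 with a (possibly deficient) basis."""
    basic = list(basic)
    x = np.zeros(A.shape[1])
    x[basic] = np.linalg.lstsq(A[:, basic], b, rcond=None)[0]
    assert np.all(x >= -tol), "initial basic solution must be feasible"
    for _ in range(max_iter):
        B = A[:, basic]
        # Steps 1-2: price with any y solving B^T y = c_B, then test z >= 0.
        y = np.linalg.lstsq(B.T, c[basic], rcond=None)[0]
        z = c - A.T @ y
        z[basic] = 0.0
        if np.all(z >= -tol):
            return x, "optimal"
        q = int(np.argmin(z))                      # step 3: entering index (3.7)
        w, *_ = np.linalg.lstsq(B, A[:, q], rcond=None)
        if np.linalg.norm(A[:, q] - B @ w) > tol:  # step 5: v_2 != 0 in (3.8)
            basic.append(q)                        # updating iteration (section 5.1)
            continue
        dx_B = -w                                  # step 6: direction (4.3), B dx_B = -a_q
        if np.all(dx_B >= -tol):
            return x, "unbounded"                  # step 7 (Theorem 4.3)
        alpha, p = min((-x[j] / d, i)              # step 8: ratio test (4.11)
                       for i, (j, d) in enumerate(zip(basic, dx_B)) if d < -tol)
        x[basic] += alpha * dx_B                   # step 9: update (4.7)-(4.9)
        x[q] = alpha
        x[basic[p]] = 0.0
        basic[p] = q                               # steps 10-11: exchange columns
    raise RuntimeError("iteration limit reached")

# Toy instance: max x1 + 2 x2 s.t. x1 <= 4, x2 <= 3, x1 + x2 <= 5 in slack
# form, plus a redundant fourth row (row1 + row2), so rank(A) = 3 < m = 4
# and the initial slack basis is deficient (s = 3).
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.],
              [1., 1., 1., 1., 0.]])
b = np.array([4., 3., 5., 7.])
c = np.array([-1., -2., 0., 0., 0.])
x, status = deficient_simplex(A, b, c, basic=[2, 3, 4])
assert status == "optimal" and np.allclose(x, [2., 3., 2., 0., 0.])
```

On this instance the method reaches the optimum in two full iterations without ever forming a square basis, illustrating the point that degeneracy (here, rank deficiency of A) need not hurt the algorithm.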

We used the code MINOS 5.3 for comparison. In fact, our code was developed using MINOS 5.3 as a platform. Consequently, the two codes share such features as preprocessing, scaling, and LUSOL [11]. Only the modules Mi25bfac.f and Mi50lp.f were replaced by two programs written by the author; very limited changes were made to other parts. Subroutine M2crsh in MINOS 5.3 searches for a (permuted) triangular initial basis. Since there is no need for a square basis, subroutine M2crsh was modified by deleting its last lines, which fill gaps with logical columns. We used Harris's two-pass ratio test [14] for leaving-index selection, as in MINOS 5.3. We made every endeavor to ensure a fair competition between the revised simplex algorithm and the proposed generalized revised simplex algorithm.

MINOS 5.3 worked with the default threshold pivoting tolerances τ_F = 100 for factorization and τ_U = 10 for updating. In contrast, GRSA 1.0 worked with τ_F = τ_U = 10. The lower factorization tolerance improves the code's numerical stability, and even lower values such as 5 or 2.5 may be beneficial (see section 4.5 of [12]).

Compiled using Visual FORTRAN 5.0, both MINOS 5.3 and GRSA 1.0 were run under WINDOWS 98 on a PENTIUM III 550E personal computer with 256 Mbytes of memory. The machine precision was about 16 decimal places. The reported CPU times were measured in seconds with the utility routine CPU TIME, excluding the time spent on preprocessing and scaling. A fixed tolerance was used for primal and dual feasibility in the new code. In running both MINOS 5.3 and GRSA 1.0, the usual default options were used, except for: Rows; Columns; Elements; Iterations; scale yes; partial price 1; log frequency no; print level.

6.2. Results for set 1. Our first set of test problems includes 50 standard LP problems from Netlib that do not have BOUNDS or RANGES sections in their MPS files, since our current code cannot handle such problems implicitly [10].
Actually, this test set embraces all such Netlib problems available, except for the largest four, for which m + n > 10000 (where m and n are the numbers of rows and columns of A): MAROS-R7 and STOCFOR3 were left for test set 2, and QAP12 and QAP15 were not included in our tests because they are too time-consuming to solve, for both MINOS 5.3 and GRSA 1.0.

Numerical results obtained with the first set are displayed in Tables 6.1 and 6.2, respectively, in order of increasing sum m + n, before adding slack variables. In these tables, the total iterations and time required for solving each problem are listed in the columns labelled Itns and Time, whereas those required by Phase-1 alone are listed in the columns labelled Itns1 and Time1, respectively; percentages of total and Phase-1 degenerate iterations are given in the columns labelled %Ndeg and %Ndeg1. We note that the columns labelled Itns and Itns1 in Table 6.2 list full iteration counts, since for each run of the new code all updating iterations should be considered together with the basis factorization and refactorizations. Final objective values reached are not listed, as they are the same as those given in the Netlib index file.

In order to see how the codes perform as problem size increases, the 50 problems are categorized into three groups: group Small includes the first 20 problems (from AFIRO to SCTAP1), Medium includes the following 15 problems (from SCFXM1 to SHIP04L), and Large includes the last 15 problems (from QAP8 to TRUSS). Table 6.3 serves as an overall comparison between the two codes. From the bottom line labelled Total there, we see that the Phase-1 iteration and Phase-1 time

ratios are 1.03 and 1.15, and the total iteration and time ratios are 1.07 and 1.41, respectively. Thus, GRSA 1.0 solved the problems with fewer iterations and less run time than MINOS 5.3 on set 1.

6.3. Results for set 2. In order to see the codes' behavior in handling problems larger than those in set 1, we included 15 much larger real-world problems in the second test set; m + n > 10000 holds for most of them. The results obtained with MINOS 5.3 and GRSA 1.0 are listed in Tables 6.4 and 6.5, respectively, where the first 8 problems (from CRE-C to OSA-60) are from Kennington, the following 5 problems (from RAT7A to NSCT2) are from BPMPD, and the last 2 problems are from Netlib (MAROS-R7 and STOCFOR3). Actually, the Kennington and BPMPD problems are all such problems that do not have BOUNDS or RANGES sections in their MPS files and whose compressed size is more than about 500KB.

It is seen that MINOS 5.3 failed to solve the last three problems (NSCT2, MAROS-R7 and STOCFOR3), while the new code solved all of them successfully with acceptable effort. So the new code appears more reliable than MINOS 5.3. Table 6.6 gives ratios of MINOS 5.3 to GRSA 1.0 for the first 12 problems only. It is seen from its bottom line that both the total iteration ratio (1.52) and the total time ratio (2.06) are even higher than those for set 1. For Phase-1, the iteration and time ratios are as high as 2.66 and 3.09, respectively. Thus, the new code significantly outperformed MINOS 5.3 on set 2, in terms of both iterations and run time.

From Tables 6.3 and 6.6, it is seen that the time ratios are larger than the iteration ratios for both set 1 and set 2. This is so because the computational effort per iteration of the new algorithm is generally less than that of the revised simplex algorithm, due to the use of the deficient basis.

6.4. Effects of degeneracy.
We note that GRSA 1.0 terminated at a deficient basis for a large number of the test problems. To show how the sizes of the bases used in the new code compare with those of classical bases, Table 6.7 gives the total and Phase-1 average s/m (%) and final s/m (%). It is seen that these percentages are quite high: around 95%, roughly speaking. This is because the initial bases used tend to have a high s/m (see the second paragraph of section 6.1). It would be preferable to have a low initial ratio s/m along with some more effective tactic to limit subsequent fill-in.

On the other hand, from the bottom line of the column labelled %Ndeg in Table 6.3, we see that the ratio of the percentages of total degenerate iterations is less than one. Thus, the proportion of such iterations with GRSA 1.0 is larger than that with MINOS 5.3 for set 1. For set 2, moreover, the total %Ndeg ratio of 0.63 in Table 6.6 reveals an even larger proportion of total degenerate iterations with GRSA 1.0 than with MINOS 5.3. Even so, the new code still significantly outperformed MINOS 5.3 on both set 1 and set 2. This is quite encouraging, since it serves as a clue to the merit of the proposed algorithm. We conclude from these results that there is no inevitable correlation between an algorithm's inefficiency and degeneracy, contradicting what has long been thought. This viewpoint coincides with recent results obtained with a sparse implementation of the so-called revised dual projective pivot algorithm [25]. It is also supported by

Table 6.1: MINOS 5.3 statistics for set 1 of 50 Netlib problems (AFIRO through TRUSS). Columns: Problem, m, n, Itns, Time, %Ndeg, Itns1, Time1, %Ndeg1. [Numerical entries omitted.]

Table 6.2: GRSA 1.0 statistics for set 1 of 50 Netlib problems (AFIRO through TRUSS). Columns: Problem, m, n, Itns, Time, %Ndeg, Itns1, Time1, %Ndeg1. [Numerical entries omitted.]

[Table 6.3. Ratios of MINOS 5.3 to GRSA 1.0 for set 1. Columns: Problem, Itns, Time, %Ndeg, Itns1, Time1, %Ndeg1. Rows: Small(20), Medium(15), Large(15), Total. Numeric entries not preserved.]

[Table 6.4. MINOS 5.3 statistics for set 2 of 15 test problems. Columns: Problem, m, n, Itns, Time, %Ndeg, Itns1, Time1, %Ndeg1. Numeric entries not preserved. MINOS 5.3 failed on three problems: NSCT2 (a triangular diagonal is zero), MAROS-R7 (singular basis after 2 factorization attempts), and STOCFOR3 (constraints cannot be satisfied accurately).]

[Table 6.5. GRSA 1.0 statistics for set 2 of 15 test problems. Columns: Problem, m, n, Itns, Time, %Ndeg, Itns1, Time1, %Ndeg1. Numeric entries not preserved.]

[Table 6.6. Ratios of MINOS 5.3 to GRSA 1.0 for the 12 test problems in set 2. Columns: Problem, Itns, Time, %Ndeg, Itns1, Time1, %Ndeg1. Rows: Kennington(8), BPMPD(4), Total. Numeric entries not preserved.]

[Table 6.7. Ratios s/m (%). Columns: Total Average, Total Final, Phase-1 Average, Phase-1 Final. Rows: Set 1, Set 2. Numeric entries not preserved.]

experiments with the steepest-edge pivot rules, which outperformed other rules by large margins even though the proportions of their associated degenerate iterations are quite similar [8].

In summary, the use of the deficient basis is beneficial, and the generalized revised simplex algorithm appears better suited than the revised simplex algorithm to solving real-world LP problems. Nevertheless, it still leaves much room for improvement. First of all, it is attractive to exploit degeneracy to as large an extent as possible; it is worth trying bases as small as possible. In addition, it is important to incorporate the steepest-edge criterion into the new algorithm in an efficient manner. This should lead to considerable progress, as it has in conventional contexts.

Acknowledgment. The author thanks Professor George B. Dantzig for his encouragement. He is also grateful to Professor Michael A. Saunders for making valuable and detailed comments and suggestions that improved this paper considerably, and for providing us with materials.

REFERENCES

[1] R.H. Bartels and G.H. Golub, The simplex method of linear programming using LU decomposition, Communications of the ACM, 12 (1969).
[2] M. Benichou, J.M. Gauthier, G. Hentges, and G. Ribiere, The efficient solution of linear programming problems: some algorithmic techniques and computational results, Mathematical Programming, 13 (1977).
[3] R.G. Bland, New finite pivoting rules for the simplex method, Mathematics of Operations Research, 2 (1977).
[4] A. Charnes, Optimality and degeneracy in linear programming, Econometrica, 20 (1952).
[5] G.B. Dantzig, Programming in a linear structure, U.S. Air Force Comptroller, USAF, Washington, DC.
[6] G.B. Dantzig, A. Orden, and P. Wolfe, The generalized simplex method for minimizing a linear form under linear inequality restraints, Pacific Journal of Mathematics, 5 (1955).
[7] S.A. Leichner, G.B. Dantzig, and J.W. Davis, A strictly improving linear programming Phase I algorithm, Annals of Operations Research, 47 (1993).
[8] J.J.H. Forrest and D. Goldfarb, Steepest-edge simplex algorithms for linear programming, Mathematical Programming, 57 (1992).
[9] J.J.H. Forrest and J.A. Tomlin, Updated triangular factors of the basis to maintain sparsity in the product form simplex method, Mathematical Programming, 2 (1972).
[10] D.M. Gay, Electronic mail distribution of linear programming test problems, Mathematical Programming Society COAL Newsletter, 13 (1985).
[11] P.E. Gill, W. Murray, M.A. Saunders, and M.H. Wright, Maintaining LU factors of a general sparse matrix, Linear Algebra and Its Applications, 88/89 (1987).
[12] P.E. Gill, W. Murray, and M.A. Saunders, SNOPT: An SQP algorithm for large-scale constrained optimization, SIAM Journal on Optimization, 12 (2002), No. 4.
[13] P.E. Gill, W. Murray, M.A. Saunders, and M.H. Wright, A practical anti-cycling procedure for linearly constrained optimization, Mathematical Programming, 45 (1989).
[14] P.M.J. Harris, Pivot selection methods of the Devex LP code, Mathematical Programming Study, 4 (1975).
[15] B. Hattersley and J. Wilson, A dual approach to primal degeneracy, Mathematical Programming, 42 (1988).

[16] A.J. Hoffman, Cycling in the simplex algorithm, Report No. 2974, National Bureau of Standards, Washington, D.C.
[17] P.-Q. Pan, Practical finite pivoting rules for the simplex method, OR Spektrum, 12 (1990).
[18] P.-Q. Pan, The most-obtuse-angle row pivot rule for achieving dual feasibility: a computational study, European Journal of Operational Research, 101 (1997).
[19] P.-Q. Pan, A dual projective simplex method for linear programming, Computers and Mathematics with Applications, 35 (1998a), No. 6.
[20] P.-Q. Pan, A basis-deficiency-allowing variation of the simplex method, Computers and Mathematics with Applications, 36 (1998b), No. 3.
[21] P.-Q. Pan, A projective simplex method for linear programming, Linear Algebra and Its Applications, 292 (1999).
[22] P.-Q. Pan, A projective simplex algorithm using LU decomposition, Computers and Mathematics with Applications, 39 (2000).
[23] P.-Q. Pan, A dual projective pivot algorithm for linear programming, Computational Optimization and Applications, 29 (2004).
[24] P.-Q. Pan, Dual projective pivot algorithm using LU decomposition, submitted.
[25] P.-Q. Pan, A revised dual projective pivot algorithm for linear programming, SIAM Journal on Optimization, to appear.
[26] P.-Q. Pan and Y. Pan, A phase-1 approach to the generalized simplex algorithm, Computers and Mathematics with Applications, 41 (2001).
[27] J.K. Reid, A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases, Mathematical Programming, 24 (1982).
[28] B.A. Murtagh and M.A. Saunders, MINOS 5.5 User's Guide, Technical Report SOL 83-20R, Department of Operations Research, Stanford University, Stanford.
[29] U.H. Suhl and L.M. Suhl, Computing sparse LU factorizations for large-scale linear programming bases, ORSA Journal on Computing, 2 (1990).
[30] T. Terlaky, A convergent criss-cross method, Mathematische Operationsforschung und Statistik, Series Optimization, 16 (1985), No. 5.


Week 4. (1) 0 f ij u ij. Week 4 1 Network Flow Chapter 7 of the book is about optimisation problems on networks. Section 7.1 gives a quick introduction to the definitions of graph theory. In fact I hope these are already known

More information

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered: LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l

More information

Linear Programming, Lecture 4

Linear Programming, Lecture 4 Linear Programming, Lecture 4 Corbett Redden October 3, 2016 Simplex Form Conventions Examples Simplex Method To run the simplex method, we start from a Linear Program (LP) in the following standard simplex

More information

Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem

Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem Volume 118 No. 6 2018, 287-294 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem

More information

0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small.

0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small. 0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture Operations Research is the branch of science dealing with techniques for optimizing the performance of systems. System is any organization,

More information

::::: OFTECHY. .0D 0 ::: ::_ I;. :.!:: t;0i f::t l. :- - :.. :?:: : ;. :--- :-.-i. .. r : : a o er -,:I :,--:-':: : :.:

::::: OFTECHY. .0D 0 ::: ::_ I;. :.!:: t;0i f::t l. :- - :.. :?:: : ;. :--- :-.-i. .. r : : a o er -,:I :,--:-':: : :.: ,-..., -. :', ; -:._.'...,..-.-'3.-..,....; i b... {'.'',,,.!.C.,..'":',-...,'. ''.>.. r : : a o er.;,,~~~~~~~~~~~~~~~~~~~~~~~~~.'. -...~..........".: ~ WS~ "'.; :0:_: :"_::.:.0D 0 ::: ::_ I;. :.!:: t;0i

More information

(17) (18)

(17) (18) Module 4 : Solving Linear Algebraic Equations Section 3 : Direct Solution Techniques 3 Direct Solution Techniques Methods for solving linear algebraic equations can be categorized as direct and iterative

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 5: The Simplex method, continued Prof. John Gunnar Carlsson September 22, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 22, 2010

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

Permutations in the factorization of simplex bases

Permutations in the factorization of simplex bases Permutations in the factorization of simplex bases Ricardo Fukasawa, Laurent Poirrier {rfukasawa,lpoirrier}@uwaterloo.ca December 13, 2016 Abstract The basis matrices corresponding to consecutive iterations

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Special cases of linear programming

Special cases of linear programming Special cases of linear programming Infeasible solution Multiple solution (infinitely many solution) Unbounded solution Degenerated solution Notes on the Simplex tableau 1. The intersection of any basic

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Sparse Rank-Revealing LU Factorization

Sparse Rank-Revealing LU Factorization Sparse Rank-Revealing LU Factorization SIAM Conference on Optimization Toronto, May 2002 Michael O Sullivan and Michael Saunders Dept of Engineering Science Dept of Management Sci & Eng University of Auckland

More information