On a Dual Method for a Specially Structured Linear Programming Problem


Rutcor Research Report

On a Dual Method for a Specially Structured Linear Programming Problem

Olga Fiedler^a, Csaba I. Fábián^c, András Prékopa^b

RRR 25-95, June 1995

RUTCOR — Rutgers Center for Operations Research, Rutgers University, P.O. Box 5062, New Brunswick, New Jersey. Email: rrr@rutcor.rutgers.edu

^a ofiedler@math.fu-berlin.de, Freie Universität Berlin, FB Mathematik u. Informatik, Arnimallee 2-6, 14195 Berlin, Germany
^b prekopa@rutcor.rutgers.edu, RUTCOR, Rutgers Center for Operations Research, New Brunswick, New Jersey, USA
^c fabian@cs.elte.hu, Visitor to RUTCOR and Dept. of OR, L. Eötvös University, Múzeum krt. 6-8, Budapest, Hungary, 1088

Abstract. The paper revises and improves on a Dual Type Method (DTM) developed by A. Prékopa (1990) in two ways. The first improvement allows us, in each iteration, to perform the largest step toward the optimum. The second improvement consists of exploiting the structure of the working basis, which has to be inverted in each iteration of the DTM, and updating its inverse in product form, as is usual in the case of the standard dual method.

Acknowledgements: The first author gratefully acknowledges the support of the Kommission für die Förderung von Nachwuchswissenschaftlerinnen der FU Berlin. The third author gratefully acknowledges the support of the Foundation for the Development of Higher Education in Hungary, project III/4.

3 RRR Page. Introduction 99 A. Prekopa developed the Dual Type Method (DTM) to solve a specially structured linear programming problem (see [6]). This method can be applied to solve Stochastic Programming Problems, where the underlying problem is an LP and some of the right hand side values are discrete random variables the violations of the random constraints are penalized by piecewise linear and convex functions added to the original objective function. It can also be applied to solve special linear programming problems, e.g., the constrained minimum absolute deviation problem (see [8]). In this paper we present an Improved Dual Type Method (IDTM) for the solution of the same kind of programming problems as in [6]. The novelty here is that, in each iteration, the largest possible increase is made toward the optimum value and the inverse of the working basis is updated in the product form. In each iteration of the standard Dual Method (DM) (see [3]) the inverse of the new basis (obtained from the old one by a column-exchange) is updated by the use of simple transformation formulas. In the DTM the inversion of the dual feasible basis ^ reduces (due to its block-structure which arises from the use of the -representation) to the problem of inversion of a smaller size matrix, called working basis, which is taken from the intersection of the th block and ^ (cf. (.6)). The transformation formulas to nd the inverse of the new working basis are, however, more complicated than those in the standard DM, because arises from the old working basis not just by a simple column-exchange, but by the application of more complicated rules. This yields the invertion of the new working basis in each iteration of the DTM. In the IDTM we exploit the structure of the new working basis in such away that we are able to update its inverse (similarly as in the DM) in product form (cf., e.g., (2..), (2.2.9), (2.4.7)). 
We also give simple transformation formulas to compute the basic solution and the dual vector corresponding to the new dual feasible basis ^. The Algorithm for the IDTM is summarized in Section 3. The implementation of the IDTM, based on.i. Fabian's general optimization routine library called LINX, is described in Section 4. The code is underway for full implementation.

1.1. Formulation of the Problem

The IDTM solves the problem

(1.1)   min_x { c^T x + Σ_{i=1}^r f_i(y_i) }
        s.t.  Ax = b,
              T_i x = y_i,  i = 1,...,r,
              x ≥ 0,

where A = (a_1,...,a_n) is an (m × n) matrix, T is an (r × n) matrix, T_1,...,T_r are the rows of T, and the vectors x, b, y are of suitable sizes. The f_i(·) are piecewise linear and convex functions defined on the intervals [z_{i1}, z_{i,k_i+1}] with breakpoints

        z_{i1} < z_{i2} < ... < z_{i,k_i+1},   i = 1,...,r.

For each i = 1,...,r we will only need the function values f_i(z_{ij}), j = 1,...,k_i+1. The convexity of f_i(·) holds if and only if the second-order divided differences of the sequence f_i(z_{ij}), j = 1,...,k_i+1, i = 1,...,r, are positive. Given a discrete function f with values f_j = f(z_j), j = 1,...,k+1, its first- and second-order divided differences are defined by the formulas

(1.2)   [z_j, z_{j+1}; f] := (f_{j+1} − f_j)/(z_{j+1} − z_j),   j = 1,...,k,
        [z_j, z_{j+1}, z_{j+2}; f] := ([z_{j+1}, z_{j+2}; f] − [z_j, z_{j+1}; f])/(z_{j+2} − z_j),   j = 1,...,k−1.

Using the λ-representation for the functions f_i(·), problem (1.1) can be reformulated as follows:

(1.3)   min_x { c^T x + Σ_{i=1}^r Σ_{j=1}^{k_i+1} f_i(z_{ij}) λ_{ij} }
        s.t.  Ax = b,
              T_i x = y_i,
              Σ_{j=1}^{k_i+1} z_{ij} λ_{ij} = y_i,
              Σ_{j=1}^{k_i+1} λ_{ij} = 1,
              x ≥ 0,  λ_{ij} ≥ 0,   i = 1,...,r,  j = 1,...,k_i+1.

We assume that z_{i1} ≤ T_i x ≤ z_{i,k_i+1} (i = 1,...,r) holds for any x that satisfies Ax = b, x ≥ 0. If we introduce the notation f_i(z_{ij}) = c_{ij}, then (1.3) can be written in the following form:

(1.4)   min_x { c^T x + Σ_{i=1}^r Σ_{j=1}^{k_i+1} c_{ij} λ_{ij} }
        s.t.  Ax = b,
              T_i x − Σ_{j=1}^{k_i+1} z_{ij} λ_{ij} = 0,
              Σ_{j=1}^{k_i+1} λ_{ij} = 1,
              x ≥ 0,  λ_{ij} ≥ 0,   j = 1,...,k_i+1,  i = 1,...,r.

The matrix of the equality constraints is subdivided into (r+1) blocks; the cost row is

        c_1 ... c_n | c_{11} ... c_{1,k_1+1} | ... | c_{r1} ... c_{r,k_r+1}.
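As a small illustration of definition (1.2) (not part of the paper), the following sketch computes the divided differences of a discrete function and tests the convexity criterion stated above; the function names are ours:

```python
def divided_differences(z, f):
    """First- and second-order divided differences of a discrete
    function with values f[j] at breakpoints z[j] (cf. (1.2))."""
    assert len(z) == len(f) and len(z) >= 3
    first = [(f[j + 1] - f[j]) / (z[j + 1] - z[j]) for j in range(len(z) - 1)]
    second = [(first[j + 1] - first[j]) / (z[j + 2] - z[j])
              for j in range(len(z) - 2)]
    return first, second

def is_convex(z, f):
    """f is convex on the breakpoints iff all second-order
    divided differences are positive."""
    _, second = divided_differences(z, f)
    return all(d > 0 for d in second)
```

For example, f(z) = z^2 sampled at z = 0, 1, 2, 3 gives second-order divided differences (1, 1), hence `is_convex` reports True.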

(1.5)
        A
        T_1   −z_{11} ... −z_{1,k_1+1}
               1      ...  1
        ...                             ...
        T_r                                   −z_{r1} ... −z_{r,k_r+1}
                                               1      ...  1
      0th block      1st block          ...        rth block

In [6] it is shown that an initial dual feasible basis for problem (1.4) consists of those vectors which trace out a (so-called) working basis B = (a_{i_1},...,a_{i_m}) from A, the matrix T_B (corresponding to B) from T in the 0th block, and consecutive pairs of vectors in the other blocks; the subscripts of the vectors in B are chosen in a special way (which is explained below), whereas the subscripts of the consecutive vectors in the blocks 1,...,r are chosen arbitrarily. Thus

(1.6)
        B̂ =   B
              T_B   −z_{1,j_1} −z_{1,j_1+1}
                     1          1
                                             ...
                                                  −z_{r,j_r} −z_{r,j_r+1}
                                                   1          1

where T_i denotes the ith row of T, i = 1,...,r. The concept of "working basis" was first introduced by R. Wets in [9].

During the later iterations the basis structure of B̂ may change in such a way that the working basis may have s (0 ≤ s ≤ r) more columns intersecting the matrix A, and s more rows from the matrix T. In this case there are exactly (r − s) blocks with a consecutive pair in each block, and s blocks with a single vector in each block. Let S be the set of those row subscripts of T corresponding to which only one −z_{ij} is in a basic column, and Q := {1,...,r} \ S. Then the working basis takes the following form:

(1.7)   B =   A_B
              T_i (i ∈ S)

where A_B = (a_{i_1},...,a_{i_m}, a_{i_{m+1}},...,a_{i_{m+s}}).

The dual vector corresponding to any basis B̂ for problem (1.4) will be partitioned as (y^T, v^T, w^T), where y is an m-component vector and v, w are r-component vectors.

In the DTM the problem of updating the inverse of the new dual feasible basis B̂ in each step reduces (due to its special block structure) to the problem of inverting only a working basis,

which is of much smaller size than the initial dual feasible basis B̂. Unfortunately, the transformation formulas for updating the inverse of the new working basis B' (which are usually used in simplex-type methods) cannot always be applied here in a straightforward manner, because B' sometimes arises from the old working basis B not just by a simple column exchange, but by the use of more complicated rules, such as:

(1) extension of B by one additional column and one additional row; this happens when a column from one of the last r blocks (with two consecutive vectors in the basis) leaves the basis, and a nonbasic column from the 0th block enters the basis (cf. Case (i), Section 2);

(2) exchange of one row in B accompanied by a proper change in the coefficients of the cost function; this happens when a column from a block with two vectors in it leaves the basis, and a column from a block with just one vector in it enters the basis (cf. Case (ii), Section 2);

(3) deletion of one column and one row from B; this happens when a column from the 0th block leaves the basis and the entering column is from one of the last r blocks with just one vector (containing −z_{i,j_i}) in the basis (cf. Case (iv), Section 2).

Therefore, in the DTM, one has to invert the new working basis in each step. In the IDTM we are able to construct in each step an auxiliary matrix (with known inverse) such that the new working basis is obtained from this auxiliary matrix by a simple column exchange. This allows us to update B'^{-1} in product form.

Before we start with the presentation of the IDTM, which is a straightforward extension of the DTM, for the reader's convenience we now give a short description of an iteration of the DTM (cf. [6]):

Step 0. Construction of an initial dual feasible basis B̂:

i.) For i = 1,...,r choose j_i satisfying 1 ≤ j_i ≤ k_i. Include the columns with the subscripts (i, j_i), (i, j_i+1), i = 1,...,r, from the last r blocks of (1.5) into the basis B̂.

ii.)
Determine v_i and w_i (i = 1,...,r) by the equations

(1.8)   −z_{i,j_i} v_i + w_i = c_{i,j_i},
        −z_{i,j_i+1} v_i + w_i = c_{i,j_i+1},   i = 1,...,r.

iii.) Solve the LP

(1.9)   minimize { Σ_{i=1}^n (c_i − v^T t_i) x_i }
        s.t.  a_1 x_1 + ... + a_n x_n = b,
              x_i ≥ 0,  i = 1,...,n,

where v^T = (v_1,...,v_r) and t_1,...,t_n are the columns of the matrix T. Note that the LP (1.9) can be solved by any method which provides us with a primal-dual feasible basis

B = {a_k, k ∈ I_B}, where I_B = {i_1,...,i_m}, e.g., by the interior point method of Lustig, Marsten and Shanno, which is well suited especially for problems of large sizes (see [4]). Include B into B̂ in such a way that B̂ now consists of those vectors which trace out B from A, T_B from T in the 0th block, and the initially selected consecutive pairs in the other blocks.

Step 1. Calculation of the corresponding basic solution: Determine the sets S, Q; then determine the components of the corresponding basic solution as follows:

(1.10)  x_B := B^{-1} b_B,  where  b_B^T := (b^T, (z_{i,j_i}, i ∈ S)),
        λ_{i,j_i} = (z_{i,j_i+1} − T_i x_B)/(z_{i,j_i+1} − z_{i,j_i}),
        λ_{i,j_i+1} = (T_i x_B − z_{i,j_i})/(z_{i,j_i+1} − z_{i,j_i}),   i = 1,...,r.

Step 2. Test for primal feasibility: If x_B ≥ 0 and λ_{i,j_i}, λ_{i,j_i+1} ≥ 0, i = 1,...,r, then STOP: the basis is primal feasible, hence also optimal. Otherwise, choose any basic component which is negative and let the corresponding vector leave the basis. Go to Step 3.

Step 3. Update all those nonbasic columns from (1.5) which may enter the basis, and compute the corresponding reduced costs c̄_p, c̄_{i,j_i} (cf. Step 4 in [6]). (Note that to perform an iteration of the DTM one does not need to price all nonbasic vectors in the last r blocks of the matrix of the equality constraints (1.5), but just those which, when they enter, do not violate the special structure of the dual feasible basis: from each of the last r blocks either one or two vectors are in the basis.)

Step 4. Determine the vector which enters the basis by taking the minimum of the fractions of the reduced costs and the corresponding entries of the leaving variable in the tableau (cf. Step 5 in [6]). Go to Step 1.

Remark 1. We briefly show that B̂ constructed in Step 0 gives a dual feasible basis for problem (1.4). The nonsingularity of B̂ is trivial. To show the dual feasibility of B̂, let y be the dual vector corresponding to the optimal basis B for problem (1.9), i.e.,
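The λ updates of Step 1 can be sketched as follows (an illustration of (1.10), not the paper's code; the function name is ours). Given the pair of basic breakpoints z_{i,j_i}, z_{i,j_i+1} of block i and the value T_i x_B, the two basic λ components are:

```python
def basic_lambdas(z_lo, z_hi, t_x):
    """Basic lambda pair of one block in Step 1 (cf. (1.10)):
    z_lo = z_{i,j_i}, z_hi = z_{i,j_i+1}, t_x = T_i x_B.
    The pair sums to 1 and is nonnegative iff z_lo <= t_x <= z_hi,
    which is exactly the primal feasibility test of Step 2."""
    lam_lo = (z_hi - t_x) / (z_hi - z_lo)   # lambda_{i,j_i}
    lam_hi = (t_x - z_lo) / (z_hi - z_lo)   # lambda_{i,j_i+1}
    return lam_lo, lam_hi
```

For instance, with breakpoints 0 and 2 and T_i x_B = 0.5 the pair is (0.75, 0.25); with T_i x_B = 3, outside the interval, λ_{i,j_i} turns negative and the column may be chosen to leave the basis.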
y is any solution of the equation

(1.11)  y^T B = c_B^T − v^T T_B,

where c_B is that part of c which corresponds to B. The optimality criterion reads as follows:

        y^T a_i = c_i − v^T t_i,   i ∈ I_B,
        y^T a_i ≤ c_i − v^T t_i,   i ∉ I_B,

which implies

(1.12)  v^T t_i + y^T a_i ≤ c_i,   i = 1,...,n.

Inequalities (1.12), equalities (1.8) and Lemma 2.1 in [6] imply the dual feasibility of B̂. □

In the next section we explain in detail the construction of the auxiliary matrix (which is used in the IDTM for updating the working basis in product form), give the respective updating formulas, and compute the increase in the value of the dual objective function during one iteration, used in our algorithm to perform the largest step toward the optimum.

2. Transformation Formulas

In this section we first present simple transformation formulas to obtain the new data of problem (1.3) from the old data when passing from the old dual feasible basis B̂ to the new one B̂' by one of the following column exchanges:

(i) deleting the column of −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, and including a column from the 0th block;

(ii) deleting the column of −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, and including a column of −z_{i,j_i−1} (−z_{i,j_i+1}), i ∈ S;

(iii) deleting a column of −z_{q,j_q} (−z_{q,j_q+1}) and including a column of −z_{q,j_q+2} (−z_{q,j_q−1}), q ∈ Q;

(iv) deleting a column from the 0th block and including a column of −z_{i,j_i+1} (−z_{i,j_i−1}), i ∈ S;

(v) deleting a column from the 0th block and including a nonbasic column from the same block.

In what follows we frequently use the following

Lemma 1: Let A be the following (m × m) matrix:

        A =   A_11                0       A_12
              a_{i1} ... a_{i,j−1}  a_{ij}  a_{i,j+1} ... a_{im}
              A_21                0       A_22

where a_{ij} ≠ 0, 1 ≤ i, j ≤ m, and A_11, A_12, A_21, A_22 are (i−1)×(j−1), (i−1)×(m−j), (m−i)×(j−1), (m−i)×(m−j) matrices, respectively. Let C be the matrix obtained from the matrix A by deleting the ith row and the jth column. Then the inverse C^{-1} is obtained from the inverse A^{-1} by deleting the jth row and the ith column.

Now we consider all five cases separately.

2.1. Case (i): deleting the column of −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, and including a nonbasic column from the 0th block.

The (old) working basis has the following form (cf. (1.7)):

(2.1.1)  B =   A_B
               T_S

where T_S = {T_i, i ∈ S}. Let

(2.1.2)  I_B = {i_1,...,i_m, i_{m+1},...,i_{m+s}},   S = {h_1,...,h_s},   0 ≤ s ≤ r.

If in the (old) dual feasible basis B̂ we exchange the column containing −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, for that column from the 0th block whose subscript is p (that is, the column intersecting A at a_p), then we pass to the (new) working basis B'. It has the form

(2.1.3)  B' =   A_B'
                T_S'

where

(2.1.4)  I_B' = I_B ∪ {p},   S' = S ∪ {q}.

The new working basis B' arises from the old one in such a way that we extend it by an additional column and row, where the subscripts of the columns {(a_{i_j}^T, t_{S',i_j}^T)^T, i_j ∈ I_B'} and those of the rows {T_{h_j}, h_j ∈ S'} admit increasing orders. For example, if the additional row and column are the last ones, the new working basis has the structure

(2.1.5)  B' =   A_B   a_p
                T_S   t_{S,p}
                T_q   t_{qp}.

To perform one iteration of our algorithm we need to compute the inverse B'^{-1}, given B^{-1}. This is easily done, provided that we are able to construct an auxiliary matrix B̃' (with a

known inverse) such that the new working basis will be obtained from this auxiliary matrix by a column exchange. Then the inverse of the new working basis is obtained from the inverse of the specially constructed auxiliary matrix in product form. We distinguish the following two cases:

case (a): the leaving vector is the column of −z_{q,j_q};
case (b): the leaving vector is the column of −z_{q,j_q+1};

and introduce the notations

(2.1.6)  z̃_q = z_{q,j_q+1} in case (a),  z_{q,j_q} in case (b);
         Δz_q = z_{q,j_q+1} − z_{q,j_q} in case (a),  z_{q,j_q} − z_{q,j_q+1} in case (b);
         λ̃_q = λ_{q,j_q} in case (a),  λ_{q,j_q+1} in case (b).

Now we construct the auxiliary matrix B̃'. For that, we replace the new column (a_p^T, t_{S',p}^T)^T in B' (given by (2.1.3)) by a column in which all elements are 0, except for Δz_q, the latter being placed in the position of t_{qp}. In the example (2.1.5) the auxiliary matrix has the form

(2.1.7)  B̃' =   A_B   0
                T_S   0
                T_q   Δz_q.

Let us subdivide the set of subscripts I_B' into two parts L_p and U_p as follows:

(2.1.8)  I_B' = L_p ∪ {p} ∪ U_p,  where  L_p := {i_j | i_j < p},  U_p := {i_j | i_j > p}.

Let E^p_first, E^p_last be the matrices consisting of the (m+s+1)-component unit vectors with subscripts in L_p and U_p, respectively, and let us introduce the matrix

(2.1.9)  F̃ = (E^p_first, d_p^{(1)}, E^p_last),

where the (m+s+1)-component vector d_p^{(1)} is placed in the position of the entering column. It is computed as follows:

(2.1.10)  d_p^{(1)} = B̃'^{-1} (a_p^T, t_{S',p}^T)^T.

We have that

(2.1.11)  B' = B̃' F̃.
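A matrix like F̃ in (2.1.9), i.e., an identity matrix with one column replaced, is the elementary ("eta") factor of the product form; applying its inverse to a vector needs no explicit inversion. The following sketch (ours, not the paper's routine) shows the standard O(k) application of such an inverse:

```python
def eta_inverse_apply(d, p, v):
    """Apply F^{-1} to a vector v, where F is the k x k identity with
    its p-th column replaced by d (0-based), as in (2.1.9).
    Requires the pivot d[p] to be nonzero."""
    assert d[p] != 0
    out = list(v)
    out[p] = v[p] / d[p]               # pivot component first
    for j in range(len(v)):
        if j != p:
            out[j] = v[j] - d[j] * out[p]   # eliminate off-pivot terms
    return out
```

Since B'^{-1} = F̃^{-1} B̃'^{-1} by (2.1.12), a solve with B' reduces to a solve with B̃' followed by one such eta application.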

Thus the auxiliary matrix we have been looking for is constructed. It follows that

(2.1.12)  B'^{-1} = F̃^{-1} B̃'^{-1}.

It remains to compute F̃^{-1} and B̃'^{-1}. Using Lemma 1, we obtain that B̃'^{-1} is obtained from B^{-1} if we extend it by one additional row and column: the additional row enters in the position of p ∈ I_B', and the additional column enters in the position of q ∈ S'. The additional row consists of the elements of −T_q B^{-1}/Δz_q and of 1/Δz_q, the latter being placed in the position of the additional column in B̃'^{-1}; the additional column has elements all equal to 0, with the exception of the new element 1/Δz_q. In our example (2.1.7), B̃'^{-1} has the form

(2.1.13)  B̃'^{-1} =   B^{-1}             0
                      −T_q B^{-1}/Δz_q    1/Δz_q.

Now, using (2.1.10), we can compute d_p^{(1)}: the resulting vector consists of the components of the vector d_p and one additional element (t_{qp} − T_q d_p)/Δz_q, the latter being placed in the position p of the new row in B̃'^{-1}:

(2.1.14)  d_p^{(1)} = ( [d_p]^p_first ; (t_{qp} − T_q d_p)/Δz_q ; [d_p]^p_last ),  the pth component being the middle one,

where

          d_p = B^{-1} (a_p^T, t_{S,p}^T)^T.

The vectors [d_p]^p_first, [d_p]^p_last are obtained from the (m+s)-component vector d_p if we subdivide it into two parts: the first part contains those components of d_p which have subscripts in L_p, and the last part contains those which have subscripts in U_p. Having computed the (m+s+1)-component vector d_p^{(1)}, we can compute F̃^{-1}:

(2.1.15)  F̃^{-1} = (E^p_first, η_p, E^p_last),   η_p = ( −d_p Δz_q/(t_{qp} − T_q d_p) ; Δz_q/(t_{qp} − T_q d_p) ),

the last component of η_p being placed in position p. Finally, using (2.1.12), we compute the matrix B'^{-1}, which consists of the matrix C^{-1} given below, extended by an additional row and column in the same manner as was done for the matrix B̃'^{-1}. The additional row (placed in the position of p in B'^{-1}) consists of the elements of the row −T_q B^{-1}/(t_{qp} − T_q d_p) and of the element 1/(t_{qp} − T_q d_p), the latter being placed in the position of the additional column. The additional column (placed in the position of q in B'^{-1}) consists of the elements of −d_p/(t_{qp} − T_q d_p), except for the element 1/(t_{qp} − T_q d_p).
The matrix C^{-1} is computed as follows:

          C^{-1} = B^{-1} + d_p T_q B^{-1}/(t_{qp} − T_q d_p).

In case of example (2.1.5) the inverse of the new working basis has the form

(2.1.16)  B'^{-1} =   C^{-1}                          −d_p/(t_{qp} − T_q d_p)
                      −T_q B^{-1}/(t_{qp} − T_q d_p)    1/(t_{qp} − T_q d_p).
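The bordered inverse (2.1.16) can be checked numerically. The sketch below (ours, written for the case of example (2.1.5), where the new row and column are appended last) builds B'^{-1} from B^{-1}, the entering column, the row T_q restricted to the basic columns, and the corner element t_{qp}:

```python
def bordered_inverse(B_inv, u, t_row, corner):
    """Inverse of the bordered matrix [[B, u], [t_row, corner]] given
    B_inv = B^{-1} (k x k lists). Mirrors (2.1.16) with
    d_p = B^{-1} u and Schur complement sigma = t_qp - T_q d_p."""
    k = len(B_inv)
    d = [sum(B_inv[i][j] * u[j] for j in range(k)) for i in range(k)]       # d_p
    tB = [sum(t_row[i] * B_inv[i][j] for i in range(k)) for j in range(k)]  # T_q B^{-1}
    sigma = corner - sum(t_row[i] * d[i] for i in range(k))
    new = [[B_inv[i][j] + d[i] * tB[j] / sigma for j in range(k)] + [-d[i] / sigma]
           for i in range(k)]
    new.append([-tB[j] / sigma for j in range(k)] + [1.0 / sigma])
    return new
```

Multiplying the bordered matrix by the result should reproduce the identity, which is a convenient sanity check when implementing the update.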

Let x̃ and x_B' designate the solutions of the following systems of equations:

(2.1.17)  B̃' x̃ = b_{B'},
(2.1.18)  B' x_{B'} = b_{B'},   where   b_{B'} := ( b ; z_{i,j_i} (i ∈ S') ).

It is easy to see that the vector x̃ in (2.1.17) consists of the components of the old basic solution x_B and one additional component λ̃_q, which is placed in the position of p ∈ I_B':

(2.1.19)  x̃ = ( [x_B]^p_first ; λ̃_q ; [x_B]^p_last ),  λ̃_q being the pth component,

where [x_B]^p_first and [x_B]^p_last are obtained from the vector x_B if we subdivide it into two parts corresponding to the subdivision of the subscript set I_B' = L_p ∪ {p} ∪ U_p. Then, using (2.1.18) and (2.1.12), we obtain

(2.1.20)  x_{B'} = B'^{-1} b_{B'} = F̃^{-1} B̃'^{-1} b_{B'} = F̃^{-1} x̃.

If we substitute (2.1.15) into (2.1.20), we obtain

(2.1.21)  x_{B'} = ( [x_B − x_q^{(1)} d_p]^p_first ; x_q^{(1)} ; [x_B − x_q^{(1)} d_p]^p_last ),
          where   x_q^{(1)} := λ̃_q Δz_q/(t_{qp} − T_q d_p).

Here λ̃_q, Δz_q and d_p are defined in (2.1.6) and (2.1.14), respectively; x_B is the old basic solution defined in (1.10), and the subdivision of x_B − x_q^{(1)} d_p is again performed according to the subdivision of the set I_B' into L_p and U_p. As regards the new basic λ-components, we easily find that

(2.1.22)  λ^{(1)}_{i,j_i+1} = λ_{i,j_i+1} + x_q^{(1)} (t_{ip} − T_i d_p)/(z_{i,j_i+1} − z_{i,j_i}),
          λ^{(1)}_{i,j_i} = 1 − λ^{(1)}_{i,j_i+1},   (i ∈ Q^{(1)}),

where

(2.1.23)  Q^{(1)} := Q \ {q}.

To define the new dual vector y' := {y'_i, i ∈ {1,...,m} ∪ S'} we have to solve the equation

(2.1.24)  (y')^T B' = (ĉ')^T,   where   (ĉ')^T := c_{B'}^T − (v^{(1)})^T T_{B'},   v^{(1)} := {v^{(1)}_i, i ∈ Q^{(1)}},

and c_{B'} and T_{B'} are those parts of c and T, respectively, which correspond to the basic subscripts of B'. Now we have that

(2.1.25)  ĉ' = ( [ĉ + v_q T_{q,B}^T]^p_first ; ĉ_p + v_q t_{qp} ; [ĉ + v_q T_{q,B}^T]^p_last ),  the pth component being the middle one,

where

(2.1.26)  ĉ := c_B − Σ_{h∈Q^{(1)}} v_h T_{h,B},   ĉ_p := c_p − Σ_{h∈Q^{(1)}} v_h t_{hp}.

Here v_h, h ∈ Q^{(1)}, are the components of the old dual vector v, and ĉ + v_q T_{q,B}^T is subdivided into two parts corresponding to the subdivision of I_B' into L_p and U_p. Let us introduce the following notations:

(2.1.27)  Δc_p := ĉ^T d_p − ĉ_p,
          K_{qp} = Δc_p/(t_{qp} − T_q d_p) in case (a),   K_{qp} = Δc_p/(T_q d_p − t_{qp}) in case (b).

Then, using the previously obtained inverse B'^{-1}, we obtain from (2.1.24):

(2.1.28)  y' = ( [y + K_{qp} (T_q B^{-1})^T]^q_first ; v_q − K_{qp} ; [y + K_{qp} (T_q B^{-1})^T]^q_last ),  with v_q − K_{qp} in the qth component,

where the subdivision of the (m+s)-component vector y + K_{qp} (T_q B^{-1})^T into two parts is performed corresponding to the subdivision of the set of subscripts S̄' := {1,...,m} ∪ S' into L_q and U_q with respect to the position of the subscript q:

          S̄' = L_q ∪ {q} ∪ U_q,   L_q := {j ∈ S̄' | j < q},   U_q := {j ∈ S̄' | j > q}.

The other dual variables {v^{(1)}_i, w^{(1)}_i, i ∈ Q^{(1)}} remain unchanged. Finally, given the old value V = b_B^T y + Σ_{i∈Q} w_i, we compute the new dual objective function value V':

(2.1.29)  V' = b_{B'}^T y' + Σ_{i∈Q^{(1)}} w^{(1)}_i,

where b_{B'} is given in (2.1.18). Using (2.1.28), we obtain

(2.1.30)  ΔV := V' − V = K_{qp} (T_q x_B − z̃_q),

where z̃_q and K_{qp} are defined in (2.1.6) and (2.1.27), respectively.

Summary of Case (i): Suppose a column of −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, leaves the basis and a nonbasic column from the 0th block, the column of a_p, enters the basis. Then:

1.) the components of the new primal solution corresponding to B', and the λ-components, are given by (2.1.21) and (2.1.22), respectively;

2.) the components of the new dual vector corresponding to B' are given by (2.1.28), whereas all other components of the new dual vector remain unchanged;

3.) the increase ΔV in the value of the dual objective function (which is used in Section 3 to determine the largest step toward the optimum) is given by (2.1.30).

Remark 2.1.1: In the case of a minimization problem the dual method produces a nondecreasing sequence of objective function values. Thus, (2.1.27) and (2.1.30) imply

(2.1.31)  ( Δc_p/(t_{qp} − T_q d_p) ) (T_q x_B − z̃_q) ≥ 0.

This has the following consequence: in case (a) λ_{q,j_q} < 0, and therefore (using (1.10))

(2.1.32)  T_q x_B − z_{q,j_q+1} > 0.

Furthermore, if the column of a_p enters the basis and Δc_p ≤ 0, then we have to restrict ourselves to negative denominators in the expression (2.1.31), i.e., the entering column of a_p should be chosen in such a way that

(2.1.33)  t_{qp} − T_q d_p < 0.

In case (b) λ_{q,j_q+1} < 0, and therefore (using (1.10))

(2.1.34)  T_q x_B − z_{q,j_q} < 0.

Then, using similar considerations as in case (a), we obtain that the entering column of a_p has to be chosen in such a way that

(2.1.35)  t_{qp} − T_q d_p > 0.  □

Remark 2.1.2: To perform the largest step toward the optimum we proceed in the following manner:

1.) For each q ∈ Q such that λ_{q,j_q} < 0 (or λ_{q,j_q+1} < 0) determine a nonbasic subscript p(q) ∈ {1,2,...,n} \ I_B at which the following minimum is attained:

          m(q) := min_p { Δc_p Δz_q/(t_{qp} − T_q d_p) : (t_{qp} − T_q d_p)/Δz_q < 0 },

where Δz_q is as in (2.1.6) and d_p, Δc_p are as in (2.1.14), (2.1.27).

2.) Then, the pair of leaving and entering vectors which produces the largest increase in the dual objective function value is given by the column of −z_{q̂,j_{q̂}} and the p̂th column of the 0th block, where p̂ := p(q̂). The subscript q̂ is determined as follows:

(2.1.36)  q̂ := arg max_{q∈Q} { (T_q x_B − z̃_q) K_{q,p(q)} } = arg max_{q∈Q} { ((T_q x_B − z̃_q)/Δz_q) m(q) : (T_q x_B − z̃_q)/Δz_q > 0 },

where z̃_q is given in (2.1.6).

Note: The considerations of Remarks 2.1.1 and 2.1.2 are used in Section 3 for the description of Step 3 of our algorithm. According to Remark 2.1.1, the pair (q, p) has to satisfy the inequalities (2.1.32), (2.1.33) if the column of −z_{q,j_q} is the leaving one, and (2.1.34), (2.1.35) if the column of −z_{q,j_q+1} is the leaving one. Hence, in Steps 3.1 and 3.2 in Section 3, the pair of subscripts (q, j_q) (or (q, j_q+1)) varies in Q^{(1)} (or Q^{(2)}), Q^{(1)}, Q^{(2)} ⊆ Q, and the nonbasic subscript p varies in P_q^{(1)} (or P_q^{(2)}) (cf. (3.1), (3.7) and (3.3), (3.8), respectively).

2.2. Case (ii): delete a column of −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, and include a column of −z_{i,j_i−1} (−z_{i,j_i+1}), i ∈ S.

In this case the new working basis B' is obtained from the old working basis B by deleting the row T_i and including the row T_q. The new sets Q' and S' are obtained as follows:

(2.2.1)  Q' = (Q \ {q}) ∪ {i},   S' = (S \ {i}) ∪ {q}.

The new working basis has the form

(2.2.2)  B' =   A_B
                T_S'.

First we compute the new primal and dual vectors corresponding to the working basis B'. We solve equations (2.1.18) and (2.1.24) with the new sets Q' and S'. We distinguish the following two cases:

case (a): the entering vector is the column of −z_{i,j_i−1};
case (b): the entering vector is the column of −z_{i,j_i+1};

and introduce the notations

(2.2.3)  Δz_i = z_{i,j_i} − z_{i,j_i−1} in case (a),   z_{i,j_i} − z_{i,j_i+1} in case (b);
         λ̃^{(1)}_i = λ^{(1)}_{i,j_i−1} in case (a),   λ^{(1)}_{i,j_i+1} in case (b);
         Δc̃_i = c_{i,j_i−1} − c_{i,j_i} in case (a),   c_{i,j_i+1} − c_{i,j_i} in case (b).

To compute the new basic solution we proceed similarly as we did in the previous section and construct an auxiliary matrix B̃'. For that, first we introduce an intermediate matrix B̃, supplementing B by one additional column and row as follows:

(2.2.4)  B̃ =   A_B             0
               [T_S]^i_first   0
               T_i             Δz_i
               [T_S]^i_last    0
           =   [B]^i_first    0
               T_i            Δz_i
               [B]^i_last     0.

Here the subdivision of T_S (B, respectively) into [T_S]^i_first and [T_S]^i_last ([B]^i_first and [B]^i_last, respectively) corresponds to the subdivision of the set of subscripts S into the disjoint sets L_i and U_i with respect to the position of i in S ∪ {i}. Thus

(2.2.5)  S = L_i ∪ U_i,   L_i := {j ∈ S | j < i},   U_i := {j ∈ S | j > i},

and

         [B]^i_first := ( A_B ; [T_S]^i_first ),   [B]^i_last := [T_S]^i_last.

Now let x̃, ỹ be the solutions of the equations

(2.2.6)  B̃ x̃ = b̃,   ỹ B̃ = c̃,

where

(2.2.7)  b̃ = ( b ; z_{l,j_l} (l ∈ L_i) ; z_{i,j_i} ; z_{u,j_u} (u ∈ U_i) ) = ( [b_B]^i_first ; z_{i,j_i} ; [b_B]^i_last ),
         c̃ = ( ĉ + v_q T_{q,B}, Δc̃_i ).

The vectors b_B, ĉ and Δc̃_i are given in (2.1.18), (2.1.26) and (2.2.3), respectively. The vector b_B is subdivided in accordance with the subdivision of B. Now we easily see that

(2.2.8)  x̃ = ( x_B ; λ̃^{(1)}_i ),   ỹ = ( [y']^i_first ; v^{(1)}_i ; [y']^i_last ),

where the subdivision of the vector y' follows the subdivision rule of b_B and B. Thus, to obtain the new primal and dual vectors corresponding to B', we need to compute B̃^{-1}. Similarly as in Section 2.1, we obtain B̃^{-1} in product form, provided that we are able to construct an auxiliary matrix B̃' (with a known inverse) which is obtained from B̃ by a simple column exchange. We observe that

(2.2.9)  B̃ = B̃' F̃,

where B̃' is obtained from the old working basis B if we supplement it by one additional row and column as follows:

(2.2.10)  B̃' =   A_B             0
                 [T_S]^q_first   0
                 T_q             Δz_q
                 [T_S]^q_last    0
            =    [B]^q_first    0
                 T_q            Δz_q
                 [B]^q_last     0,

and

(2.2.11)  F̃ = ( E^{(m+s)}, d_i^{(1)} ),   d_i^{(1)} = ( d_i ; −T_q d_i/Δz_q ),   where   d_i = B^{-1} ẑ_i.

The value of Δz_q is defined in (2.1.6), and the (m+s)-component vector ẑ_i is obtained from the (m+s+1)-component last column of B̃ if we delete that component which corresponds to the row T_q; this component is equal to 0. The subdivision of T_S (B) into [T_S]^q_first and [T_S]^q_last ([B]^q_first and [B]^q_last) is performed corresponding to the subdivision of S with respect to the position of q (q ∉ S). Thus

(2.2.12)  S = L_q ∪ U_q,   L_q := {j ∈ S | j < q},   U_q := {j ∈ S | j > q}.

Now, proceeding in a similar way as we did in Section 2.1 and using (2.1.20), we obtain

(2.2.13)  x̃ = ( x_B − λ̃^{(1)}_i d_i ; λ̃^{(1)}_i ),   λ̃^{(1)}_i = λ̃_q Δz_q/(−T_q d_i),

where λ̃_q and d_i are given in (2.1.6) and (2.2.11), respectively. Furthermore, from (2.1.28) we obtain that

(2.2.14)  ỹ = ( [y + K_{qi} (T_q B^{-1})^T]^q_first ; v_q − K_{qi} ; [y + K_{qi} (T_q B^{-1})^T]^q_last ),

where the subdivision of the vector y + K_{qi} (T_q B^{-1})^T follows the rule given in (2.2.12). The number K_{qi} is computed as follows: if the column of −z_{q,j_q+1} (−z_{q,j_q}) leaves the basis, then

(2.2.15)  K_{qi} = (ĉ^T d_i − Δc̃_i)/(Δz_i T_q u_i) in case (a),   K_{qi} = −(ĉ^T d_i − Δc̃_i)/(Δz_i T_q u_i) in case (b).

Here u_i := d_i/Δz_i, and ĉ, Δc̃_i are defined in (2.1.26), (2.2.3), respectively. Finally, the new value of the dual objective function is

(2.2.17)  V' = V + K_{qi} (T_q x_B − z̃_q),

where z̃_q is given in (2.1.6). Now, using the (intermediate) solutions given in (2.2.13), (2.2.14), we obtain the new primal basic solution and dual vector as follows:

(2.2.18)  x'_k = x̃_k,   k ∈ I_B,
          λ̃^{(1)}_i = λ_{q,j_q} (z_{q,j_q+1} − z_{q,j_q})/(−T_q d_i)   if −z_{q,j_q} leaves the basis,
          λ̃^{(1)}_i = λ_{q,j_q+1} (z_{q,j_q+1} − z_{q,j_q})/(T_q d_i)   if −z_{q,j_q+1} leaves the basis;

(2.2.19)  y'_l = ỹ_l,   l ∈ {1,...,m} ∪ S'  (l ≠ i),
          v^{(1)}_i = Δc̃_i/Δz_i,   w^{(1)}_i = c_{i,j_i} − z_{i,j_i} v^{(1)}_i.

The other components of the new primal and dual vectors remain the same as in the previous step.

Summary of Case (ii): Suppose a column of −z_{q,j_q} (−z_{q,j_q+1}), q ∈ Q, leaves the basis and either (a) a column of −z_{i,j_i−1} or (b) a column of −z_{i,j_i+1}, i ∈ S, enters the basis.

1.) The new primal basic solution is computed as follows:

(2.2.20)  x'_k = x_k − λ^{(1)}_{i,j_i−1} d_{ik} in case (a),   x'_k = x_k − λ^{(1)}_{i,j_i+1} d_{ik} in case (b),   k ∈ I_B;
          λ^{(1)}_{l,j_l} = (z_{l,j_l+1} − T_l x')/(z_{l,j_l+1} − z_{l,j_l}),   λ^{(1)}_{l,j_l+1} = 1 − λ^{(1)}_{l,j_l},   l ∈ Q' \ {i},

where d_{ik} are the components of the vector d_i given in (2.2.11), and

(2.2.21)  λ^{(1)}_{i,j_i} = λ̃^{(1)}_i in case (a),   −λ̃^{(1)}_i in case (b);   λ^{(1)}_{i,j_i+1} = 1 − λ^{(1)}_{i,j_i},

and λ̃^{(1)}_i is given in (2.2.18).

2.) The new dual vector corresponding to the working basis B' is computed as follows:

          (y'_l)^T = y_l^T + K_{qi} T_q B^{-1},   l ∈ {1,...,m} ∪ S',

where K_{qi} is given by (2.2.15). The components v^{(1)}_i, w^{(1)}_i are defined in (2.2.19), and the other components of v^{(1)}, w^{(1)} remain the same as in the previous step.

3.) The increase ΔV in the value of the dual objective function is

(2.2.22)  ΔV = K_{qi} (T_q x_B − z̃_q),

where z̃_q is given in (2.1.6); (2.2.22) is used in Section 3 for the determination of the largest increase in the objective function value.

Remark 2.2.1: To perform the largest step toward the optimal value we proceed as follows:

1.) For each q such that λ_{q,j_q} < 0 or λ_{q,j_q+1} < 0, let [(i,j)](q) ∈ {(i,j) | i ∈ S, j = j_i − 1 or j = j_i + 1, 1 ≤ j ≤ k_i} denote that nonbasic subscript at which the following minimum is attained:

          m(q) := min {m^+, m^−},

where

          m^+ = min_{i∈S} { c̄_{i,j_i+1} Δz_q/(Δz_i T_q u_i) : Δz_q/(Δz_i T_q u_i) < 0 },
          m^− = min_{i∈S} { c̄_{i,j_i−1} Δz_q/(Δz_i T_q u_i) : Δz_q/(Δz_i T_q u_i) < 0 }.

Here u_i := d_i/Δz_i, c̄_{i,j_i±1} := ĉ^T d_i − Δc̃_i, and d_i, Δz_i are defined in (2.2.11), (2.2.3), respectively.

2.) The pair of leaving and entering vectors which produces the largest increase in the dual objective function is given by the columns of −z_{q,j_q} and −z_{ij}, where (i,j) := [(i,j)](q), and the subscript q is determined by the equation

(2.2.23)  q = arg max_{t∈Q} { ((T_t x_B − z̃_t)/Δz_t) m(t) : (T_t x_B − z̃_t)/Δz_t > 0 },

where Δz_t and z̃_t, t ∈ Q, are defined in (2.1.6). □

The considerations of Remark 2.2.1 are used in Section 3 for the description of Step 3 of our Algorithm, where the expression for ΔV in (2.2.22) is used to determine the "optimal" pair of the leaving and entering vectors (cf. the last two lines in (3.1) and (3.7), or the last two lines in (3.3) and (3.8)).

2.3. Case (iii): delete the column of −z_{q,j_q} (−z_{q,j_q+1}) and include the column of −z_{q,j_q+2} (−z_{q,j_q−1}), q ∈ Q.

The column exchange in the qth block of the dual feasible basis B̂ changes only those components of the primal basic solution which correspond to the qth block. To obtain the new primal basic solution we need only compute the new values of {λ^{(1)}_{q,j_q}, λ^{(1)}_{q,j_q+1}}; the other basic components of {λ^{(1)}_{ij}}, as well as the basic components in x_B, remain the same. To obtain the new dual vector y' we need to compute only those components of

{v^{(1)}_i, w^{(1)}_i, i ∈ Q} which correspond to the qth block, while the other components of v^{(1)} and w^{(1)} remain unchanged. Note that Q' = Q. Let us introduce the notations

(2.3.1)  D_r := [z_{q,j_q}, z_{q,j_q+1}, z_{q,j_q+2}; c_{q·}] (z_{q,j_q+2} − z_{q,j_q}),
         D_l := [z_{q,j_q−1}, z_{q,j_q}, z_{q,j_q+1}; c_{q·}] (z_{q,j_q+1} − z_{q,j_q−1}),

where q ∈ Q and [z_{q,j_q}, z_{q,j_q+1}, z_{q,j_q+2}; c_{q·}], [z_{q,j_q−1}, z_{q,j_q}, z_{q,j_q+1}; c_{q·}] denote the second-order divided differences of the sequence c_{qj}, j = 1,...,k_q+1 (cf. (1.2)). We distinguish the following two cases:

case (a): a column of −z_{q,j_q} leaves the basis and a column of −z_{q,j_q+2} enters the basis;
case (b): a column of −z_{q,j_q+1} leaves the basis and the column of −z_{q,j_q−1} enters the basis.

Using (1.10) we can compute the new basic components {λ^{(1)}_{q,j_q}, λ^{(1)}_{q,j_q+1}} as follows:

(2.3.2)  λ^{(1)}_{q,j_q+1} = −(z_{q,j_q+1} − T_q x_B)/(z_{q,j_q+2} − z_{q,j_q+1})   in case (a),
         λ^{(1)}_{q,j_q+1} = (T_q x_B − z_{q,j_q−1})/(z_{q,j_q} − z_{q,j_q−1})     in case (b);
         λ^{(1)}_{q,j_q} = 1 − λ^{(1)}_{q,j_q+1}.

All the other components of {λ^{(1)}_{ij}} remain unchanged. From (1.8) we derive

         v_q = −(c_{q,j_q+1} − c_{q,j_q})/(z_{q,j_q+1} − z_{q,j_q}) = −[z_{q,j_q}, z_{q,j_q+1}; c_{q·}],
         v^{(1)}_q = −[z_{q,j_q+1}, z_{q,j_q+2}; c_{q·}]   in case (a),
         v^{(1)}_q = −[z_{q,j_q−1}, z_{q,j_q}; c_{q·}]     in case (b).

Using the notations (2.3.1) we obtain

(2.3.3)  v^{(1)}_q = v_q − D_r   in case (a),
         v^{(1)}_q = v_q + D_l   in case (b).

Again, using the equations (1.8), we have

         w_q = c_{q,j_q+1} + z_{q,j_q+1} v_q,   w^{(1)}_q = c_{q,j_q+1} + z_{q,j_q+1} v^{(1)}_q.

If we subtract the second equation from the first one we obtain

(2.3.4)  w^{(1)}_q = w_q − D_r z_{q,j_q+1}   in case (a),
         w^{(1)}_q = w_q + D_l z_{q,j_q}     in case (b).

All the other components of v^{(1)}, w^{(1)} remain unchanged. Now, using (1.11) and (2.3.3), the new dual vector can be computed as follows:

(2.3.5)  (y')^T = y^T + D_r T_q B^{-1}   in case (a),
         (y')^T = y^T − D_l T_q B^{-1}   in case (b).

Finally, for the new value of the dual objective function we obtain

(2.3.6)    V^{(1)} := { V − D_r (z_{q,j_q+1} − T_q x_B)   in case (a),
                        V − D_l (T_q x_B − z_{q,j_q})     in case (b), }

where V = b^T y + Σ_{i ∈ Q_1} w_i. If we use the expressions for the reduced costs (cf. [6]):

    \bar c_{q,j_q−1} := −D_l (z_{q,j_q} − z_{q,j_q−1}),    \bar c_{q,j_q+2} := −D_r (z_{q,j_q+2} − z_{q,j_q+1}),

then the new value of the dual objective function can be computed as follows:

(2.3.7)    V^{(1)} = { V + \bar c_{q,j_q+2} (z_{q,j_q+1} − T_q x_B) / (z_{q,j_q+2} − z_{q,j_q+1}) = V − \bar c_{q,j_q+2} λ^{(1)}_{q,j_q+1}   in case (a),
                       V + \bar c_{q,j_q−1} (−(z_{q,j_q} − T_q x_B)) / (z_{q,j_q} − z_{q,j_q−1}) = V − \bar c_{q,j_q−1} λ^{(1)}_{q,j_q}     in case (b). }

Summary of Case (iii):

1.) The components of the primal basic solution {x_B; λ^{(1)}_{i,j_i}, i ∈ S; λ^{(1)}_{l,j_l}, l ∈ Q} remain the same in the next step, with the exception of the components λ^{(1)}_{q,j_q}, λ^{(1)}_{q,j_q+1}, which correspond to the qth block and are given by (2.3.2).
2.) Those components of the dual vector y which correspond to the new working basis are given by (2.3.5), and those which correspond to the qth block are given by (2.3.3), (2.3.4). All the other components remain the same.
3.) The new value of the dual objective function V^{(1)} is presented in (2.3.7).

Remark 2.3.1: In case (a) the qth component of v decreases during the iteration; furthermore, if z_{q,j_q+1} > 0, then w_q decreases too, but if z_{q,j_q+1} < 0, then w_q increases. In case (b) the qth component of v increases during the iteration; furthermore, if z_{q,j_q} > 0, then w_q increases too; if z_{q,j_q} < 0, then w_q decreases. In fact, by the convexity of the discrete function c_i we have that D_r, D_l, given in (2.3.1), are positive, and this implies:

    v_q^{(1)} < v_q   in case (a),    v_q^{(1)} > v_q   in case (b).

The assertion concerning w_q can be proved similarly. Note: this above mentioned property of the components of the dual vector is helpful to select the "optimal" initial pairs of the basic vectors in the last r blocks of the matrix (1.5).
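The "largest step" selection described in Remark 2.2.1 has the same shape in every case of the method: each dual-feasible leaving candidate t allows a step of length m(t), and the IDTM picks the candidate whose product (primal infeasibility) × (step length) — the predicted increase of the dual objective — is largest. A schematic sketch (not the paper's code; `infeasibility` and `min_ratio` are hypothetical callables standing for the quantities T_t x_B − z̃_t and m(t) of the text):

```python
def largest_step_choice(candidates, infeasibility, min_ratio):
    """Return (best_candidate, gain) maximizing infeasibility(t) * min_ratio(t).

    The standard dual method would accept any candidate with positive
    infeasibility; the IDTM scans them all and takes the argmax of the
    predicted dual-objective increase, as in (2.2.23).
    """
    best, best_gain = None, float("-inf")
    for t in candidates:
        gain = infeasibility(t) * min_ratio(t)  # predicted Delta V for t
        if gain > best_gain:
            best, best_gain = t, gain
    return best, best_gain
```

The extra cost over the standard dual method is one ratio test per infeasible candidate, which is what the product-form updates of Section 2 are meant to keep cheap.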

Remark 2.3.2: To perform the largest step toward the optimal value we proceed similarly as in Remarks 2.1.1 and 2.2.1. The expression (2.3.7) is used in the description of the Algorithm in Section 3 to determine the "optimal" pair of the leaving and entering vectors: compare (2.3.7) with m_1(q), M_a in (3.1), (3.7), and m_5(q), M_b in (3.3), (3.8).

Case (iv): delete a column from the 0th block and include a column of −z_{i,j_i+1} (−z_{i,j_i−1}), i ∈ S. In this case we delete that column which intersects A at a_p and include

case (a): the column of −z_{i,j_i+1}, i ∈ S;
case (b): the column of −z_{i,j_i−1}, i ∈ S.

The new working basis B_1 has one column and one row less than the old working basis B, and we have an additional block in B̂ containing a consecutive pair of vectors, i.e.:

(2.4.1)    I_1 = I \ {p},    S_1 = S \ {i},    Q_1 = Q ∪ {i}.

Similarly as we have done in Section 2.1, we construct an auxiliary matrix B̃ (the inverse of which can be computed in product form), so that the inverse of B_1 can be derived from the inverse of B̃ by simple operations. Let us introduce the following notations:

(2.4.2)    z_i = { z_{i,j_i} − z_{i,j_i+1}   in case (a),
                   z_{i,j_i} − z_{i,j_i−1}   in case (b), }

           z̃_i^T = ( 0 ... 0  z_i  0 ... 0 )   (m + (i−1) zeros before z_i and s − i zeros after it),

           c_i = { c_{i,j_i+1} − c_{i,j_i}   in case (a),
                   c_{i,j_i−1} − c_{i,j_i}   in case (b). }

We construct the auxiliary matrix B̃ by substituting the column z̃_i (given in (2.4.2)) for the column (a_p^T, t_{lp} (l ∈ S))^T in the old working basis B. The only non-zero element of z̃_i is placed in the position of the element t_{ip}, i ∈ S. The corresponding row T_i will leave B_1, since the corresponding block i will contain a consecutive pair of vectors and, consequently, it does not anymore belong to the 0th block in the new dual feasible basis B̂_1. For example, if the

column of a_p and the row T_i are the last ones, our auxiliary matrix B̃ has the following form:

(2.4.3)    B̃ = ( B_1   0
                 T_i   z_i ).

It is easy to see that we obtain the new working basis B_1 from our auxiliary matrix B̃ by simply deleting the ith row and the new column z̃_i. Then, using Lemma 1, we obtain the inverse of the new working basis B_1 by excluding the pth row and ith column from B̃^{−1}. Now, let us compute B̃^{−1}. For that we subdivide the set of the basic subscripts I_B, given in (2.1.2), as follows:

(2.4.4)    I_B = L_p ∪ {p} ∪ U_p,    L_p := {i_j ∈ I_B | i_j < p},    U_p := {i_j ∈ I_B | i_j > p}.

Let E_p^first, E_p^last be the matrices consisting of the (m+s)-component unit vectors with subscripts in L_p, U_p, respectively, and introduce the following matrix:

(2.4.5)    F = (E_p^first, d, E_p^last).

The (m+s)-component vector d is placed in the position p of the outgoing column in B. If u_i designates the ith column of B^{−1}, then we have the equation

(2.4.6)    d = B^{−1} z̃_i = u_i z_i.

It follows that B̃ = B F, B̃^{−1} = F^{−1} B^{−1}, where

(2.4.7)    F^{−1} = ( E_p^first,  η,  E_p^last ),    η = ( −u_{i,1}/u_{ip}, ..., 1/(u_{ip} z_i), ..., −u_{i,m+s}/u_{ip} )^T,

the entry 1/(u_{ip} z_i) standing in position p, and u_{ik} (k ∈ I_B) denotes the kth component of the vector u_i. Let x̃ be the solution of B̃ x̃ = b_B, where b_B is defined in (2.1.8). Then we have that

(2.4.8)    x̃ = F^{−1} B^{−1} b_B = F^{−1} x_B,

where x_B is the old basic solution. Now, using (2.4.7), we obtain

(2.4.9)    x̃_k = x_k − (u_{ik}/u_{ip}) x_p   for k ∈ I_B, k ≠ p;    x̃_p = x_p / (u_{ip} z_i),

where x_j, x̃_j (j ∈ I_B) denote the jth components of the corresponding vectors. Now, we observe that the components of the primal basic solution corresponding to B_1 (the solution of the equation (2.1.8)), as well as those components of the primal basic solution which correspond to the last r blocks and are going to be changed during the present iteration, can immediately be determined through the components of our auxiliary solution as follows:

(2.4.10)    x_k^{(1)} = x̃_k for all k ∈ I_1;    λ^{(1)}_{i,j_i±1} = x̃_p,    λ^{(1)}_{i,j_i} = 1 − λ^{(1)}_{i,j_i±1}

(here + or − is chosen depending on the incoming vector). The other components of the primal basic solution remain the same. To obtain the new dual vector, we have to solve equation (2.1.24) with the new working basis B_1 and corresponding I_1, S_1 and Q_1. First we compute the (auxiliary) solution ỹ of the following equation:

(2.4.11)    ỹ^T B̃ = c̃^T,    where c̃ = ( [ĉ_B]_p^first, c_i, [ĉ_B]_p^last ).

We obtain

(2.4.12)    ỹ^T = c̃^T B̃^{−1} = c̃^T F^{−1} B^{−1}.

Now, using the expression for F^{−1} given in (2.4.7), we derive

(2.4.13)    ỹ^T = ( [ĉ_B]_p^first,  ĉ_p + (c_i − ĉ_B^T d)/d_p,  [ĉ_B]_p^last ) B^{−1},

where the vector d is defined in (2.4.6) and d_p is the pth component of the vector d. Since we have

    y^T = ( [ĉ_B]_p^first,  ĉ_p,  [ĉ_B]_p^last ) B^{−1},

it follows that

(2.4.14)    ỹ^T = y^T + ((c_i − ĉ_B^T d)/d_p) g_p,

where g_p denotes the pth row of B^{−1}. Now we can easily compute our new dual vector by the use of ỹ. Taking into account the shape of the matrix B̃^{−1}, and using (2.4.2), we obtain that

(2.4.15)    y_k^{(1)} = ỹ_k − c_i g̃_{pk}   for all k ∈ {1, ..., m} ∪ S_1 (k ≠ i),
            v_i^{(1)} = ỹ_i,    w_i^{(1)} = c_{i,j_i} + z_{i,j_i} v_i^{(1)},

where g̃_{pk} is the kth element of the pth row of B̃^{−1}, and c_i is given in (2.4.2). Using (2.4.6), we can compute the components of the pth row of B̃^{−1}:

(2.4.16)    g̃_{pk} = g_{pk} / (u_{ip} z_i)   for all k ∈ {1, ..., m} ∪ S (k ≠ i).

The increase of the dual objective function value is:

(2.4.17)    ΔV := V^{(1)} − V = (b^T y^{(1)} + z_{i,j_i} v_i^{(1)}) − b^T y = b_B^T (ỹ − y) = ((c_i − ĉ_B^T d)/d_p) g_p b_B = ((c_i − ĉ_B^T d)/d_p) x_p,

where b, b_B, ĉ_B^T and d are given in (1.1), (2.1.8), (2.1.26) and (2.4.6), respectively. Now, using the following expression for the reduced costs

(2.4.18)    \bar c_{i,j_i±1} = ĉ_B^T d − c_i   (+ or − is chosen depending on the incoming vector)

and substituting (2.4.6) for d, we can write:

(2.4.19)    ΔV = − (\bar c_{i,j_i±1} / (z_i u_{ip})) x_p.

Summary of Case (iv): Suppose the column of a_p from the 0th block leaves the basis, and we include

case (a): a column of −z_{i,j_i+1}, i ∈ S;
case (b): a column of −z_{i,j_i−1}, i ∈ S.

Then we have the following results:

1.) The components of the new primal solution, corresponding to the new working basis B_1 and to the ith block of the new basis B̂_1, are given by (2.4.10). The rest of the basic components of {λ_{ij}} are:

    λ^{(1)}_{k,j_k} = (z_{k,j_k+1} − T_k x^{(1)}) / (z_{k,j_k+1} − z_{k,j_k}),    λ^{(1)}_{k,j_k+1} = 1 − λ^{(1)}_{k,j_k}    (k ∈ Q_1 \ {i}).

2.) The components of the dual vector, corresponding to B_1 and to the ith block, are given by (2.4.15). The rest of the components of the new dual vector remain the same.
3.) The increase in the value of the dual objective function, which is used to determine the largest step toward the optimum, is given by (2.4.19).

Remark 2.4.1: To perform the largest step toward the optimal value we proceed similarly as it is described in Remarks 2.1.1 and 2.2.1. The expression (2.4.19) is used in the description of Step 3 of the Algorithm in Section 3 to determine the "optimal" pair of the leaving and entering vectors: compare (2.4.19) with m_10(k), m_11(k) and M_c in (3.5) and (3.9).
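Since F^{−1} in (2.4.7) differs from the identity only in its pth column (an "eta" column, as in the standard product-form-of-the-inverse update), the auxiliary solution x̃ = F^{−1} x_B of (2.4.8)–(2.4.9) is a cheap componentwise update rather than a full inversion. A minimal list-based sketch with assumed 0-based indexing (illustrative only, not the LINX code):

```python
def apply_eta_inverse(x, u_i, p, z_i):
    """Compute x~ = F^{-1} x componentwise, as in (2.4.9).

    x    : old basic solution x_B (list of length m+s)
    u_i  : the ith column of B^{-1}, whose p-th entry u_i[p] is the pivot
    p    : position of the outgoing column; u_i[p] * z_i must be nonzero
    z_i  : the scalar z_i of (2.4.2)
    """
    piv = u_i[p] * z_i
    # off-pivot components: x~_k = x_k - (u_{ik}/u_{ip}) x_p
    xt = [xk - (uik / u_i[p]) * x[p] for xk, uik in zip(x, u_i)]
    # pivot component: x~_p = x_p / (u_{ip} z_i)
    xt[p] = x[p] / piv
    return xt
```

Storing only (p, u_i, z_i) per iteration is what makes the product-form representation of the working-basis inverse possible, exactly as in the standard revised simplex method.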

Case (v): delete a column from the 0th block and include a nonbasic column from the same block. The considerations are similar to those in the previous case, hence it is enough to summarize the results.

Summary of Case (v): Suppose the column of a_ℓ from the 0th block leaves the basis and a nonbasic column a_p from the 0th block enters the basis. Then we have the following results:

1.) The new basic components out of x_1, ..., x_n are:

    x_k^{(1)} = x_k − (d_{pk}/d_{pℓ}) x_ℓ   for k ∈ I_1 (k ≠ p),    x_p^{(1)} = x_ℓ / d_{pℓ},

where

(2.5.1)    I_1 = (I ∪ {p}) \ {ℓ},    d_p = B^{−1} ( a_p
                                                    t_{Sp} ).

2.) The new basic λ-components are:

(2.5.2)    λ^{(1)}_{i,j_i+1} = (T_i x^{(1)} − z_{i,j_i}) / (z_{i,j_i+1} − z_{i,j_i}),    λ^{(1)}_{i,j_i} = 1 − λ^{(1)}_{i,j_i+1},    i ∈ Q_1,

where Q_1 := Q.

3.) The new dual vector y^{(1)} is:

(2.5.3)    y^{(1)T} = y^T − (\bar c_p / d_{pℓ}) g_ℓ,

where \bar c_p denotes the reduced cost (cf. (2.1.27)) and g_ℓ denotes the ℓth row of the inverse B^{−1}; the other components of the dual solution remain the same.

4.) The increase in the value of the dual objective function is:

(2.5.4)    ΔV = − (\bar c_p / d_{pℓ}) x_ℓ.

Remark 2.5.1: To perform the largest step toward the optimum we proceed similarly as mentioned in Remarks 2.1.1 and 2.2.1. The expression (2.5.4) is used in Section 3 in the description of Step 3 of the Algorithm to determine the "optimal" pair of the leaving and entering vectors: compare (2.5.4) with the second line in (3.5) and (3.9).
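Case (v) is the classical dual simplex column exchange inside the working basis. A small sketch of the primal update and of the objective change (2.5.4), using assumed dict-based bookkeeping for the basic components (the function and parameter names are invented for illustration):

```python
def case_v_update(x, d, p, ell, c_bar_p):
    """Pivot of Case (v): column a_ell leaves, nonbasic a_p enters.

    x       : dict index -> basic component x_k
    d       : dict index -> component d_{pk} of d_p = B^{-1}(a_p; t_{Sp})
    p, ell  : entering / leaving column indices (d[ell] must be nonzero)
    c_bar_p : reduced cost of the entering column (cf. (2.1.27))
    Returns the updated basic components and the change (2.5.4).
    """
    ratio = x[ell] / d[ell]                       # x_p^{(1)} = x_ell / d_{p,ell}
    x_new = {k: xk - d[k] * ratio for k, xk in x.items() if k != ell}
    x_new[p] = ratio
    delta_V = -c_bar_p / d[ell] * x[ell]          # dual-objective change (2.5.4)
    return x_new, delta_V
```

This is exactly the transformation whose inverse is recorded as one eta column in the product form, so no fresh inversion of the working basis is needed.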

27 RRR Page An Algorithm First we introduce the following notations: I := f ::: ngni ^I := fk 2 I j x k < g: and for q 2 Q j q k q : Q () := f(q j q ) j qjq < g Q (2) := f(q j q +)j qjq + < g P q () := fp 2 I j t qp ; T q d p < g P q (2) := fp 2 I j T q d p ; t qp < g S q () := fi 2 S j T q u i > g S q (2) := fi 2 S j T q u i < g:! a Here d p = ; p and u t i is the ith column of ;. The subscripts of the nonbasic Sp columns of the matrix (.5) are: I f(q j) j j 6= j q j q + q2 Qg f(i j) j j 6= j i i2 Sg: 2 In the Algorithm we distinguish three cases: the leaving vector has the subscript case (a): (q j q ) 2 Q () case (b): (q j q +)2 Q (2) case (c): k 2 ^I : In case (a) we introduce the notations: (3:) m a (q) = minfm (q) m 2 (q) m 3 (q) m 4 (q)g m (q) := ;c qjq +2 z qjq +2 ;z qjq + c p m 2 (q) := min p2p q () t qp ; T q d ( p ) c iji ; m 3 (q) := min ; i2s q () j i > (z iji ; z iji ;)T q u i m 4 (q) := i2s (2) q min j i <k i ( ) c iji + (z iji + ; z iji )T q u i : where Moreover, we denote by (q) that element of the set of nonbasic subscripts for which (3:2) (q) =arg 8 minfm (q) m 2 (q) m 3 (q) m 4 (q)g i.e., (q j q +2) if m a (q) =m (q) >< fp j p 2 P q (q) = () g if m a (q) =m 2 (q) f(i j i ; ) j i 2 S q >: () g if m a (q) =m 3 (q) f(i j i +)j i 2 S q (2) g if m a (q) =m 4 (q)

In case (b) we define:

(3.3)    m_b(q) = min{m_5(q), m_6(q), m_7(q), m_8(q)},

where

    m_5(q) := −\bar c_{q,j_q−1} / (z_{q,j_q} − z_{q,j_q−1}),
    m_6(q) := min_{p ∈ P_q^(2)} { −\bar c_p / (t_{qp} − T_q d_p) },
    m_7(q) := min_{i ∈ S_q^(2), j_i > 0} { \bar c_{i,j_i−1} / [(z_{i,j_i} − z_{i,j_i−1}) T_q u_i] },
    m_8(q) := min_{i ∈ S_q^(1), j_i < k_i} { −\bar c_{i,j_i+1} / [(z_{i,j_i+1} − z_{i,j_i}) T_q u_i] }.

Moreover, we denote by β(q) that element of the set of nonbasic subscripts for which

(3.4)    β(q) = arg min{m_5(q), m_6(q), m_7(q), m_8(q)},

i.e.,

    β(q) = { (q, j_q−1)                      if m_b(q) = m_5(q),
             {p | p ∈ P_q^(2)}               if m_b(q) = m_6(q),
             {(i, j_i−1) | i ∈ S_q^(2)}      if m_b(q) = m_7(q),
             {(i, j_i+1) | i ∈ S_q^(1)}      if m_b(q) = m_8(q) }

(β(q) gives the subscript of the entering vector which is determined according to the DM).

Finally, in case (c) we define:

(3.5)    m_c(k) = min{m_9(k), m_10(k), m_11(k)},

where

    m_9(k)  := min_{p ∈ I_0, d_{pk} < 0} { \bar c_p / d_{pk} },
    m_10(k) := min_{i ∈ S, j_i > 0, u_{ik} < 0} { \bar c_{i,j_i−1} / [(z_{i,j_i} − z_{i,j_i−1}) u_{ik}] },
    m_11(k) := min_{i ∈ S, j_i < k_i, u_{ik} > 0} { −\bar c_{i,j_i+1} / [(z_{i,j_i+1} − z_{i,j_i}) u_{ik}] }.

Here x_k, d_{pk}, u_{ik} denote the kth components of the corresponding vectors. Now, we denote by γ(k) that element of the set of nonbasic subscripts for which

(3.6)    γ(k) = arg min{m_9(k), m_10(k), m_11(k)},

i.e.,

    γ(k) = { {p | p ∈ I_0, d_{pk} < 0}        if m_c(k) = m_9(k),
             {(i, j_i−1) | i ∈ S, u_{ik} < 0} if m_c(k) = m_10(k),
             {(i, j_i+1) | i ∈ S, u_{ik} > 0} if m_c(k) = m_11(k) }

(γ(k) gives the subscript of the entering vector which is determined according to the DM). The considerations of the previous section suggest the following Algorithm IDTM:

Step 1. Determine an initial dual feasible basis and the corresponding basic solution: carry out the algorithm described in Section 1.

Step 2. Test for primal feasibility: If x ≥ 0 and λ_{i,j_i} ≥ 0, λ_{i,j_i+1} ≥ 0, i = 1, ..., r, then STOP. Otherwise, go to Step 3.

Step 3. Simultaneous selection of the leaving and entering vectors:

3.1. For all {(q, j_q) | (q, j_q) ∈ Q^(1)} compute

(3.7)    M_a = max{(T_q x_B − z_{q,j_q+1}) m_a(q)},

where m_a(q) is computed according to (3.1).

3.2. For all {(q, j_q+1) | (q, j_q+1) ∈ Q^(2)} compute

(3.8)    M_b = max{(z_{q,j_q} − T_q x_B) m_b(q)},

where m_b(q) is computed according to (3.3).

3.3. For all {k | k ∈ Î} compute

(3.9)    M_c = max{−x_k m_c(k)},

where m_c(k) is computed according to (3.5).

3.4. Compute M = max{M_a, M_b, M_c}.

3.5. Finally, select the leaving and the entering vectors as described below.

3.5.(a) If M is equal to M_a and the maximum in (3.7) is attained at q, then the pair of leaving and entering vectors which produces the largest step toward the optimal value is either given by the vectors of −z_{q,j_q}, −z_{α(q)}, or by the vector of −z_{q,j_q} and the column from the 0th block with the subscript α(q); α(q) is determined in (3.2). The new basic solution is defined
– if m_a(q) = m_1(q): as described in the Summary of Case (iii);
– if m_a(q) = m_2(q): as described in the Summary of Case (i);
– if m_a(q) = m_3(q) or m_a(q) = m_4(q): as described in the Summary of Case (ii).

3.5.(b) If M is equal to M_b and the maximum in (3.8) is attained at q, then the pair of leaving and entering vectors which produces the largest step toward the optimal value is either given by the vectors of −z_{q,j_q+1}, −z_{β(q)}, or by the vector of −z_{q,j_q+1} and the column from the 0th block with the subscript β(q); β(q) is determined in (3.4). The new basic solution is defined
– if m_b(q) = m_5(q): as described in the Summary of Case (iii);
– if m_b(q) = m_6(q): as described in the Summary of Case (i);
– if m_b(q) = m_7(q) or m_b(q) = m_8(q): as described in the Summary of Case (ii).

3.5.(c) If M is equal to M_c and the maximum in (3.9) is attained at k, then the pair of leaving and entering vectors which produces the largest step toward the optimal value is given either by the columns of a_k and a_{γ(k)} from the 0th block, or by the column of a_k from the 0th block and the column of −z_{γ(k)} out of the last r blocks; the subscript γ(k) is given in (3.6). The new basic solution is defined
– if m_c(k) = m_9(k): as described in the Summary of Case (v);
– if m_c(k) = m_10(k) or m_c(k) = m_11(k): as described in the Summary of Case (iv).
Go to Step 2.

4. Implementation

The implementation of IDTM is based on C. I. Fábián's general purpose optimization routine library called LINX. It is a tool for the interactive solution of linear programming problems. Interactivity here means that this routine collection supports the solution of a series of LP problems, where problems can be generated successively using the experience gained from the solutions of the former ones. LINX was written in C, so it is portable: it can be compiled without modification under MS-DOS, OS/2, and different UNIX operating systems, including ULTRIX for DEC and SOLARIS for SUN workstations.

LINX consists of an LP solver "engine" and a memory handling "hull". The solver is an implementation of the two-phase revised simplex method with the product form of the inverse. Both the primal and the dual methods are implemented. The data can be scaled before optimization to decrease differences between magnitudes. Tolerances for comparing floating-point numbers are used adaptively, following the suggestions of Maros (see [5]). Unstable transformations are avoided. Small pivots are rejected even at the price of creating small new infeasibilities. This pivot-selection rule is due to Harris (see [1]). Multiple pricing is used, so candidate vectors can also be rejected at little cost. Precision is checked from time to time.
If round-off errors exceed a certain limit (or if the array describing the transformations performed so far is too long), then a reinversion process is started. It is a stabilized version of Hellerman and Rarick's Preassigned Pivot Procedure (see [2]). The original method is unstable; the version used here has proved effective over 4 years of experience. The speed of computation was the next objective after reliability. A CRASH procedure is available to break initial degeneracy and speed up optimization. A practical implementation of the steepest-edge method can also be used optionally. Special sort and search methods accelerate vector and pivot selection. The memory handling hull ensures that computations of former problems are available when required. If a problem is derived from a former one by


Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

1 date: February 23, 1998 le: papar1. coecient pivoting rule. a particular form of the simplex algorithm.

1 date: February 23, 1998 le: papar1. coecient pivoting rule. a particular form of the simplex algorithm. 1 date: February 23, 1998 le: papar1 KLEE - MINTY EAMPLES FOR (LP) Abstract : The problem of determining the worst case behavior of the simplex algorithm remained an outstanding open problem for more than

More information

9.1 Linear Programs in canonical form

9.1 Linear Programs in canonical form 9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems

More information

An Enhanced Piecewise Linear Dual Phase-1 Algorithm for the Simplex Method

An Enhanced Piecewise Linear Dual Phase-1 Algorithm for the Simplex Method An Enhanced Piecewise Linear Dual Phase-1 Algorithm for the Simplex Method István Maros Department of Computing, Imperial College, London Email: i.maros@ic.ac.uk Departmental Technical Report 2002/15 ISSN

More information

(includes both Phases I & II)

(includes both Phases I & II) (includes both Phases I & II) Dennis ricker Dept of Mechanical & Industrial Engineering The University of Iowa Revised Simplex Method 09/23/04 page 1 of 22 Minimize z=3x + 5x + 4x + 7x + 5x + 4x subject

More information

Probabilities of Boolean. Functions of Events

Probabilities of Boolean. Functions of Events R utcor Research R eport Lower and Upper Bounds on Probabilities of Boolean Functions of Events Andras Prekopa a Bela Vizvari b Gabor Reg}os c RRR 36-95, September 1995. Revised May 1996 RUTCOR Rutgers

More information

1 Review Session. 1.1 Lecture 2

1 Review Session. 1.1 Lecture 2 1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions

More information

The simplex algorithm

The simplex algorithm The simplex algorithm The simplex algorithm is the classical method for solving linear programs. Its running time is not polynomial in the worst case. It does yield insight into linear programs, however,

More information

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3.

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3. Chapter 3 SEPARATION PROPERTIES, PRINCIPAL PIVOT TRANSFORMS, CLASSES OF MATRICES In this chapter we present the basic mathematical results on the LCP. Many of these results are used in later chapters to

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive 3.4 Anticycling Lexicographic order In this section we discuss two pivoting rules that are guaranteed to avoid cycling. These are the lexicographic rule and Bland s rule. Definition A vector u R n is lexicographically

More information

F 1 F 2 Daily Requirement Cost N N N

F 1 F 2 Daily Requirement Cost N N N Chapter 5 DUALITY 5. The Dual Problems Every linear programming problem has associated with it another linear programming problem and that the two problems have such a close relationship that whenever

More information

Chap6 Duality Theory and Sensitivity Analysis

Chap6 Duality Theory and Sensitivity Analysis Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we

More information

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20.

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20. Extra Problems for Chapter 3. Linear Programming Methods 20. (Big-M Method) An alternative to the two-phase method of finding an initial basic feasible solution by minimizing the sum of the artificial

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Novel update techniques for the revised simplex method (and their application)

Novel update techniques for the revised simplex method (and their application) Novel update techniques for the revised simplex method (and their application) Qi Huangfu 1 Julian Hall 2 Others 1 FICO 2 School of Mathematics, University of Edinburgh ERGO 30 November 2016 Overview Background

More information

Transportation Problem

Transportation Problem Transportation Problem Alireza Ghaffari-Hadigheh Azarbaijan Shahid Madani University (ASMU) hadigheha@azaruniv.edu Spring 2017 Alireza Ghaffari-Hadigheh (ASMU) Transportation Problem Spring 2017 1 / 34

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 3: Linear Programming, Continued Prof. John Gunnar Carlsson September 15, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 15, 2010

More information

(includes both Phases I & II)

(includes both Phases I & II) Minimize z=3x 5x 4x 7x 5x 4x subject to 2x x2 x4 3x6 0 x 3x3 x4 3x5 2x6 2 4x2 2x3 3x4 x5 5 and x 0 j, 6 2 3 4 5 6 j ecause of the lack of a slack variable in each constraint, we must use Phase I to find

More information

1 Simplex and Matrices

1 Simplex and Matrices 1 Simplex and Matrices We will begin with a review of matrix multiplication. A matrix is simply an array of numbers. If a given array has m rows and n columns, then it is called an m n (or m-by-n) matrix.

More information

DM545 Linear and Integer Programming. Lecture 7 Revised Simplex Method. Marco Chiarandini

DM545 Linear and Integer Programming. Lecture 7 Revised Simplex Method. Marco Chiarandini DM545 Linear and Integer Programming Lecture 7 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 2 Motivation Complexity of single pivot operation

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

(P ) Minimize 4x 1 + 6x 2 + 5x 3 s.t. 2x 1 3x 3 3 3x 2 2x 3 6

(P ) Minimize 4x 1 + 6x 2 + 5x 3 s.t. 2x 1 3x 3 3 3x 2 2x 3 6 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. Problem 1 Consider

More information

Revised Simplex Method

Revised Simplex Method DM545 Linear and Integer Programming Lecture 7 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 2 Motivation Complexity of single pivot operation

More information

A CONVEXITY THEOREM IN PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS

A CONVEXITY THEOREM IN PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS R u t c o r Research R e p o r t A CONVEXITY THEOREM IN PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS András Prékopa a Mine Subasi b RRR 32-2007, December 2007 RUTCOR Rutgers Center for Operations Research

More information

Simplex tableau CE 377K. April 2, 2015

Simplex tableau CE 377K. April 2, 2015 CE 377K April 2, 2015 Review Reduced costs Basic and nonbasic variables OUTLINE Review by example: simplex method demonstration Outline Example You own a small firm producing construction materials for

More information

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 22th June 2005 Chapter 8: Finite Termination Recap On Monday, we established In the absence of degeneracy, the simplex method will

More information

LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP

LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP 1 / 23 Repetition the simplex algorithm: sequence of pivots starting

More information

Relation of Pure Minimum Cost Flow Model to Linear Programming

Relation of Pure Minimum Cost Flow Model to Linear Programming Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

Simplex Method: More Details

Simplex Method: More Details Chapter 4 Simplex Method: More Details 4. How to Start Simplex? Two Phases Our simplex algorithm is constructed for solving the following standard form of linear programs: min c T x s.t. Ax = b x 0 where

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

MATH 4211/6211 Optimization Linear Programming

MATH 4211/6211 Optimization Linear Programming MATH 4211/6211 Optimization Linear Programming Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 The standard form of a Linear

More information

MATH 445/545 Test 1 Spring 2016

MATH 445/545 Test 1 Spring 2016 MATH 445/545 Test Spring 06 Note the problems are separated into two sections a set for all students and an additional set for those taking the course at the 545 level. Please read and follow all of these

More information

R u t c o r Research R e p o r t. The Optimization of the Move of Robot Arm by Benders Decomposition. Zsolt Robotka a RRR , DECEMBER 2005

R u t c o r Research R e p o r t. The Optimization of the Move of Robot Arm by Benders Decomposition. Zsolt Robotka a RRR , DECEMBER 2005 R u t c o r Research R e p o r t The Optimization of the Move of Robot Arm by Benders Decomposition Zsolt Robotka a Béla Vizvári b RRR 43-2005, DECEMBER 2005 RUTCOR Rutgers Center for Operations Research

More information

Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints

Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints Transaction E: Industrial Engineering Vol. 16, No. 1, pp. 1{10 c Sharif University of Technology, June 2009 Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints Abstract. K.

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

Linear programs Optimization Geoff Gordon Ryan Tibshirani

Linear programs Optimization Geoff Gordon Ryan Tibshirani Linear programs 10-725 Optimization Geoff Gordon Ryan Tibshirani Review: LPs LPs: m constraints, n vars A: R m n b: R m c: R n x: R n ineq form [min or max] c T x s.t. Ax b m n std form [min or max] c

More information

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5,

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5, Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x

More information

Operations Research Lecture 6: Integer Programming

Operations Research Lecture 6: Integer Programming Operations Research Lecture 6: Integer Programming Notes taken by Kaiquan Xu@Business School, Nanjing University May 12th 2016 1 Integer programming (IP) formulations The integer programming (IP) is the

More information

Method of Multivariate Lagrange Interpolation for Generating Bivariate Bonferroni-Type Inequalities

Method of Multivariate Lagrange Interpolation for Generating Bivariate Bonferroni-Type Inequalities R u t c o r Research R e p o r t Method of Multivariate Lagrange Interpolation for Generating Bivariate Bonferroni-Type Inequalities Gergely Mádi-Nagy a András Prékopa b RRR 10-2009, June 2009 RUTCOR Rutgers

More information

The s-monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems

The s-monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems The s-monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems Zsolt Csizmadia and Tibor Illés and Adrienn Nagy February 24, 213 Abstract In this paper we introduce the

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Fundamental Theorems of Optimization

Fundamental Theorems of Optimization Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Solving LP and MIP Models with Piecewise Linear Objective Functions

Solving LP and MIP Models with Piecewise Linear Objective Functions Solving LP and MIP Models with Piecewise Linear Obective Functions Zonghao Gu Gurobi Optimization Inc. Columbus, July 23, 2014 Overview } Introduction } Piecewise linear (PWL) function Convex and convex

More information

TIM 206 Lecture 3: The Simplex Method

TIM 206 Lecture 3: The Simplex Method TIM 206 Lecture 3: The Simplex Method Kevin Ross. Scribe: Shane Brennan (2006) September 29, 2011 1 Basic Feasible Solutions Have equation Ax = b contain more columns (variables) than rows (constraints),

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

Chapter 33 MSMYM1 Mathematical Linear Programming

Chapter 33 MSMYM1 Mathematical Linear Programming Chapter 33 MSMYM1 Mathematical Linear Programming 33.1 The Simplex Algorithm The Simplex method for solving linear programming problems has already been covered in Chapter??. A given problem may always

More information

Optimization (168) Lecture 7-8-9

Optimization (168) Lecture 7-8-9 Optimization (168) Lecture 7-8-9 Jesús De Loera UC Davis, Mathematics Wednesday, April 2, 2012 1 DEGENERACY IN THE SIMPLEX METHOD 2 DEGENERACY z =2x 1 x 2 + 8x 3 x 4 =1 2x 3 x 5 =3 2x 1 + 4x 2 6x 3 x 6

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information