
Yugoslav Journal of Operations Research 11 (2001)

A NEW AND CONSTRUCTIVE PROOF OF TWO BASIC RESULTS OF LINEAR PROGRAMMING*

Tibor ILLÉS
Department of Operations Research
Eötvös Loránd University of Sciences, Budapest, Hungary

Katalin MÉSZÁROS
Faculty of Economics, University of Novi Sad, Subotica, Yugoslavia

Abstract: In this paper a new, elementary and constructive proof of Farkas' lemma is given. The basic idea of the proof is extended to derive the strong duality theorem of linear programming. Zhang's algorithms, used in the proofs of Farkas' lemma and the strong duality theorem, are criss-cross type algorithms, but their pivot rules give more flexibility than the original criss-cross rule of T. Terlaky. The proof of the finiteness of the second algorithm is technically more complicated than that for the original criss-cross algorithm. Both of the algorithms defined in this paper have all the nice theoretical characteristics of the criss-cross method, i.e. they solve the linear programming problem in one phase; they can be initialized with any, not necessarily primal feasible, basis; and the bases generated during the solution of the problem are not necessarily primal or dual feasible.

Keywords: Farkas lemma, strong duality theorem, criss-cross type pivot rules.

* 1991 Mathematics Subject Classification: 90C05. The first author was supported in part by the Hungarian Ministry of Education under the grant FKFP No. 0152/1999 and by the Hungarian National Research Fund under the grant OTKA T. The first version of this paper was prepared during the visit of the first author to Würzburg University, Institute of Applied Mathematics and Statistics, sponsored by a scholarship from the Deutscher Akademischer Austauschdienst, DAAD A/98/07349.

1. INTRODUCTION

On the 150th anniversary of the birth of Gyula Farkas, in 1997, S. Zhang [14] published two new and finite pivot algorithms for solving linear programming problems. Zhang's algorithms are generalizations of Terlaky's criss-cross method [11, 12, 13].¹ Klafszky and Terlaky [7, 8] gave a constructive proof of the well-known lemma of Gy. Farkas [2, 3]. Using the first algorithm (FILO/LOFI rule) of Zhang [14, 15] and the so-called orthogonality theorem (see for instance [7, 8, 5, 6]) we give herein a constructive proof of Farkas' lemma in a similar way as Klafszky and Terlaky did in their papers [7, 8]. This kind of constructive proof can be extended to verify the well-known strong duality theorem. We use Zhang's second algorithm with the most-often-selected rule [14, 15]. Our proofs of the finiteness of Zhang's algorithms are simpler than the original one.

Let A ∈ R^{m×n}, c, x ∈ R^n, y, b ∈ R^m and I = {1, 2, ..., n}. Without loss of generality we may assume that the rank of A is m, thus A has full row rank. Let a^{(i)} ∈ R^n denote the i-th row vector of the matrix A, while a_j ∈ R^m denotes the j-th column vector of it.

In our paper the following form of the Farkas lemma is proved in Section 2.

Theorem 1.1. (Farkas' lemma) From the following two systems of linear inequalities exactly one is solvable:

    Ax = b,  x ≥ 0                (A1)

    y^T A ≤ 0,  y^T b = 1         (A2)

Our second goal in this note is to give a constructive proof for the strong duality theorem of the linear programming problem (Section 3). Now, let us consider the primal and dual linear programming problems in the following form:

    min c^T x  subject to  Ax = b,  x ≥ 0      (P)

    max y^T b  subject to  y^T A ≤ c^T         (D)

Furthermore, let P be the set of primal feasible solutions², namely

    P := {x ∈ R^n_+ : Ax = b},

and let the set of dual feasible solutions, D, be given as

    D := {y ∈ R^m : y^T A ≤ c^T}.

¹ Zhang [14, 15] proved the finiteness of one of his algorithms, following the steps of Terlaky's original proof [11, 12].
² Here R^n_+ is the positive orthant, thus R^n_+ = {x ∈ R^n : x ≥ 0}.
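As a quick numerical illustration of the alternative stated in Theorem 1.1, the following sketch checks both systems (A1) and (A2) on a small instance by solving two feasibility problems. SciPy's linprog is assumed to be available; the data A, b and all names are our own toy example, not taken from the paper.

```python
# Illustration of Theorem 1.1 (Farkas' lemma): exactly one of (A1), (A2) is solvable.
# SciPy assumed; A, b are a toy instance of ours.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  1.0]])
b = np.array([3.0, 2.0])

# (A1): Ax = b, x >= 0, posed as a feasibility problem (zero objective).
res1 = linprog(c=np.zeros(3), A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# (A2): y^T A <= 0, y^T b = 1, with y free.
res2 = linprog(c=np.zeros(2), A_ub=A.T, b_ub=np.zeros(3),
               A_eq=b.reshape(1, -1), b_eq=[1.0], bounds=[(None, None)] * 2)

print("(A1) solvable:", res1.success)   # True for this instance (x = (5, 0, 2) works)
print("(A2) solvable:", res2.success)   # False: the two systems exclude each other
```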

Theorem 1.2. (Strong duality theorem) From the following two statements exactly one holds:

(1) There exist x̂ ∈ P and ŷ ∈ D such that c^T x̂ = ŷ^T b.
(2) P = ∅ or D = ∅.

Let us introduce the (primal) pivot tableau for the (primal) linear programming problem as follows:

    [ A    | b ]
    [ c^T  | * ]

where all the data related to the problem are arranged. Under the assumption that matrix A has full row rank, there exists an m × m nonsingular submatrix A_B of A. Let us interchange the columns of A to obtain the following partition A = (A_B, A_N), where the submatrix A_N contains those columns of A which do not belong to A_B. Now the linear system Ax = b can be written as A_B x_B + A_N x_N = b, where we group the unknowns in the same way as the columns of matrix A, namely x = (x_B, x_N). Similarly, we can reorder the components of the vector c as c = (c_B, c_N).

Now we are ready to restate some well-known concepts of linear algebra and linear programming such as basis, basic solution, feasible basic solution, optimal solution and orthogonality.

Definition 1.3.
1. Any m × m nonsingular submatrix A_B of A is called a basis.
2. The vector defined by x_B = A_B^{-1} b, x_N = 0 is a basic solution of Ax = b for a given basis A_B.
3. Variables grouped in x_B are called basic variables, while those corresponding to x_N are called nonbasic variables.
4. If A_B^{-1} b ≥ 0 then we say that (x_B, x_N) is a (primal) feasible solution and A_B is a (primal) feasible basis.
5. The vector y ∈ R^m defined by y^T = c_B^T A_B^{-1} is called a dual basic solution.
6. If c_B^T A_B^{-1} A ≤ c^T holds then A_B is said to be a dual feasible basis.
7. The primal feasible solution x ∈ P is said to be an optimal solution of the primal problem if c^T x ≤ c^T x' holds for all x' ∈ P.
8. The dual feasible solution y ∈ D is said to be an optimal solution of the dual problem if y^T b ≥ ȳ^T b holds for all ȳ ∈ D.
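The following short sketch mirrors Definition 1.3: given a basis (a set of column indices), it computes the basic solution, the dual basic solution and the reduced costs, and tests primal and dual feasibility. NumPy is assumed; the data and the function name basis_data are our own illustration, not part of the paper.

```python
# Sketch of the quantities in Definition 1.3; NumPy assumed, toy data ours.
import numpy as np

def basis_data(A, b, c, B):
    """Return (x_B, y, c_bar) for the basis given by the column index list B."""
    A_B = A[:, B]                      # m x m nonsingular submatrix (a basis)
    x_B = np.linalg.solve(A_B, b)      # basic solution: x_B = A_B^{-1} b, x_N = 0
    y = np.linalg.solve(A_B.T, c[B])   # dual basic solution: y^T = c_B^T A_B^{-1}
    c_bar = c - A.T @ y                # reduced costs: c^T - c_B^T A_B^{-1} A
    return x_B, y, c_bar

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

x_B, y, c_bar = basis_data(A, b, c, B=[2, 3])
print("primal feasible basis:", bool(np.all(x_B >= 0)))        # A_B^{-1} b >= 0
print("dual feasible basis:  ", bool(np.all(c_bar >= -1e-12))) # c_B^T A_B^{-1} A <= c^T
```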

The (primal) pivot tableau corresponding to the basis A_B for the LP problem is the following³:

    [ A_B^{-1} A               | A_B^{-1} b        ]
    [ c^T − c_B^T A_B^{-1} A   | −c_B^T A_B^{-1} b ]

and let us introduce the following notations: T = A_B^{-1} A, b̄ = A_B^{-1} b, c̄^T = c^T − c_B^T A_B^{-1} A, and ζ = c_B^T A_B^{-1} b. The set of basic indices corresponding to the basis A_B is denoted by B, while the set of nonbasic indices is denoted by N. Trivially, I = B ∪ N.

We need the concept of orthogonality among vectors.

Definition 1.4. Let a, b ∈ R^n. Then the vectors a and b are said to be orthogonal if a^T b = 0.

Using the pivot tableau we can introduce the following n-dimensional (column) vectors:

    t^{(i)} = (t^{(i)}_k)_{k=1}^{n},   where t^{(i)}_k = t_{ik} if k ∈ N;  = 1 if k = i;  = 0 if k ∈ B, k ≠ i,

and

    t_{(j)} = (t_{(j)k})_{k=1}^{n},   where t_{(j)k} = t_{kj} if k ∈ B;  = −1 if k = j;  = 0 if k ∈ N, k ≠ j,

where t^{(i)}, i ∈ B, is equal to the i-th row of T, while t_{(j)}, j ∈ N, is formed from the j-th column of T extended by an (n − m)-dimensional negative unit vector.⁴

³ For the system (A1) the pivot tableau corresponding to the basis A_B is simpler, as you may see:  [ A_B^{-1} A | A_B^{-1} b ].
⁴ The vector t_{(j)} is a column of the dual simplex tableau as it is defined in [10].

The following useful observation is called the orthogonality theorem (see for instance [7, 8, 6]).

Proposition 1.5. Let a linear system Ax = b be given. Furthermore, let A_B and A_B' be bases of the linear system. Then

    (t^{(i)})^T t'_{(j)} = 0   for all i ∈ B and for all j ∈ N',

where B and N' are the index sets corresponding to the bases A_B and A_B', respectively.

Theorem 1.1 is proved in Section 2. First we define an algorithm to solve the system (A1) and prove its finiteness. The algorithm either solves (A1) or gives a certificate for the nonexistence of a solution. In this second case, using elementary computations, we can compute a solution of the system (A2). The solvability of the LP problem (P) is discussed in Section 3. A pivot algorithm is defined using the (most-often-selected variable) pivot rule of Zhang [14, 15]. The finiteness of this second algorithm is proved. The strong duality theorem, Theorem 1.2, is obtained as an easy consequence of the finiteness of the algorithm. Both of the presented algorithms have the general property of the criss-cross method [4], namely that the system (A1) is solved without introducing artificial variables and without using the so-called first phase objective function (or other techniques like the big-M method, [9]). Consequently, we do not need two phases to solve problem (P), because the algorithm can be initiated by any (not necessarily primal feasible) basis.

Our proofs are purely combinatorial, therefore the only information that is used is the sign of the entries of the pivot tableau. Thus, we use the Balinski-Tucker [1] notation which is very convenient for our purposes. Positive, nonnegative, negative and nonpositive numbers are denoted by +, ⊕, −, ⊖ signs, respectively. If an entry in the tableau is denoted by *, then there is no information about the sign of that entry.
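To make Proposition 1.5 concrete, the sketch below builds the vectors t^{(i)} and t_{(j)} of Definition 1.4 for two different bases of a small system and checks numerically that their inner product vanishes. NumPy is assumed; the helper names and the data are our own illustration.

```python
# Numerical check of Proposition 1.5 (orthogonality theorem); NumPy assumed, names ours.
import numpy as np

def tableau(A, B):
    """Short pivot tableau T = A_B^{-1} A for the basis index list B."""
    return np.linalg.solve(A[:, B], A)

def row_vector(T, B, i):
    """t^(i): t_{ik} at nonbasic positions, 1 at k = i, 0 at other basic positions."""
    n = T.shape[1]
    t = np.zeros(n)
    nonbasic = [k for k in range(n) if k not in B]
    t[nonbasic] = T[B.index(i), nonbasic]
    t[i] = 1.0
    return t

def column_vector(T, B, j):
    """t_(j): t_{kj} at basic positions, -1 at k = j, 0 at other nonbasic positions."""
    n = T.shape[1]
    t = np.zeros(n)
    t[B] = T[:, j]
    t[j] = -1.0
    return t

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
B1, B2 = [2, 3], [0, 1]                     # two different bases of Ax = b
T1, T2 = tableau(A, B1), tableau(A, B2)
# i = 2 is basic in B1, j = 2 is nonbasic in B2; the product must vanish.
print(row_vector(T1, B1, 2) @ column_vector(T2, B2, 2))
```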

2. PROOF OF THE FARKAS LEMMA

First, let us deal with the solution of the system (A1). We introduce the following mapping u_r : I → N_0; let u_0 = (0, 0, ..., 0) and

    u_r(i) = r, if the variable x_i moves in the r-th iteration;  u_r(i) = u_{r-1}(i) otherwise,

for r = 1, 2, ..., k. It is easy to show that u_r ≥ u_{r-1} and u_r ≠ u_{r-1}. The basic idea of the pivot rule is the following: from the infeasible variables choose the one to leave the current basis which entered most recently, and from those which are candidates to enter the basis choose the one which has left the basis most recently.

Algorithm 2.1. Let a basis A_B for the system (A1) be given with the corresponding pivot tableau. Let r = 1.

Step 1. Let J := {i ∈ B : x̄_i < 0}.
If J = ∅ then the system (A1) is solved, STOP;
else let J_max := {j ∈ J : u_{r-1}(j) ≥ u_{r-1}(i) for all i ∈ J}, choose an arbitrary index k ∈ J_max and go to Step 2.

Step 2. Let K := {j ∈ N : t_{kj} < 0}.
If K = ∅ then the system (A2) is solved, STOP;
else let K_max := {j ∈ K : u_{r-1}(j) ≥ u_{r-1}(i) for all i ∈ K}, choose an arbitrary index l ∈ K_max and go to Step 3.

Step 3. Now x_k leaves and x_l enters the current basis. Let us update the vector u as follows:

    u_r(i) = r, if i = k or i = l;  u_r(i) = u_{r-1}(i) otherwise.

Increase the value of r by 1, namely r := r + 1, and go to Step 1.

Let us extend the definition of the vectors t^{(i)} and t_{(j)} to the column b as well. From now on, we assume that the index b (which belongs to the column vector b) is always in the set N. Now we can apply the orthogonality theorem to the matrix [A, b], and then the vectors t^{(i)} and t_{(j)} become (n+1)-tuples.
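Below is a compact, executable sketch of Algorithm 2.1 with the LIFO-type rule described above. It is only an illustration under our own choices: NumPy is assumed, the function name and tolerance are ours, and the tableau is recomputed from the current basis in each iteration instead of being updated by pivoting.

```python
# Sketch of Algorithm 2.1 for the feasibility system Ax = b, x >= 0; NumPy assumed.
import numpy as np

def algorithm_2_1(A, b, B):
    m, n = A.shape
    B = list(B)                       # starting basis (any nonsingular one)
    u = np.zeros(n)                   # u(i): last iteration in which x_i moved
    r = 1
    while True:
        T = np.linalg.solve(A[:, B], A)
        b_bar = np.linalg.solve(A[:, B], b)
        J = [i for i in B if b_bar[B.index(i)] < -1e-12]
        if not J:                     # Step 1: feasible basic solution found
            x = np.zeros(n); x[B] = b_bar
            return "feasible", x
        k = max(J, key=lambda i: u[i])        # leaves: entered most recently
        row = B.index(k)
        K = [j for j in range(n) if j not in B and T[row, j] < -1e-12]
        if not K:                     # Step 2: row k certifies that (A1) has no solution
            return "infeasible", k, B
        l = max(K, key=lambda j: u[j])        # enters: left most recently
        B[B.index(k)] = l             # Step 3: pivot x_k out, x_l in
        u[k] = u[l] = r
        r += 1

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
print(algorithm_2_1(A, b, B=[0, 1]))
```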

Lemma 2.2. Algorithm 2.1 is finite.

Proof: By contradiction. Let us assume that the algorithm is not finite. This means that there exists (at least) one example on which the algorithm is not finite, thus it generates an infinite sequence of pivot tableaus, i.e. an infinite sequence of bases. But the number of all possible bases for a given problem (with an m × n matrix A) is finite (at most (n choose m)), therefore some of the bases should occur infinitely many times.⁵ From those examples for which cycling occurs choose one with the smallest possible size. For such problems all the variables have to change their basis status during a cycle.

Let us consider the sequence of pivot tableaus generated by the algorithm and let us denote by T₀ that tableau which satisfies the following criteria: there is a variable x_q which changes its basic status at T₀ for the first time, and after this pivot tableau all the variables have changed their basic status at least once. We have two choices: the variable x_q either enters or leaves the basis at the pivot tableau T₀. It follows from our counter-assumption that there should be another basic tableau, say T₁, such that the variable x_q will change its basic status for the first time since T₀. Let us analyze the (sign) structure of the pivot tableaus T₀ and T₁.

Case 1. Let us deal with the case when the variable x_q leaves and the variable x_l enters the basis at the pivot tableau T₀. Then the sign structures of the pivot tableaus T₀ and T₁ are as follows.

Figure 1: Sign structures of the pivot tableaus T₀ and T₁ in Case 1.

Using the orthogonality theorem (Proposition 1.5) the vectors t^{(l)} and t_{(b)} are orthogonal. On the other hand, based on the pivot rule of Algorithm 2.1, if t_{li} < 0 for some i ∈ N \ {q, b}, then u_{r-1}(i) < u_{r-1}(q), which means that the variable x_i did not change its basic status since the pivot tableau T₀; therefore i is nonbasic there as well, so t_{(b)i} = 0. From this observation we may get that

    0 = (t^{(l)})^T t_{(b)} = t_{lq} b̄_q + ... > 0,

because t_{(b)b} = −1, b̄_q < 0, t_{lq} < 0 and the remaining terms are nonnegative. A contradiction is obtained.

⁵ This phenomenon is known in the literature as cycling, see for instance [9, 10, 5, 6].

Case 2. Let us assume that the variable x_q enters the basis and the variable x_l leaves it at the pivot tableau T₀.

Figure 2: Sign structures of the pivot tableaus T₀ and T₁ in Case 2.

Taking into consideration the sign structure of these tableaus, the pivot rule of Algorithm 2.1 and the orthogonality theorem (Proposition 1.5), as in the previous case, we can show that both T₀ and T₁ cannot occur in the sequence of pivot tableaus generated by Algorithm 2.1. Therefore Algorithm 2.1 is not cycling.

Now we are ready to prove the Farkas lemma.

Proof of Theorem 1.1: Let us assume that both (A1) and (A2) have a solution; then from Ax = b it follows that y^T A x = y^T b. Taking into consideration that y^T A ≤ 0 and x ≥ 0, it follows that 0 ≥ y^T A x = y^T b = 1, because y^T b = 1 holds. This is a contradiction, thus both systems cannot have a solution.

We need to show that one of the systems is solvable. Let us apply Algorithm 2.1 to the system (A1). According to Lemma 2.2, Algorithm 2.1 is finite, therefore it either stops in Step 1 with J = ∅, which means that a (basic) feasible solution of the system (A1) is found, or it reports that K = ∅ (Step 2), thus we obtain a pivot tableau in which there is a row k with t^{(k)} ≥ 0 and b̄_k < 0.⁶ Now it is obvious that the following system of inequalities

    (t^{(k)})^T x = b̄_k,  x ≥ 0      (1)

has no solution. Therefore the system (A1) cannot have a solution. Using the corresponding basis we can compute a solution⁷ of the system (A2) as

    y = (1 / b̄_k) ((e_k)^T A_B^{-1})^T,

where e_k ∈ R^m is the k-th unit vector.

⁶ This is known from the literature as the primal infeasibility criterion, see for instance [9, 10, 5, 6].
⁷ Using the orthogonality theorem (Proposition 1.5) it is easy to check that if the current pivot tableau is denoted by T and the corresponding basis by A_B, then t_{kj} = (z_k)^T a_j and b̄_k = (z_k)^T b, where z_k = ((e_k)^T A_B^{-1})^T and e_k ∈ R^m is the k-th unit vector. See for instance [7, 8, 5, 6].
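The closing formula of the proof can be checked numerically: the sketch below takes a small infeasible instance of (A1), picks a basis and a row k with t^{(k)} ≥ 0 and b̄_k < 0, and verifies that y^T = (1/b̄_k) e_k^T A_B^{-1} solves (A2). NumPy is assumed; the data and the choice of basis are our own toy example.

```python
# Check of the dual certificate y^T = (1 / b_bar_k) e_k^T A_B^{-1} from the proof
# of Theorem 1.1; NumPy assumed, toy data ours.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([-1.0, -3.0])            # Ax = b, x >= 0 is clearly infeasible here

B = [0, 1]                            # a basis exhibiting the infeasibility criterion
A_B = A[:, B]
T = np.linalg.solve(A_B, A)           # short tableau A_B^{-1} A
b_bar = np.linalg.solve(A_B, b)       # A_B^{-1} b
k = 1                                 # row with t^(k) >= 0 and b_bar_k < 0
assert np.all(T[k, :] >= -1e-12) and b_bar[k] < 0

y = np.linalg.solve(A_B.T, np.eye(2)[k]) / b_bar[k]   # y = (A_B^{-T} e_k) / b_bar_k
print("y^T A <= 0:", bool(np.all(y @ A <= 1e-12)))
print("y^T b == 1:", bool(np.isclose(y @ b, 1.0)))
```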

3. PROOF OF THE STRONG DUALITY THEOREM

Let us consider the primal linear programming problem (P). Let us introduce the following mapping v_r : I → N_0; let v_0 = (0, 0, ..., 0), and furthermore

    v_r(i) = v_{r-1}(i) + 1, if the variable x_i moves in the r-th iteration;  v_r(i) = v_{r-1}(i) otherwise.

The vector v_r counts how many times the variables have changed their basic status until the end of the r-th iteration.

Algorithm 3.1. Let a basis A_B of the primal linear programming problem (P) be given with the corresponding pivot tableau, and let r = 1.

Step 1. Let J := {i ∈ I : x̄_i < 0 or c̄_i < 0}.
If J = ∅ then the current basic solution is optimal, STOP;
else let J_max := {j ∈ J : v_{r-1}(j) ≥ v_{r-1}(i), i ∈ J} and choose an arbitrary index k ∈ J_max.

Step 2. (a) Primal iteration: k ∈ N. Define the set of indices K_P := {i ∈ B : t_{ik} > 0}.
If K_P = ∅ then D = ∅, there is no dual feasible solution, STOP;
else let K_{P,max} := {j ∈ K_P : v_{r-1}(j) ≥ v_{r-1}(i), i ∈ K_P} and choose an arbitrary index l ∈ K_{P,max}.
Now x_l leaves the basis, while x_k enters it, and

    v_r(i) = v_{r-1}(i) + 1, if i = k or i = l;  v_r(i) = v_{r-1}(i) otherwise.

Increase the value of r by 1 and go to Step 1.

(b) Dual iteration: k ∈ B. Define the set of indices K_D := {i ∈ N : t_{ki} < 0}.
If K_D = ∅ then P = ∅, there is no primal feasible solution, STOP;
else let K_{D,max} := {j ∈ K_D : v_{r-1}(j) ≥ v_{r-1}(i), i ∈ K_D} and choose an arbitrary index l ∈ K_{D,max}.
Now x_k leaves the basis, while x_l enters it, and

    v_r(i) = v_{r-1}(i) + 1, if i = k or i = l;  v_r(i) = v_{r-1}(i) otherwise.

Increase the value of r by 1 and go to Step 1.

The most-often-selected infeasible variable is chosen by the pivot rule of the algorithm in Step 1. Using exactly the same rule in Step 2, from the candidate variables the most-often-selected one is chosen again. If we have more than one candidate either in Step 1 (elements of J_max) or in Step 2 (in case (a) the elements of K_{P,max} and in case (b) the elements of K_{D,max}) then we may choose from them arbitrarily.
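The following is a compact, executable sketch of Algorithm 3.1 under the most-often-selected rule. NumPy is assumed; the function name, tolerance and toy data are ours, and the tableau is rebuilt from the current basis in every round rather than updated by pivoting.

```python
# Sketch of Algorithm 3.1 for min c^T x, Ax = b, x >= 0; NumPy assumed, names ours.
import numpy as np

def algorithm_3_1(A, b, c, B):
    m, n = A.shape
    B = list(B)
    v = np.zeros(n)                          # v(i): how often x_i changed status
    eps = 1e-12
    while True:
        A_B = A[:, B]
        T = np.linalg.solve(A_B, A)
        b_bar = np.linalg.solve(A_B, b)
        y = np.linalg.solve(A_B.T, c[B])
        c_bar = c - A.T @ y
        J = [i for i in B if b_bar[B.index(i)] < -eps] + \
            [i for i in range(n) if i not in B and c_bar[i] < -eps]
        if not J:                            # Step 1: optimal basis
            x = np.zeros(n); x[B] = b_bar
            return "optimal", x, y
        k = max(J, key=lambda i: v[i])       # most-often-selected infeasible variable
        if k not in B:                       # Step 2(a): primal iteration, x_k enters
            K = [i for i in B if T[B.index(i), k] > eps]
            if not K:
                return "dual infeasible", None, None     # D = empty
            l = max(K, key=lambda i: v[i])
            B[B.index(l)] = k
        else:                                # Step 2(b): dual iteration, x_k leaves
            K = [j for j in range(n) if j not in B and T[B.index(k), j] < -eps]
            if not K:
                return "primal infeasible", None, None   # P = empty
            l = max(K, key=lambda j: v[j])
            B[B.index(k)] = l
        v[k] += 1; v[l] += 1

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
print(algorithm_3_1(A, b, c, B=[2, 3]))     # starts from the slack basis
```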

The finiteness of the algorithm will be proved using the orthogonality theorem (Proposition 1.5). In the case of linear programming the vectors t^{(i)} ∈ R^{n+1} and t_{(j)} ∈ R^{n+1}; furthermore, t^{(i)} belongs to the row space, while t_{(j)} belongs to the null space, of the following matrix:

    [ A    b ]
    [ c^T  0 ]

From now on we assume that the index c (which belongs to the row vector c^T) is always in the set B.

Lemma 3.2. The Algorithm 3.1 is finite.

Proof: The proof of this lemma is very similar to the proof of Lemma 2.2. Let us assume to the contrary that the Algorithm 3.1 is not finite. But the number of possible bases is finite, therefore at least one basis should be repeated infinitely many times. Thus cycling must occur. From those examples where cycling occurs choose one with the smallest size, which means that all the variables enter and leave the basis during a cycle.

Let x_q be the variable which moves last and A_B the first basis when x_q changes its basic status. (Without loss of generality we may assume that x_q enters the basis at A_B.) Let us denote by A_B' that basis at which x_q moves for the second time. We assume that after the basis A_B all the variables have changed their basic status at least once. It may happen that another variable x_w, together with x_q, changes its basic status at A_B for the first time. We now have the following cases:

Figure 3: At a primal iteration x_q enters the basis and (a) x_q is the only candidate to change its basic status in Step 1; (b) both x_q and x_w change their basic status for the first time.

If b̄_w ≥ 0 then (3a) and (3b) are equivalent tableaus.

Figure 4: At a dual iteration x_q enters the basis: (a) x_q has been selected uniquely; (b) both x_q and x_w change their basic status for the first time.

In (4a) and (4b) the sign structure of the row of x_w is the same. A_B' is the basis at which the variable x_q leaves the basis for the first time. We have two cases: x_q leaves the basis either in Step 2 (a), a primal iteration, or in Step 2 (b), a dual iteration.

Figure 5: The variable x_q leaves the basis A_B' at a primal (5a) or at a dual (5b) iteration.

If t_{is} > 0 for some i ∈ B' \ {q} (see Fig. 5, part (5a)) then, according to the pivot rule of Algorithm 3.1, v_{r-1}(i) = v_{r-1}(q) = 1, thus i ∈ B. Similarly, if t_{ci} < 0 (t_{jb} < 0), where i ∈ N' (j ∈ B' \ {q}), then i ∈ N (j ∈ B) holds, because v_{r-1}(i) = v_{r-1}(q) = 1 (v_{r-1}(j) = v_{r-1}(q) = 1), using the case (5b).

Now we have the following four possible cases, namely: the variable x_q enters the basis A_B at
a) a primal iteration and leaves the basis A_B' at a primal iteration;
b) a primal iteration and leaves the basis A_B' at a dual iteration;
c) a dual iteration and leaves the basis A_B' at a primal iteration;
d) a dual iteration and leaves the basis A_B' at a dual iteration.

Let us deal with the case (a), where we have the sign structures given at (3a) (or (3b)) and (5a). Because the vector t^{(c)} is the same for both (3a) and (3b), we do not need to separate these two subcases. From the pivot tableau shown in (5a) we use the vector t_{(s)}, where x_s denotes the variable entering the basis at the primal iteration shown in (5a). We know that (t^{(c)})^T t_{(s)} = 0, according to the orthogonality theorem (Proposition 1.5). Taking into consideration the signs of the entries in t^{(c)} and t_{(s)}, especially that if t_{is} > 0, i ∈ B' \ {q}, then i ∈ B and thus t_{ci} = 0, we have

    0 = (t^{(c)})^T t_{(s)} = t_{cq} t_{qs} + t_{cc} t_{cs} + Σ_i t_{ci} t_{is} < 0.

The last (strict) inequality holds because t_{cq} < 0, t_{cs} < 0, t_{cc} = 1 and t_{qs} > 0. Thus we have obtained a contradiction, namely (3a) (or (3b)) cannot occur in the sequence of pivot tableaus produced by the algorithm together with the tableau shown in (5a).

Case (b). Let us consider the vectors t^{(c)} and t_{(b)} from the pivot tableau (3a), while the vectors t'^{(c)} and t'_{(b)} are from the pivot tableau (5b). Applying the orthogonality theorem (Proposition 1.5) twice and summing the terms we get (t^{(c)})^T t'_{(b)} + (t'^{(c)})^T t_{(b)} = 0. Using the remark given after Figure 5, we can compute the previous expression in more detail, thus

    0 = (t^{(c)})^T t'_{(b)} + (t'^{(c)})^T t_{(b)} = ζ − ζ' + t_{cq} b̄'_q + ζ' − ζ + ... = t_{cq} b̄'_q + ... > 0,    (2)

because t_{cc} = t'_{cc} = 1, t_{(b)b} = t'_{(b)b} = −1, t'_{cq} = t_{(b)q} = 0, t_{cb} = −ζ, t'_{cb} = −ζ', t_{cq} < 0 and b̄'_q < 0, while the remaining terms are nonnegative. Therefore both pivot tableaus (3a) and (5b) cannot occur.

Now we need to pay more attention to the case when the pivot tableaus (3b) and (5b) are considered. In expression (2) the term t_{cw} b̄_w appears. Unfortunately, we have no information about the sign of the element b̄_w (see Fig. (3b)): the element b̄_w can be both negative and nonnegative. Furthermore, it may happen that the element t_{cw} is negative or nonnegative. Therefore we have four subcases depending on the signs of b̄_w and t_{cw}.

If t_{cw} ≥ 0 and b̄_w ≥ 0 then the proof goes along the same lines as for the tableaus (3a) and (5b).

If t_{cw} < 0 then let us consider the vectors t_{(q)} and t'^{(c)}. Using the orthogonality theorem (Proposition 1.5) we have (t'^{(c)})^T t_{(q)} = 0. Then

    0 = (t'^{(c)})^T t_{(q)} = t_{cw} t_{wq} + t_{cq} + ... < 0,

where t'_{cq} = 0 because q ∈ B', furthermore t_{cw} < 0, t_{wq} > 0 and t_{cq} < 0, while the remaining terms are nonpositive. Thus we have a contradiction in this subcase.

Let us now consider the subcase when b̄_w < 0 and t_{cw} ≥ 0. In this situation cycling may occur in two different ways: (i) the variables x_q and x_w change their basic status at the basis A_B' (x_q leaves the basis and x_w enters it), or (ii) the variable x_w enters a basis coming after A_B'.

Let us analyze first the case (i). Now the vectors t^{(q)} and t_{(b)} are considered. The sign structure of t^{(q)} has the following properties: t_{qb} < 0, t_{qw} < 0, and if t_{qi} < 0 and i ≠ w then i ∈ N, namely t_{(b)i} = 0. From the orthogonality theorem (Proposition 1.5) we know that (t^{(q)})^T t_{(b)} = 0, and using the previous information we may compute it in more detail as follows:

    0 = (t^{(q)})^T t_{(b)} = t_{qw} b̄_w + t_{qb} t_{(b)b} + t_{qq} t_{(b)q} + ... = t_{qw} b̄_w − t_{qb} + ... > 0,

because t_{qw} < 0, b̄_w < 0, t_{qb} < 0, t_{qq} = 1, t_{(b)b} = −1 and t_{(b)q} = 0, while the remaining terms are nonnegative. Thus we have obtained a contradiction once more.

Now we need to analyze the case (ii); thus we take into consideration the first basis A_B'' after A_B' such that x_w enters the basis and x_q is a nonbasic variable at A_B''.⁸ The sign structure of A_B'' is the following: t''_{cw} < 0 and, because q ∈ N'', according to the pivot rule we have t''_{cq} ≥ 0. Furthermore, if t''_{ci} < 0 and i ≠ w, then i ∈ N, therefore t_{(q)i} = 0. Hence

    0 = (t''^{(c)})^T t_{(q)} = t''_{cw} t_{wq} + t_{cq} + ... < 0,

since t''_{cc} = 1 and t_{cq} < 0. Thus a contradiction is obtained.

After this complicated case let us analyze the cases (c) and (d), which are similar to case (a).

⁸ The existence of such a basis A_B'' is necessary to get a cycle, because at A_B the variable x_w was in the basis, while x_q was out of the basis.

In case (c), we consider the vectors t^{(w)} and t_{(s)}. Using the orthogonality theorem (Proposition 1.5) and the sign structure of the vectors we have

    0 = (t^{(w)})^T t_{(s)} = t_{wq} t_{qs} + t_{wb} t_{(s)b} + t_{wc} t_{cs} + ... = t_{wq} t_{qs} + ... < 0,

where t_{(s)b} = t_{wc} = 0, t_{wq} < 0 and t_{qs} > 0, while the remaining terms are nonpositive. Thus a contradiction is obtained.

In the last case, (d), we consider the vectors t^{(w)} and t_{(b)} instead of the vectors t^{(w)} and t_{(s)}, and from the orthogonality of the vectors (proved in Proposition 1.5) we get

    0 = (t^{(w)})^T t_{(b)} = t_{wq} b̄_q + t_{wb} t_{(b)b} + ... = t_{wq} b̄_q − b̄_w + ... > 0,

since t_{wq} < 0, b̄_q < 0 and b̄_w < 0. This contradiction shows that case (d) cannot occur as well.

This completes the proof, because none of the possible cases can occur.

Now we are ready to prove the strong duality theorem.

Proof of the strong duality theorem (Theorem 1.2): The two statements of the theorem exclude each other. Let us apply Algorithm 3.1 to the linear programming problem (P). The algorithm terminates with one of the following cases:

1. Variable x_k leaves the current basis and we cannot choose any variable to enter the basis (Algorithm 3.1, Step 2 (b)). Then we have x̄_k < 0 and t_{ki} ≥ 0 for all i ∈ N, thus P = ∅.
2. Variable x_k enters the current basis and we cannot choose any variable to leave the basis (Algorithm 3.1, Step 2 (a)). Then we have c̄_k < 0 and t_{ik} ≤ 0 for all i ∈ B, thus D = ∅.
3. According to the pivot rule of Algorithm 3.1 we cannot choose a variable either to leave or to enter the current basis, thus x̄_i ≥ 0 and c̄_i ≥ 0 for all i ∈ I (Step 1, J = ∅). Therefore the current basis is optimal, so an optimal solution is found to the primal problem.

It is obvious that if 1 or 2 occurs then statement (2) of the strong duality theorem is obtained. For this, we only need to show that statements 1 and 2 are true. Statement 1 is proved during the verification of the Farkas lemma (see (1)). Statement 2⁹ can be verified as follows. Let us assume to the contrary that there exists a dual feasible basis A_B'; but then the vector t'^{(c)} ≥ 0 has to be orthogonal to the vector t_{(k)}, and

    0 = (t'^{(c)})^T t_{(k)} = t_{cc} t_{ck} + Σ_{i ∈ B} t'_{ci} t_{ik} − t'_{ck} < 0,

because t_{(k)b} = 0, t_{cc} = 1 and t_{ck} < 0, while t'_{ci} t_{ik} ≤ 0 for all i ∈ B and t'_{ck} ≥ 0; this gives a contradiction.

Statement 3 is true because, if we denote the basic feasible solution produced by the algorithm by x̂ = (A_B^{-1} b, 0), where A_B^{-1} b ≥ 0, and ŷ^T = c_B^T A_B^{-1}, then c^T x̂ = c_B^T A_B^{-1} b = ŷ^T b. Now, applying the weak duality theorem of linear programming [9, 10, 5, 6], we may show the optimality of x̂ and ŷ; thus statement (1) of the strong duality theorem is obtained if Algorithm 3.1 stops in Step 1. This completes the proof of the strong duality theorem.

⁹ Known in the literature as the dual infeasibility criterion, see for instance [9, 10, 5, 6].
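Termination case 3 can be illustrated numerically: at an optimal basis the pair x̂ = (A_B^{-1} b, 0), ŷ^T = c_B^T A_B^{-1} is primal and dual feasible with equal objective values. NumPy is assumed; the instance and the basis below are our own toy example.

```python
# Sketch of termination case 3 in the proof of Theorem 1.2; NumPy assumed, toy data ours.
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
B = [0, 1]                                   # an optimal basis for this instance

A_B = A[:, B]
x_hat = np.zeros(4); x_hat[B] = np.linalg.solve(A_B, b)   # x_hat = (A_B^{-1} b, 0)
y_hat = np.linalg.solve(A_B.T, c[B])                      # y_hat^T = c_B^T A_B^{-1}

print("primal feasible:", bool(np.all(x_hat >= 0) and np.allclose(A @ x_hat, b)))
print("dual feasible:  ", bool(np.all(y_hat @ A <= c + 1e-12)))
print("equal objectives:", bool(np.isclose(c @ x_hat, y_hat @ b)))
```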

Acknowledgement: The authors are grateful to Filiz Erbilen for reading an earlier version of this paper and making useful comments that improved the presentation of this paper.

REFERENCES

[1] Balinski, M.L., and Tucker, A.W., "Duality theory of linear programs: A constructive approach with applications", SIAM Review, 11 (1969).
[2] Farkas, Gy., "Applications of Fourier's mathematical principle", Mathematikai és Természettudományi Értesítő, 12 (1894) (in Hungarian).
[3] Farkas, J., "Theorie der einfachen Ungleichungen", Journal für die reine und angewandte Mathematik, 124 (1901).
[4] Fukuda, K., and Terlaky, T., "Criss-cross methods: A fresh view on pivot algorithms", Mathematical Programming, 79 (1997).
[5] Illés, T., Pivot Algorithms of Linear Programming, Eötvös Loránd University of Sciences, Faculty of Natural Sciences, Budapest, Lecture Notes, 1998 (in Hungarian).
[6] Illés, T., Linear Programming, Eastern Mediterranean University, Faculty of Arts and Sciences, Famagusta, Lecture Notes.
[7] Klafszky, E., and Terlaky, T., "Application of pivot technique in proving some basic theorems of linear algebra", Alkalmazott Matematikai Lapok, 14 (1989) (in Hungarian).
[8] Klafszky, E., and Terlaky, T., "The role of pivoting in proving some fundamental theorems of linear algebra", Linear Algebra and its Applications, 151 (1991).
[9] Murty, K.G., Linear Programming, John Wiley & Sons.
[10] Prékopa, A., Linear Programming, János Bolyai Mathematical Society, Budapest, 1968 (in Hungarian).
[11] Terlaky, T., "A new, finite criss-cross method for solving linear programming problems", Alkalmazott Matematikai Lapok, 10 (1983) (in Hungarian).
[12] Terlaky, T., "A convergent criss-cross method", Math. Oper. und Stat. Ser. Optimization, 16 (1985).
[13] Terlaky, T., "A finite criss-cross method and its applications", MTA SZTAKI Tanulmányok, Budapest, 179 (1986) (in Hungarian).
[14] Zhang, S., "A new variant of criss-cross pivot algorithms for linear programming", Technical Report 9707/A, Econometric Institute, Erasmus University Rotterdam.
[15] Zhang, S., "A new variant of criss-cross pivot algorithms for linear programming", European Journal of Operational Research, 116 (1999).
