

Constrained Shortest Link-Disjoint Paths Selection: A Network Programming Based Approach

Ying Xiao, Student Member, IEEE, Krishnaiyan Thulasiraman, Fellow, IEEE, and Guoliang Xue, Senior Member, IEEE

Abstract: Given a graph G with each link in the graph associated with two positive weights, cost and delay, we consider the problem of selecting a set of k link-disjoint paths from a node s to another node t such that the total cost of these paths is minimum and that the total delay of these paths is not greater than a specified bound. This problem, to be called the CSDP(k) problem, can be formulated as an integer linear programming (ILP) problem. Relaxing the integrality constraints results in an upper bounded linear programming problem. We first show that the integer relaxations of the CSDP(k) problem and a generalized version of this problem, to be called the GCSDP(k) problem (in which each path is required to satisfy a specified bound on its delay), both have the same optimal objective value. In view of this we focus our work on the relaxed form of the CSDP(k) problem called RELAX-CSDP(k). We study RELAX-CSDP(k) from the primal perspective using the revised simplex method of linear programming. We discuss different issues such as formulas to identify entering and leaving variables, an anti-cycling strategy, computational time complexity, etc., related to an efficient implementation of our approach. We show how to extract from an optimal solution to RELAX-CSDP(k) a set of k link-disjoint s-t paths which is an approximate solution to the original CSDP(k) problem. We also derive bounds on the quality of this solution with respect to the optimum. We present simulation results that demonstrate that our algorithm is faster than currently available approaches. Our simulation results also indicate that in most cases the individual delays of the paths produced starting from RELAX-CSDP(k) do not deviate in a significant way from the individual path delay requirements of the GCSDP(k) problem.

Index Terms: Constrained shortest paths, link-disjoint paths, QoS routing, graph theory, combinatorial optimization, linear programming, network optimization

I. INTRODUCTION

In this paper we study a discrete optimization problem defined on graphs or networks (we use the terms graph and network interchangeably). Specifically, we are interested in selecting a set of paths satisfying certain constraints. This problem is fundamental and arises in several applications. In this context we encounter two problems. One of them, called the Constrained Shortest Path (CSP) problem, requires the determination of a minimum cost path from a source node s to a destination node t such that the delay of the path is within a specified bound. The other problem, denoted as CSDP(k), is to select a set of k link-disjoint paths from s to t such that the total cost of these paths is minimum and that the total delay of these paths is not greater than a specified bound. Both problems are NP-hard [1], [2]. This has led researchers to propose heuristics and approximation algorithms for these problems. For a detailed account of the different algorithms for the CSP problem, [2]-[13] may be consulted.

The work of K. Thulasiraman has been supported by an NSF ITR (ANI) grant. The work of G. Xue has been supported by an NSF ITR (ANI) grant. Ying Xiao and Krishnaiyan Thulasiraman are with the University of Oklahoma, Norman, OK 73019, USA (e-mail: ying_xiao@ou.edu, thulsi@ou.edu). Guoliang Xue is with Arizona State University, Tempe, AZ 85287, USA (e-mail: xue@asu.edu).
Of special interest to us in the context of our work in this paper are [8] and [10]-[13]. References [8] and [10]-[12] present the LARAC algorithm that is based on the Lagrangian dual of the CSP problem, as well as some generalizations. Reference [11] presents a generalization of the LARAC algorithm that is applicable to general combinatorial optimization problems involving two additive metrics. One such problem is the minimum cost spanning tree problem with the restriction that the total delay be within a specified limit. Several such constrained discrete optimization problems arise in the VLSI physical design area. Reference [13] shows that these algorithms are strongly polynomial. In perhaps the most recent work [14] on the CSP problem, we have studied this problem from a primal perspective. The CSDP(k) problem arises in the context of provisioning paths that could be used to provide resilience to failures in one or more of these paths. Orda et al. [15] have studied the CSDP(2) problem and have provided several approximation algorithms. A special case of the CSDP(k) problem which does not have the delay requirement has been studied in [16]. The algorithms in [10] and [16] can be integrated to provide an approximate solution to the CSDP(k) problem. We call this the G-LARAC(k) algorithm.

The rest of the paper is organized as follows. In Section II we define the CSDP(k) problem and a generalized version of this problem called the GCSDP(k) problem. The GCSDP(k) problem requires that the delay of each path in the set of link-disjoint paths be bounded by a specified value. This is in contrast to the CSDP(k) problem, wherein the delay constraint is with respect to the total delay of the paths. However, even finding two delay-constrained link-disjoint paths is NP-hard and is not approximable within a factor of 2 - epsilon for any epsilon > 0 [17]. We first show that the optimal objective values of the LP relaxations of these two problems are equal. Hence we focus our study on the relaxed version of the CSDP(k) problem, namely, the RELAX-CSDP(k) problem. In Section III we review the G-LARAC(k) algorithm, which is a dual based approach to solving RELAX-CSDP(k). In Section IV we introduce a transformation on the CSDP(k) problem. The

transformed problem will be called the TCSDP(k) problem. We show that the CSDP(k) problem and the TCSDP(k) problem are equivalent. As we show later in the paper, the transformed problem has several properties that enable us to achieve an efficient implementation of our approach. In the remainder of the paper we study the LP relaxation of the TCSDP(k) problem, namely, RELAX-TCSDP(k), using the revised simplex method of linear programming. In Section V, several properties of basic solutions of RELAX-TCSDP(k) are established. We also show how to extract an approximate solution to the CSDP(k) problem starting from an optimal solution to RELAX-TCSDP(k). In Sections VI-VII, the revised simplex method and several issues relating to an efficient implementation are discussed. We also develop an anti-cycling strategy and establish the pseudo-polynomial time complexity of the revised simplex method when applied on RELAX-TCSDP(k). Simulation results comparing our approach with the G-LARAC(k) algorithm and the commercially available CPLEX package are presented in Section VIII. These results demonstrate that our algorithm is faster than currently available approaches. They also indicate that in most cases the individual delays of the paths produced starting from RELAX-CSDP(k) do not deviate in a significant way from the individual delay requirements of the GCSDP(k) problem, thereby demonstrating that there is not much loss of generality in focusing on RELAX-CSDP(k) rather than on the relaxed version of the more complex GCSDP(k) problem. We conclude in Section IX with a summary of our work and pointers to certain directions for future research.

II. CONSTRAINED SHORTEST LINK-DISJOINT PATHS SELECTION PROBLEMS: FORMULATIONS, RELAXATIONS, AND THEIR EQUIVALENCE

In this section we first define two classes of link-disjoint paths selection problems. One is a special case of the other. They both admit integer linear programming (ILP) formulations. They are computationally intractable because of the integrality constraints. For networks involving small numbers of nodes and links, these problems can be solved using any general purpose ILP package. For larger networks, faster algorithms are desired. So, we are interested in solving these problems after relaxing the integrality requirement and exploiting the special network structure of these problems for efficient algorithms. The relaxed versions of these problems are upper bounded linear programming problems. The main result in this section is that the relaxed versions of both problems are equivalent in the sense that they have the same optimal objective value.

We begin with some basic definitions. The network is modeled as a directed graph G(V, E), where V and E are the sets of nodes and links, respectively. Each link (u, v) in E is associated with a positive integer cost c_uv and a positive integer delay d_uv. A path is a sequence of distinct nodes u_1, u_2, ..., u_l such that either (u_i, u_{i+1}) in E or (u_{i+1}, u_i) in E for all 1 <= i <= l - 1 and u_i != u_j if i != j. A link on a path from u to v is a forward (backward) link if its orientation agrees (disagrees) with the direction of the traversal of the path from u to v. The sets of forward and backward links on p will be denoted by p^+ and p^-, respectively. For any directed path p (or cycle with a given orientation) define the cost c(p) and delay d(p) of p as

$$c(p) = \sum_{(u,v)\in p^+} c_{uv} - \sum_{(u,v)\in p^-} c_{uv}, \qquad d(p) = \sum_{(u,v)\in p^+} d_{uv} - \sum_{(u,v)\in p^-} d_{uv}.$$

Given any two nodes s and t, an s-t path is a directed s-t path if all the links on the path are forward links.
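As a small illustration of the path cost and delay definitions above, the following sketch computes c(p) and d(p) for a path given as a list of traversal steps; the representation and names are mine, not the paper's.

```python
# Sketch of the path cost/delay definitions: a (not necessarily directed) path
# is a list of steps (link, is_forward); its cost and delay are the signed sums
# over forward and backward links. Names and data are illustrative.

def path_cost_delay(steps, cost, delay):
    """steps: list of ((u, v), is_forward); cost, delay: dicts over links."""
    c = sum(cost[e] if fwd else -cost[e] for e, fwd in steps)
    d = sum(delay[e] if fwd else -delay[e] for e, fwd in steps)
    return c, d

# Example: traverse u1 -> u2 over link (1, 2) (forward) and u2 -> u3 against
# link (3, 2) (backward).
steps = [((1, 2), True), ((3, 2), False)]
print(path_cost_delay(steps,
                      cost={(1, 2): 5, (3, 2): 2},
                      delay={(1, 2): 4, (3, 2): 6}))   # -> (3, -2)
```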
For simplicity, we shall refer to such directed paths simply as s-t paths. An s-t path is called feasible with respect to the delay and a specified value T if the delay of the path is at most T. Without loss of generality we assume that for every node u there is a directed path from s to u and a directed path from u to t.

General Constrained Shortest k-Disjoint Paths (GCSDP(k)) Problem: Given two nodes s and t and a positive integer T, the GCSDP(k) problem is to find a set of k (k >= 2) link-disjoint s-t paths p_1, p_2, ..., p_k such that the delay of each path p_i is at most T and the total cost of the k paths is minimum.

Constrained Shortest k-Disjoint Paths (CSDP(k)) Problem: Given two nodes s and t, and a positive integer T, the CSDP(k) problem is to find a set of k link-disjoint s-t paths p_1, p_2, ..., p_k such that the total delay of these paths is at most kT and the total cost of the k paths is minimum.

Both of the above problems can be formulated as ILP problems. Relaxing the integrality constraints we get the following relaxed versions of these problems.

RELAX-GCSDP(k): Minimize

$$\sum_{(u,v)\in E} c_{uv} \sum_{i=1}^{k} x^i_{uv} \qquad (1)$$

subject to: for i = 1, 2, ..., k and u in V,

$$\sum_{\{v\,|\,(u,v)\in E\}} x^i_{uv} - \sum_{\{v\,|\,(v,u)\in E\}} x^i_{vu} = \begin{cases} 1, & \text{for } u = s \\ -1, & \text{for } u = t \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

$$\sum_{(u,v)\in E} d_{uv}\, x^i_{uv} \le T \qquad (3)$$

$$\sum_{i=1}^{k} x^i_{uv} \le 1 \ \text{ and } \ x^i_{uv} \ge 0, \quad \forall (u,v)\in E \qquad (4)$$

The solutions to the above problem may not, in general, be integral. However, every integer solution defines a set of k link-disjoint s-t paths. In other words, an integer solution X^i = {x^i_uv} for i = 1, 2, ..., k is the flow vector corresponding to the ith path p_i, i.e., link (u, v) is on path p_i iff x^i_uv = 1.
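To make the relaxed formulation (1)-(4) concrete, the sketch below sets it up as an explicit LP and solves it with an off-the-shelf solver. The tiny network, its weights, and the bound T are hypothetical and only illustrate the model structure; they are not taken from the paper.

```python
# Sketch: RELAX-GCSDP(k) (equations (1)-(4)) as an explicit LP solved by SciPy.
# The example network below is hypothetical; only the model follows the paper.
import numpy as np
from scipy.optimize import linprog

links = [(0, 1, 1, 4), (0, 2, 2, 1), (1, 3, 1, 4), (2, 3, 2, 1), (1, 2, 1, 1)]
n, m = 4, len(links)                         # nodes 0..3, s = 0, t = 3
s, t, k, T = 0, 3, 2, 6                      # each path delay must be <= T

cost = np.array([e[2] for e in links], float)
delay = np.array([e[3] for e in links], float)
c = np.tile(cost, k)                         # objective (1) over k*m variables

# flow balance (2): one block of n rows per path index i
A_eq = np.zeros((k * n, k * m))
b_eq = np.zeros(k * n)
for i in range(k):
    for j, (u, v, _, _) in enumerate(links):
        A_eq[i * n + u, i * m + j] += 1.0
        A_eq[i * n + v, i * m + j] -= 1.0
    b_eq[i * n + s], b_eq[i * n + t] = 1.0, -1.0

# per-path delay bound (3) and link-disjointness coupling (4)
A_ub = np.zeros((k + m, k * m))
b_ub = np.concatenate([np.full(k, T, dtype=float), np.ones(m)])
for i in range(k):
    A_ub[i, i * m:(i + 1) * m] = delay       # sum_uv d_uv x^i_uv <= T
for j in range(m):
    A_ub[k + j, j::m] = 1.0                  # sum_i x^i_uv <= 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (k * m), method="highs")
print("RELAX-GCSDP(k) optimum:", res.fun)
```

On this instance the optimum is attained only by splitting the two path indices fractionally across the routes 0-1-3 and 0-2-3, which illustrates why the relaxation need not be integral.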

RELAX-CSDP(k): Minimize

$$\sum_{(u,v)\in E} c_{uv}\, x_{uv} \qquad (5)$$

subject to: for all u in V,

$$\sum_{\{v\,|\,(u,v)\in E\}} x_{uv} - \sum_{\{v\,|\,(v,u)\in E\}} x_{vu} = \begin{cases} k, & \text{for } u = s \\ -k, & \text{for } u = t \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

$$\sum_{(u,v)\in E} d_{uv}\, x_{uv} \le kT \qquad (7)$$

$$0 \le x_{uv} \le 1, \quad \forall (u,v)\in E$$

We now proceed to show that RELAX-GCSDP(k) and RELAX-CSDP(k) are equivalent in the sense that they both have optimal solutions with the same value for the objective. Let Lambda = (lambda_1, lambda_2, ..., lambda_k), lambda_i >= 0, and define

$$L_G(k, \Lambda) = \min\Big\{ \sum_{(u,v)\in E} c_{uv} \sum_{i=1}^{k} x^i_{uv} + \sum_{i=1}^{k} \lambda_i \Big( \sum_{(u,v)\in E} d_{uv}\, x^i_{uv} - T \Big) \Big\}.$$

Then the Lagrangian dual of RELAX-GCSDP(k) is as follows.

LAGRANGIAN-GCSDP(k): Maximize L_G(k, Lambda) among all Lambda >= 0, subject to: for i = 1, 2, ..., k and u in V,

$$\sum_{\{v\,|\,(u,v)\in E\}} x^i_{uv} - \sum_{\{v\,|\,(v,u)\in E\}} x^i_{vu} = \begin{cases} 1, & \text{for } u = s \\ -1, & \text{for } u = t \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$

$$\sum_{i=1}^{k} x^i_{uv} \le 1 \ \text{ and } \ x^i_{uv} \ge 0, \quad \forall (u,v)\in E \qquad (9)$$

The vector Lambda is called the Lagrangian multiplier. The above problem can be solved by finding the Lagrangian multiplier vector Lambda that maximizes L_G(k, Lambda).

Property 1: Given any Lambda = (lambda_1, ..., lambda_k), let Lambda' be obtained by permuting the components of Lambda. Then L_G(k, Lambda) = L_G(k, Lambda').

Property 2: L_G(k, Lambda) is a concave function of Lambda [18].

Property 3: There exists a Lambda with all components equal that maximizes L_G(k, Lambda).

By Property 3, L_G(k, Lambda) can be reformulated with respect to some Lambda = (lambda, ..., lambda) as follows:

$$L_G(k, \Lambda) = \min\Big\{ \sum_{(u,v)\in E} c_{uv} \sum_{i=1}^{k} x^i_{uv} + \lambda \Big( \sum_{i=1}^{k} \sum_{(u,v)\in E} d_{uv}\, x^i_{uv} - kT \Big) \Big\}. \qquad (10)$$

Let

$$x_{uv} = \sum_{i=1}^{k} x^i_{uv}, \quad \forall (u,v)\in E. \qquad (11)$$

We now define UNIFORM-LAGRANGIAN-GCSDP(k) as follows. First let

$$L(k, \lambda) = \min\Big\{ \sum_{(u,v)\in E} c_{uv}\, x_{uv} + \lambda \Big( \sum_{(u,v)\in E} d_{uv}\, x_{uv} - kT \Big) \Big\}.$$

UNIFORM-LAGRANGIAN-GCSDP(k): Maximize L(k, lambda) among all lambda >= 0 (12), subject to

$$\sum_{\{v\,|\,(u,v)\in E\}} x_{uv} - \sum_{\{v\,|\,(v,u)\in E\}} x_{vu} = \begin{cases} k, & \text{for } u = s \\ -k, & \text{for } u = t \\ 0, & \text{otherwise} \end{cases} \qquad (13)$$

$$0 \le x_{uv} \le 1, \quad \forall (u,v)\in E. \qquad (14)$$

Note that (13) is obtained by summing up the k flow balance constraints in (8) and that lambda is a scalar.

Theorem 1: UNIFORM-LAGRANGIAN-GCSDP(k) and LAGRANGIAN-GCSDP(k) have the same optimal value for the objective.

Proof: Let Lambda = (lambda, ..., lambda). We first show that L_G(k, Lambda) >= L(k, lambda). Let {x^i_uv}, i = 1, ..., k, minimize L_G(k, Lambda). Then we have

$$L_G(k, \Lambda) = \sum_{(u,v)\in E} c_{uv} \sum_{i=1}^{k} x^i_{uv} + \lambda \sum_{i=1}^{k} \Big( \sum_{(u,v)\in E} d_{uv}\, x^i_{uv} - T \Big) = \sum_{(u,v)\in E} c_{uv}\, x_{uv} + \lambda \Big( \sum_{(u,v)\in E} d_{uv}\, x_{uv} - kT \Big) \ge L(k, \lambda),$$

where x_uv is defined as in (11). It follows from the unimodularity [19] of the constraints (13)-(14) that for a given lambda, there exists an optimal integer solution to the UNIFORM-LAGRANGIAN-GCSDP(k) problem. Also an integer solution Y = {y_uv} of UNIFORM-LAGRANGIAN-GCSDP(k) that achieves the minimum in L(k, lambda) defines a set of k link-disjoint s-t paths P_k = (p_1, p_2, ..., p_k). Let X^i = {x^i_uv} be the flow vector for path p_i, i.e., x^i_uv = 1 iff (u, v) is on p_i; otherwise, x^i_uv = 0. Observe that y_uv = sum_{i=1}^{k} x^i_uv. Then

$$L(k, \lambda) = \sum_{(u,v)\in E} c_{uv}\, y_{uv} + \lambda \Big( \sum_{(u,v)\in E} d_{uv}\, y_{uv} - kT \Big) = \sum_{(u,v)\in E} c_{uv} \sum_{i=1}^{k} x^i_{uv} + \lambda \sum_{i=1}^{k} \Big( \sum_{(u,v)\in E} d_{uv}\, x^i_{uv} - T \Big) \ge L_G(k, \Lambda).$$

Hence, L(k, lambda) >= L_G(k, Lambda). So L(k, lambda) = L_G(k, Lambda). By Property 3 there exists a vector Lambda* = (lambda*, ..., lambda*) that maximizes L_G(k, Lambda). Let eta* be a maximizing multiplier for L(k, lambda) and denote H* = (eta*, ..., eta*). By the definition of Lambda* and eta*, we have

$$L_G(k, \Lambda^*) = L(k, \lambda^*) \le L(k, \eta^*) = L_G(k, H^*) \le L_G(k, \Lambda^*).$$

Hence L_G(k, Lambda*) = L(k, eta*).

The above theorem has an important implication. It shows that the optimal objective value of RELAX-GCSDP(k) can be obtained by solving UNIFORM-LAGRANGIAN-GCSDP(k). But UNIFORM-LAGRANGIAN-GCSDP(k) is the general linear programming dual of the RELAX-CSDP(k) problem (see page 183 of [20]). Thus we have the following result by the strong duality theorem [20].

Theorem 2: RELAX-GCSDP(k) and RELAX-CSDP(k) have the same optimal objective value.

The intuition behind the above result is as follows. The indistinguishability of the k path constraints represented by (3) guarantees that if P is a set of feasible paths constituting a solution to the RELAX-GCSDP(k) problem then any permutation of these paths is also a solution (Property 1). Also, in the optimal solution there is no reason for paths to be weighted differently (Property 3). As formally proved, these two properties lead to Theorem 2. Theorem 2 implies that if we are interested only in obtaining the optimal objective value of RELAX-GCSDP(k), then starting with RELAX-CSDP(k) does not result in any loss of generality. In view of this, we shall focus on RELAX-CSDP(k) in the rest of the paper.

III. G-LARAC(k) ALGORITHM: A DUAL BASED APPROACH TO RELAX-CSDP(k)

The G-LARAC(k) algorithm is a generalization of the LARAC algorithm [8], [10] that was specifically designed for the CSP problem. The G-LARAC(k) algorithm may be viewed as an algorithm for solving the RELAX-CSDP(k) problem using its Lagrangian dual, which is the same as UNIFORM-LAGRANGIAN-GCSDP(k), repeated below.

UNIFORM-LAGRANGIAN-GCSDP(k): Maximize L(k, lambda), lambda >= 0, subject to: for all u in V,

$$\sum_{\{v\,|\,(u,v)\in E\}} x_{uv} - \sum_{\{v\,|\,(v,u)\in E\}} x_{vu} = \begin{cases} k, & \text{for } u = s \\ -k, & \text{for } u = t \\ 0, & \text{otherwise} \end{cases}$$

$$0 \le x_{uv} \le 1, \quad \forall (u,v)\in E.$$

In the rest of the paper, we shall use Delta in place of kT for simplicity of writing. Given lambda, L(k, lambda) is achieved by a set of k link-disjoint paths with minimum total weight, where the weight associated with link (u, v) is given by c_uv + lambda*d_uv. The key issue is how to search for the optimal lambda that maximizes L(k, lambda) and how to determine the termination condition for the search. The G-LARAC(k) algorithm presented as Algorithm 1 is one such efficient search procedure. In this procedure the c_lambda cost of a path (also called aggregated cost) refers to the weight of the path computed using c_uv + lambda*d_uv as the weight of link (u, v).

Algorithm 1 G-LARAC(s, t, k, Delta = kT)
  {compute k link-disjoint paths with minimum total cost}
  P_c <- Disjoint(s, t, c, k)
  if (d(P_c) <= Delta) then return P_c
  {compute k link-disjoint paths with minimum total delay}
  P_d <- Disjoint(s, t, d, k)
  if (d(P_d) > Delta) then return "no solution"
  loop
    lambda <- (c(P_c) - c(P_d)) / (d(P_d) - d(P_c))
    {compute k link-disjoint paths with minimum c_lambda cost}
    R <- Disjoint(s, t, c_lambda, k)
    if (c_lambda(R) = c_lambda(P_c)) then return P_d
    else if (d(R) <= Delta) then P_d <- R
    else P_c <- R
  end loop

Basically, G-LARAC(k) performs the following steps.
1) In the first step, the algorithm calculates the minimum cost of a set of k link-disjoint s-t paths using link costs. This can be done by the algorithm in [16]. If the total delay of these paths is at most Delta, this is surely the required set of paths. Otherwise, the algorithm stores this set as the latest infeasible set, simply called the P_c set. Then it determines the minimum delay set of k link-disjoint s-t paths, called the P_d set.
If P_d is infeasible, there is no solution to this instance.
2) Set lambda = (c(P_c) - c(P_d)) / (d(P_d) - d(P_c)). With this value of lambda, we can find a set of k link-disjoint paths with minimum c_lambda-cost. Let this set be denoted as R. If c_lambda(R) = c_lambda(P_c) (= c_lambda(P_d)), we have obtained the optimal lambda. Otherwise, set R as the new P_c or P_d according to whether R is infeasible or feasible.

A detailed discussion of several issues relating to G-LARAC(k) and properties of solutions produced by G-LARAC(k) may be found in [12]. We wish to note that when the Lagrangian dual is introduced, an integrated metric is used by adding the penalty of extra delay into the cost function. However, this integration makes the new model deviate from the original problem. For example, consider k = 1. If there is a path which has a delay less than Delta, it will add a negative number into the cost function as a reward. But in the original problem, there is no such reward. One way to remedy this situation is to use the formulation min{c(p) + lambda * max{0, d(p) - Delta}}. However, if we use this formulation we cannot reduce it to the shortest path problem on c + lambda*d for fixed lambda. This sub-problem is a non-additive shortest path problem. Finding this minimum is intractable and is as difficult as the original CSDP(k) problem. In contrast to the dual approach taken by the G-LARAC(k) algorithm, our interest in the remainder of the paper is to design an approach to obtain an approximate solution to the CSDP(k) problem using the primal simplex method of linear programming.
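The following sketch mirrors the lambda search of Algorithm 1. It assumes a helper min_weight_disjoint_paths(graph, s, t, k, weight) returning a minimum total weight set of k link-disjoint s-t paths (for example, a Suurballe-style routine in the spirit of [16]); that helper name and the data representation are mine, not the paper's.

```python
# Sketch of the G-LARAC(k) lambda search (Algorithm 1), assuming a hypothetical
# helper min_weight_disjoint_paths(graph, s, t, k, weight) that returns a list
# of k link-disjoint s-t paths (each a list of links) of minimum total weight.

def total(paths, metric):
    """Sum a per-link metric over a list of paths given as link lists."""
    return sum(metric[e] for p in paths for e in p)

def g_larac(graph, s, t, k, cost, delay, bound, min_weight_disjoint_paths):
    # Step 1: minimum-cost k disjoint paths; if feasible, we are done.
    p_c = min_weight_disjoint_paths(graph, s, t, k, cost)
    if total(p_c, delay) <= bound:
        return p_c
    # Minimum-delay k disjoint paths; if infeasible, no solution exists.
    p_d = min_weight_disjoint_paths(graph, s, t, k, delay)
    if total(p_d, delay) > bound:
        return None
    # Step 2: repeatedly update lambda from the current pair (P_c, P_d).
    while True:
        lam = (total(p_c, cost) - total(p_d, cost)) / \
              (total(p_d, delay) - total(p_c, delay))
        agg = {e: cost[e] + lam * delay[e] for e in cost}   # c_lambda weights
        r = min_weight_disjoint_paths(graph, s, t, k, agg)
        if abs(total(r, agg) - total(p_c, agg)) < 1e-9:     # optimal lambda
            return p_d
        if total(r, delay) <= bound:
            p_d = r
        else:
            p_c = r
```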

IV. TRANSFORMATION OF THE RELAX-CSDP(k) PROBLEM

To achieve an efficient implementation of our approach to the RELAX-CSDP(k) problem, we consider another problem, TCSDP(k), on a transformed network defined as follows.
1) The graph of the transformed problem is the same as that of the original problem, that is, G(V, E).
2) For all (u, v) in E, the delay and cost in the transformed problem are given by 2*d_uv and c_uv, respectively.
3) The new upper bound in the transformed problem is given by 2*Delta + 1.

Let P_k denote a set of k link-disjoint s-t paths. Let c(P_k) and d(P_k) denote the total cost and the total delay of the k paths in P_k. The TCSDP(k) problem asks for a set of k link-disjoint s-t paths with minimum total cost and with total delay at most the new bound.

Theorem 3: P_k is a feasible solution (resp. an optimal solution) to the CSDP(k) problem iff it is a feasible solution (resp. an optimal solution) to the TCSDP(k) problem.

In view of this equivalence we only consider, in the rest of the paper, the RELAX-TCSDP(k) problem. Also we use Delta (being odd) and d_uv (being even) to denote the delay bound and link delay in the TCSDP(k) problem. Notice that the transformation does not change the costs of paths. We conclude this section by defining some terminology and presenting the RELAX-TCSDP(k) problem in matrix form. Let the links be labeled as e_1, e_2, ..., e_m and the nodes be labeled as 1, 2, ..., n. We shall denote the delay of each link e_i as d_i and the cost of e_i as c_i. The incidence matrix of G has m columns, one for each link, and n rows, one for each node [21]. The rank of this matrix is (n - 1), and removing any row of this matrix will result in a matrix of rank (n - 1). We denote this resulting matrix of rank (n - 1) as H. We also assume that the row removed from the incidence matrix corresponds to node n. We denote the column of H corresponding to e_k by the vector h_k. For e_k = (i, j), i, j != n, we have h_k = (h_{1,k}, ..., h_{i,k}, ..., h_{j,k}, ..., h_{n-1,k})^t with all its components being 0 except for h_{i,k} = 1 and h_{j,k} = -1. Also for e_k = (i, n), h_{i,k} = 1, and for e_k = (n, j), h_{j,k} = -1, with all the remaining components being 0. Let D = (d_1, d_2, ..., d_m) and

$$A = \begin{pmatrix} H & 0 \\ -D & -1 \end{pmatrix} = (a_1, a_2, \ldots, a_m, a_{m+1}), \qquad (15)$$

$$a_i = \begin{pmatrix} h_i \\ -d_i \end{pmatrix}, \ 1 \le i \le m, \quad \text{and} \quad a_{m+1} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}. \qquad (16)$$

Let x be the column vector of the m flow variables x_uv and the slack variable w corresponding to the delay constraint (7), and let c be the row vector (c_1, c_2, ..., c_m, 0) of the costs. Note that the cost of the slack variable is 0. Then the RELAX-TCSDP(k) problem (see (5)-(7)) can be written in matrix form as follows. Note that to conform to the standard form for a minimization problem we have used the >= form of (7) and added a slack variable w, i.e., -sum_{(u,v) in E} d_uv x_uv - w = -Delta.

RELAX-TCSDP(k): Minimize cx subject to

$$Ax = b \qquad (17)$$

$$0 \le x_{uv} \le 1, \ \forall (u,v)\in E, \ \text{ and } \ w \ge 0,$$

where w is the slack variable and b = (b_1, ..., b_{n-1}, -Delta)^t with b_s = k, b_t = -k, and b_i = 0 for i != s, t.

We note that the above problem is almost the same as the minimum cost flow problem except for the additional delay constraint. The rest of the paper is concerned with the simplex method based solution to RELAX-TCSDP(k) and several issues relating to its efficient implementation. We want to emphasize that most of these properties hold only with the transformation, and we shall use * to denote those properties that also hold without the transformation. The cost of the optimal solution to RELAX-TCSDP(k) will be a lower bound on the optimal cost of the original CSDP(k) problem. We will show in the next section how to extract an approximate solution to the TCSDP(k) (hence the CSDP(k)) problem from an optimal solution to the RELAX-TCSDP(k) problem.
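A minimal sketch of the Section IV transformation follows; the function and variable names are mine. The point of the odd bound is that every path delay in the transformed network is even, so the total delay of an integral solution can never equal the bound exactly.

```python
# Sketch of the TCSDP(k) transformation: delays are doubled, costs are
# unchanged, and the bound Delta = kT becomes 2*Delta + 1 (an odd number).
# Names are illustrative.

def transform_to_tcsdp(cost, delay, delta):
    """cost, delay: dicts keyed by link (u, v); delta: original bound kT."""
    t_cost = dict(cost)                              # costs unchanged
    t_delay = {e: 2 * d for e, d in delay.items()}   # every delay becomes even
    t_delta = 2 * delta + 1                          # new (odd) delay bound
    return t_cost, t_delay, t_delta
```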
V. PROPERTIES OF BASIC SOLUTIONS OF RELAX-TCSDP(k) AND GENERATION OF AN APPROXIMATE SOLUTION TO CSDP(k)

The simplex method of linear programming starts with a basic solution and proceeds by constructing one basic solution from another. A basic solution consists of two sets of variables, basic and nonbasic. For the RELAX-TCSDP(k) problem under consideration, all the nonbasic variables in a basic solution will be 0 or 1 [22]. Note that the value of the slack variable, when it is nonbasic, must be equal to 0 because it does not have an upper bound. Given a basic solution, we shall denote the subgraph of G corresponding to the basic variables (except the slack variable if it is in the basic solution) in this solution by G_B. The subgraph G_B will be called the subgraph of the basic solution or simply the basis graph. The nonsingular submatrix of A defined by the basic variables is called a basis matrix or, simply, a basis, and is denoted as B. The rest of the matrix, corresponding to the nonbasic variables, is called the nonbasic matrix. In this section we present certain important properties of the basic solutions of the RELAX-TCSDP(k) problem.

Lemma 1: * Let G(V, E) be a directed network with at least one cycle W (not necessarily directed). Assigning an arbitrary orientation to W, let U = (u_1, u_2, u_3, ..., u_m)^t, where

$$u_j = \begin{cases} 1, & \text{for } e_j \in W \text{ with the orientation of } e_j \text{ agreeing with the orientation of } W, \\ -1, & \text{for } e_j \in W \text{ with the orientation of } e_j \text{ disagreeing with the orientation of } W, \\ 0, & \text{otherwise.} \end{cases}$$

Then HU = 0 [21].

We shall use U(W) to denote the vector derived from cycle W as in the above lemma. We shall denote by d(W) the signed algebraic sum of the delays of the links on a cycle W as we traverse around the cycle.
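The following is a small numerical illustration of Lemma 1 on a hypothetical four-node network: it builds the reduced incidence matrix H (row of node n removed) and the cycle vector U(W) of a non-directed cycle and checks that H U(W) = 0.

```python
# Numerical check of Lemma 1 on a small hypothetical network.
import numpy as np

# nodes 1..4 (node 4 plays the role of node n); links e1..e5
links = [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)]
n, m = 4, len(links)

H = np.zeros((n - 1, m))          # rows for nodes 1..n-1 only
for j, (u, v) in enumerate(links):
    if u != n:
        H[u - 1, j] = 1.0         # +1 at the tail
    if v != n:
        H[v - 1, j] = -1.0        # -1 at the head

# Cycle W = {e1, e2, e3} traversed as 1 -> 2 -> 3 -> 1:
# e1 = (1, 2) and e2 = (2, 3) agree with the traversal, e3 = (1, 3) disagrees.
U = np.array([1.0, 1.0, -1.0, 0.0, 0.0])

print(H @ U)   # prints the zero vector, as Lemma 1 asserts
```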

Fig. 1. Structure of basic solutions: (a) tree basic solution; (b) basic solution with a cycle.

Lemma 2: * Every basis matrix contains the last row of A.
Proof: This follows from the fact that rank(H) = n - 1 < n.

Lemma 3: * The subgraph G_B of a basic solution contains at most one cycle (see Fig. 1).

Lemma 4: * If there is a cycle W in a basic solution, then d(W) != 0.

Lemma 5: If there is no cycle in a basic solution, then for all (u, v) in E, x_uv = 0 or 1. If there is a cycle W in a basic solution, then for all (u, v) in W, 0 < x_uv < 1, and for all (u, v) in E - W, x_uv = 0 or 1.

Proof: Let B = (b_1, b_2, ..., b_n), A_N, x_B and x_N denote the basis matrix, nonbasic matrix, column vector of basic variables, and column vector of nonbasic variables in the basic solution, respectively. Let b' = b - A_N x_N; then we have B x_B = b'. Since all the components of A_N x_N and b are integers, so are all the components of b'. By Cramer's rule, we have x_i = det(B_i)/det(B), where B_i = (b_1, ..., b_{i-1}, b', b_{i+1}, ..., b_n). We first show that x_i is an integer if the corresponding link is not on the cycle. We consider two cases.

Case 1: There is no cycle in the basic solution. Thus the slack variable is a basic variable. Also, G_B is a spanning tree. Let the nth column in the basis B correspond to the slack variable. Then b_n = (0, ..., 0, -1)^t and so B has the following form:

$$B = \begin{pmatrix} H_1 & 0 \\ -D_1 & -1 \end{pmatrix},$$

where H_1 is the incidence matrix of G_B and D_1 is the corresponding delay vector. Since H_1 is the incidence matrix of a spanning tree, it follows that det(H_1) = +1 or -1. So det(B) = +1 or -1. Also, det(B_i) is an integer because all the components of B_i are integers. So x_i is an integer for all i.

Case 2: There is a cycle W in the basic solution. That is, the slack variable is not in the basis. Let l = |W|, i.e., l is the number of links in W.

Fig. 2. Branching and merging nodes.

In this case, we first show that the flow on any link i not on the cycle is an integer. Without loss of generality, let

$$B = \begin{pmatrix} H_W & H_1 \\ -D_W & -D_1 \end{pmatrix} \quad \text{and} \quad B_i = \begin{pmatrix} H_W & H_i \\ -D_W & -D_i \end{pmatrix},$$

where the columns of H_W = (h_1, h_2, ..., h_l) correspond to the links on the cycle W and the components of D_W are the delays of these links. Note that H_i contains the column vector b'. Let H'_W = (h_2, ..., h_l) and D'_W = (d_2, ..., d_l). Defining the direction of the link h_1 as the orientation of the cycle W, we get by Lemma 1 that H_W U(W) = 0. Using elementary column operations (adding to the first column the combination of the remaining cycle columns given by U(W)), det(B) and det(B_i) can be written as

$$\det(B) = \det\begin{pmatrix} 0 & H'_W & H_1 \\ -D_W U(W) & -D'_W & -D_1 \end{pmatrix} = (-1)^{n+1}\,(-D_W U(W))\,\det(H'_W\ H_1),$$

and

$$\det(B_i) = \det\begin{pmatrix} 0 & H'_W & H_i \\ -D_W U(W) & -D'_W & -D_i \end{pmatrix} = (-1)^{n+1}\,(-D_W U(W))\,\det(H'_W\ H_i).$$

Hence x_i = det(H'_W H_i)/det(H'_W H_1). Since all the components of the matrix (H'_W H_i) are integers, det(H'_W H_i) is also an integer. The denominator is equal to +1 or -1 because (H'_W H_1) is the incidence matrix of the spanning tree obtained by removing link e_1 from G_B. So it follows that x_i is an integer. Hence x_uv = 0 or 1 because 0 <= x_uv <= 1.

We next show that if the basis graph contains a cycle, then the flow on each link on the cycle W is less than 1 and greater than 0. Assuming the contrary, we establish a contradiction. First recall that the flow on each link that is not in G_B (that is, each nonbasic variable) is either 0 or 1. If the flow on any link on W is an integer (0 or 1), then it follows from the flow balance constraints that all the flows on the links on W will be integers. But this would mean that in the current basic solution the total delay over all the links is an even integer. This violates the requirement that the total delay must be equal to Delta, which is odd.

Definition 1: (a) On a directed cycle, a node is called a branching (resp.
merging) node if it is the tail (resp. head) of two links on the cycle (see Fig. 2). A segment of the cycle is the set of all the links on the cycle between two consecutive branching and merging nodes. A segment consists of consecutive links with the same direction, and the direction of a segment is defined as the direction of its links.

(b) For a subgraph G_s of G, let d(G_s) = sum_{(u,v) in G_s} d_uv and d_x(G_s) = sum_{(u,v) in G_s} x_uv d_uv with respect to the flow vector x.

Lemma 6: Suppose the basis graph G_B contains a cycle W. Let e = (u, v) be a link of W with x_uv = theta (0 < theta < 1). Define the direction of e as the orientation of W. Then for any link e' = (i, j) of W, x_ij = theta if the direction of e' agrees with the orientation of W and x_ij = 1 - theta otherwise (see Fig. 2).
Proof: This follows from the flow balance constraints and the fact that the nonbasic variables corresponding to links have value 0 or 1.

Theorem 4: Let x be an optimal solution to RELAX-TCSDP(k) with G_B as the corresponding basis graph.
(a) If G_B contains no cycle, then G_B contains a set of k link-disjoint s-t paths P_k with d(P_k) < Delta, where Delta is the delay constraint in the TCSDP(k) problem. These paths constitute an optimal solution to the original CSDP(k) problem.
(b) Suppose that G_B contains a cycle W, and let G'(V', E') be obtained from G(V, E) such that V' = V and E' = {(u, v) in E | x_uv > 0}. Then G' contains a set of k link-disjoint s-t paths P_k with d(P_k) < Delta, and a set of k link-disjoint s-t paths Q_k with d(Q_k) > Delta, such that c(Q_k) <= OPT <= c(P*) <= c(P_k), where OPT is the optimal objective value of the RELAX-TCSDP(k) problem and P* is the optimal integer solution to the TCSDP(k) problem (equivalently, the optimal solution to the CSDP(k) problem). Also

$$\frac{c(P_k)}{c(P^*)} \le 1 + \frac{(1-\theta)\,(c(P_k) - c(Q_k))}{\theta\, c(P^*)} \quad \text{and} \quad \frac{d(Q_k)}{\Delta} \le 1 + \frac{\theta\,(\Delta - d(P_k))}{(1-\theta)\,\Delta},$$

where theta is as defined in Lemma 6 with the orientation of the cycle W selected so that d(W) < 0.

Proof: (a) If there is no cycle in the optimal basis graph G_B, then all the link flows will be integers and the flow vector can be decomposed into unit flows along k link-disjoint s-t paths. The total delay of these paths will be even and hence less than Delta (because Delta is odd). These integral flows form an optimal integer solution to the RELAX-TCSDP(k) problem and hence an optimal solution to the original TCSDP(k) problem. By Theorem 3 this is also an optimal solution to the original CSDP(k) problem.

(b) Assume that the basis graph contains a cycle W. By Lemma 5, the flows on the links of W are nonzero and thus G' contains W. Obviously, OPT <= c(P*). Also note that d_x(G') = Delta because the slack variable is nonbasic and thus its value in the current basic solution is 0. Define the orientation of W such that d(W) < 0. Now push flow along the orientation of W until some link's flow reaches 0 or 1 (see Fig. 3(b)). By Lemma 6, all the resultant link flows will be either 0 or 1. Remove all the links with zero flow from G' and let G_z denote the resultant network. Evidently, the flows on all links in G_z are 1 and d(G_z) < d_x(G') = Delta (because d(W) < 0, the network delay is reduced when we push the flow along the orientation of W). It can also be seen that c(W) >= 0, for otherwise the cost of the new flow would be less than the cost of the flow defined by G'. Notice that the above operation does not change the amount of flow from s to t. Since the total flow from s to t in G_z is k and the flows on all links in G_z are 1, there must be k link-disjoint s-t paths P_k with d(P_k) = d(G_z) < Delta.

Fig. 3. Illustrations for the proof of Theorem 4: (a) the optimal basic solution; (b) push flow along the direction of the arrow to obtain P_k (W_P consists of the two solid links on W; P_k - W_P consists of the remaining solid links); (c) push flow along the direction of the arrow to obtain Q_k (W_Q consists of the two solid links on W; Q_k - W_Q consists of the remaining solid links).

Similarly, we can obtain Q_k (see Fig. 3(c)). Here, the flow is pushed along W in the reverse direction.
Along this direction, d(w ) > and so d(q k ) >. Since c(w ) < along this direction, c(q k ) OP T c(p ) c(p k ). In the rest of the proof, we use P k, Q k and W to denote the corresponding set of links. Let W P = P k W and W Q = Q k W, i.e., W P (resp. W Q ) is the set of links on oth the cycle W and P k (resp. Q k ). Evidently, W P W Q = and P k W P = Q k W Q (See Fig. 3. () and (c)). Then we have c(p ) OP T = c(p k W P ) + c(w P ) + ( )c(w Q ) = c(p k W P ) + ( )c(q k W Q ) + c(w P ) + ( )c(w Q ) = (c(p k W P ) + c(w P )) + ( )(c(q k W Q ) + c(w Q )) = c(p k ) + ( )c(q k ). The second equality holds ecause P k W P = Q k W Q. t t t

Because theta*c(P_k) + (1 - theta)*c(Q_k) = OPT <= c(P*) <= c(P_k), we have

$$\frac{c(P_k)}{c(P^*)} \le \frac{c(P^*) - (1-\theta)\,c(Q_k)}{\theta\, c(P^*)} = 1 + \frac{(1-\theta)\,(c(P^*) - c(Q_k))}{\theta\, c(P^*)} \le 1 + \frac{(1-\theta)\,(c(P_k) - c(Q_k))}{\theta\, c(P^*)}.$$

Similarly, using theta*d(P_k) + (1 - theta)*d(Q_k) = Delta, we can show that d(Q_k)/Delta <= 1 + theta*(Delta - d(P_k))/((1 - theta)*Delta). Notice that all the individual paths in the above theorem can be obtained using flow decomposition [19].

VI. REVISED SIMPLEX METHOD FOR THE RELAX-TCSDP(k) PROBLEM

In this section, we first briefly present the different steps of the revised simplex method of linear programming. A detailed description of this method may be found in Chapter 8 of [22] (note that in [22] the revised simplex method is presented for a maximization problem). We then derive the formulas required to identify the entering and the leaving variables that are needed to generate a new basic solution from a given basic solution.

A. Revised Simplex Method

Consider the following linear programming problem: Minimize cx subject to Ax = b, l <= x <= u. For the RELAX-TCSDP(k) problem, A is an n x (m + 1) matrix with rank(A) = n, x = (x_1, ..., x_{m+1})^t, c = (c_1, c_2, ..., c_{m+1}), and b = (b_1, b_2, ..., b_n)^t. Each feasible basic solution x of this linear program is partitioned into two sets, one set consisting of the n basic variables and the other set consisting of the remaining m + 1 - n nonbasic variables. This partition induces a partition of A into B and A_N, a partition of x into x_B and x_N, and a partition of c into c_B and c_N, corresponding to the set of basic variables and the set of nonbasic variables, respectively. The matrix B is the basis matrix and is nonsingular. See Sections IV and V for the form of the basis matrix and the properties of basic solutions for the RELAX-TCSDP(k) problem.

Revised Simplex Method [22]
1) Solve the system YB = c_B, where Y = (y_1, y_2, ..., y_n).
2) Choose an entering variable x_j. This may be any nonbasic variable x_j such that, with a standing for the corresponding column of A, we have either Ya > c_j and x_j < u_j, or Ya < c_j and x_j > l_j. If there is no such variable, then stop; the current solution x is optimal.
3) Solve the system BV = a, where V = (v_1, v_2, ..., v_n)^t.
4) Define x_j(t) = x_j + t and x_B(t) = x_B - tV in case Ya > c_j, and x_j(t) = x_j - t and x_B(t) = x_B + tV in case Ya < c_j. If the constraints l_j <= x_j(t) <= u_j and l_B <= x_B(t) <= u_B are satisfied for all positive t, then the problem is unbounded. Otherwise, set t to the largest value allowed by these constraints. If the upper bound imposed on t by the constraints l_B <= x_B(t) <= u_B is stricter than the upper bound imposed by l_j <= x_j(t) <= u_j, then determine the leaving variable. This may be any basic variable x_i such that the upper bound imposed on t by l_i <= x_i(t) <= u_i alone is as strict as the upper bound imposed by all the constraints l_B <= x_B(t) <= u_B.
5) Replace x_j by x_j(t) and x_B by x_B(t). If the value of the entering variable x_j has just switched from one of its bounds to the other, then proceed directly to Step 2 of the next iteration. Otherwise, replace the leaving variable x_i by the entering variable x_j in the basis heading, and replace the leaving column of B by column a.

Steps 2-5 of the revised simplex method, which generate a new basic solution from a given basic solution, are called a pivot. In the following we solve the systems of equations in Steps 1 and 3 and derive explicit formulas for Y and V.
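Before turning to the formulas for Y and V, here is a small sketch of the flow decomposition step mentioned after Theorem 4: given a 0/1 integer s-t flow of value k whose support is a union of link-disjoint simple paths (such as the flows defining P_k or Q_k), it peels off one s-t path at a time. The function name and data representation are mine, and the routine assumes the stated structure of the input flow.

```python
# Sketch of flow decomposition for a 0/1 s-t flow of value k whose support is a
# union of k link-disjoint simple s-t paths. Names are illustrative.

def decompose_into_paths(flow, s, t, k):
    """flow: dict {(u, v): 0 or 1}; returns k link-disjoint s-t paths."""
    remaining = {e for e, f in flow.items() if f == 1}
    out = {}                               # adjacency over links carrying flow
    for (u, v) in remaining:
        out.setdefault(u, []).append((u, v))
    paths = []
    for _ in range(k):
        path, node = [], s
        while node != t:
            e = next(link for link in out[node] if link in remaining)
            remaining.discard(e)           # each link is used by one path only
            path.append(e)
            node = e[1]
        paths.append(path)
    return paths
```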
The potential of node n is not in Y, and may e set to zero as we will see elow. Now consider Y B = c B. (8) This system of equations has n equations in n variales. We get the following from (8). For each link e k = (i, j) in G, (y..., y n, γ)a k = c ij. That is, y i y j γd ij = c ij, for i n and j n, y i γd in = c in, for j = n, (9) y j γd nj = c nj, for i = n. The last two equations in (9) can e otained y setting the potential y n of node n to zero in the first equation, namely, y i y j γd ij = c ij. Thus in all the computations that follow we set y n =. Definition 2: ) For link e k = (i, j), c(e k, γ) = γd ij + c ij is called the active cost of link (i, j). 2) r(i, j) = y j y i + γd ij + c ij is called the reduced cost of link (i, j). 3) The reduced cost of the slack variale w is given y r(w) = γ. (Note: Since the column corresponding to the slack variale w is (,..., ) t, we can get the reduced cost of w from the equation in 2) y setting y i = y j =, c ij =, and d ij =. 4) The reduced cost of a path p is defined as r(p) = (i,j) p + r(i, j) (i,j) p r(i, j). Recall that p + and p denote the sets of forward and ackward links on p, respectively. 5) Node n is called the root node. It can e seen from (9) that for any link (i, j) in G r(i, j) = y j y i + γd ij + c ij =. (2)

From (20) we also have that for any path p from i to j and any cycle W in G_B,

$$r(p) = y_j - y_i + \gamma d(p) + c(p) = 0, \quad \text{and} \quad r(W) = \gamma d(W) + c(W) = 0. \qquad (21)$$

Lemma 7: * If G_B contains a cycle W, then gamma = -c(W)/d(W); otherwise, gamma = 0.

Once we have computed the value of gamma, the other potentials y_i can be calculated using (21) by selecting a path in G_B from node n (whose potential is 0) to node i. We summarize these steps as follows:
1) Set the potential of node n to zero.
2) Compute gamma as in Lemma 7.
3) For each node i, let p_i be a path in G_B from node n to node i. If there are two such paths in G_B due to the cycle, we can derive the same results no matter which one is selected.
4) Set y_i = -sum_{(u,v) in p_i^+} c(e_uv, gamma) + sum_{(u,v) in p_i^-} c(e_uv, gamma), where p_i^+ and p_i^- are the sets of forward and backward links on p_i, respectively, as we traverse the path from node n to node i.

C. Solving the System BV = a_k

We now show how to solve the system of equations BV = a_k, where a_k is the column of A corresponding to the entering variable. In the following, an entering link is called an in-arc and a leaving link is called an out-arc. We consider three cases: (a) the basis graph G_B contains only n - 1 links, i.e., there is no cycle in G_B and the slack variable w is a basic variable, and some link e_k = (i, j) is the entering variable; (b) the basic variables are associated with n links and the entering variable is a link e_k = (i, j); (c) the basic variables are associated with n links and the entering variable is w. The results in all three cases are summarized in the following theorem.

Theorem 5: *
a) If G_B contains no cycle and the entering variable is an in-arc e_k = (i, j), then the vector V = (v_1, ..., v_n)^t defined below is the desired solution to BV = a_k, where W is the new cycle formed by adding the in-arc e_k and the orientation of W is chosen to be the same as the direction of e_k:

$$v_i = \begin{cases} -1, & i < n \text{ and the link of the } i\text{th column of } B \text{ is on } W \text{ with its orientation agreeing with the cycle orientation,} \\ 1, & i < n \text{ and the link of the } i\text{th column of } B \text{ is on } W \text{ with its orientation disagreeing with the cycle orientation,} \\ d(W), & i = n, \\ 0, & \text{otherwise.} \end{cases}$$

b) If G_B contains a cycle W' and the entering variable is a link e_k = (i, j), then

$$V = -\bar{V}_p + \frac{d(W)}{d(W')}\,\bar{V}'$$

is the solution to BV = a_k, where d(W) and d(W') are the delays of the cycles W and W', respectively, and V_p-bar and V'-bar are defined by the cycles W and W', respectively (see Lemma 1).

c) If G_B contains a cycle W' and the entering variable is the slack variable w, then

$$V = \frac{\bar{V}'}{d(W')}$$

is the solution to BV = a_k, where V'-bar is defined by the cycle W' (see Lemma 1).

Proof: Case a): G_B contains only n - 1 links, i.e., there is no cycle in G_B, the slack variable w is a basic variable, and the link e_k = (i, j) is the entering variable. In this case,

$$B = \begin{pmatrix} H_{n-1,n-1} & 0 \\ -D_{1,n-1} & -1 \end{pmatrix},$$

where H_{n-1,n-1} is associated with the (n - 1) links in G_B and n - 1 nodes, and D_{1,n-1} is the vector of the (n - 1) components (corresponding to the basic variables except for w) of the last row of the matrix A. Let W denote the new cycle formed by adding the in-arc e_k = (i, j), and let the orientation of W be chosen to be the same as the orientation of the in-arc. By Lemma 1, it is easy to verify that the vector V = (v_1, ..., v_n)^t defined as in the theorem solves the system BV = a_k.

Case b): The basic variables are associated with n links and the entering variable is e_k = (i, j). In this case,

$$B = \begin{pmatrix} H_{n-1,n} \\ -D_{1,n} \end{pmatrix},$$

where H_{n-1,n} is associated with the n links and n - 1 nodes, and D_{1,n} is the vector of the n components of the last row of A corresponding to these n links, and

$$a_k = \begin{pmatrix} h_k \\ -d_{ij} \end{pmatrix}.$$
We need to solve the system of equations

$$\begin{pmatrix} H_{n-1,n} \\ -D_{1,n} \end{pmatrix} V = \begin{pmatrix} h_k \\ -d_{ij} \end{pmatrix}. \qquad (22)$$

First, let us consider

$$H_{n-1,n} V = h_k. \qquad (23)$$

Because there are n links in G_B, there is exactly one cycle, denoted by W'. Therefore, according to Lemma 1, there is a vector V'-bar (the restriction of U(W') to the basic links) with

$$H_{n-1,n} \bar{V}' = 0. \qquad (24)$$

After adding the link e_k = (i, j), we get a new cycle W, and we choose the orientation of this cycle to be the same as that of e_k. Then by Lemma 1,

$$(H_{n-1,n},\ h_k) \begin{pmatrix} \bar{V}_p \\ 1 \end{pmatrix} = 0, \qquad (25)$$

where V_p-bar consists of the entries of U(W) corresponding to the n basic links. So H_{n-1,n}(-V_p-bar) = h_k. Because rank(H_{n-1,n}) = n - 1, the set {-V_p-bar + u V'-bar | u real} is the solution space of (23). We can compute u as follows:

$$-D_{1,n}\big(-\bar{V}_p + u\bar{V}'\big) = -d_{ij}. \qquad (26)$$

Since D_{1,n} V'-bar = d(W') and D_{1,n} V_p-bar + d_ij = d(W), we get from (26) that u d(W') - d(W) = 0 and hence u = d(W)/d(W'). Therefore we have proven that V = -V_p-bar + (d(W)/d(W')) V'-bar is the desired solution to BV = a_k.

Case c): The basic variables are associated with n links and the entering variable is the slack variable w. Following the arguments in Case (b), we can show that V = V'-bar / d(W') is the solution to BV = a_k. Here V'-bar is defined by the cycle W' in G_B.

VII. INITIALIZATION AND PIVOT RULES

A. Initialization

We first compute k minimum delay link-disjoint s-t paths using Suurballe's algorithm [16]. There is no feasible solution if the total delay of these paths is greater than Delta. Assume that this is not the case. A tree T' (not necessarily a spanning tree) rooted at t can be constructed from these paths by removing links incident with s to break cycles. Note that in T' every path from a node in T' to t is a directed path. Such a tree is called a directed tree rooted at node t [21]. We next obtain a directed spanning tree rooted at t and having T' as a subtree. We proceed as follows. First, condense or coalesce all the nodes in T' into a single node P. Then, for the resulting network, determine a directed spanning tree rooted at P with all links oriented away from P. Such a tree exists because of our assumption that there is a directed path from node s to each node in the network and, similarly, a directed path from each node to node t. The links of the directed tree selected as above and the links in T' together constitute a directed spanning tree T. Assigning a flow of 1 to all the links on the disjoint paths and a flow of 0 to all other links, we obtain a basic solution represented by T.

Definition 3: [19] Given a feasible basic solution subgraph G_B, we say that the link (u, v) of G_B is oriented toward (resp. away from) the root if any of the paths in G_B from the root to u (resp. v) passes through v (resp. u). A feasible basic solution G_B with corresponding flow vector x is said to be strongly feasible if every link (u, v) of G_B with x_uv = 0 (resp. x_uv = 1) is oriented away from (resp. toward) the root.

It can be easily verified that the initial spanning tree T selected as above is strongly feasible.

B. Pivot Rules and an Anti-Cycling Strategy

For an efficient implementation of the revised simplex method, we want to avoid directed cycles in basic solutions. This can be achieved by the following pivot rule:

P1: The slack variable w assumes the highest priority in choosing the entering variable (Step 2 of the Revised Simplex Method).

Lemma 8: The slack variable w is eligible to enter the basis iff gamma < 0.
Proof: Since the reduced cost of w is equal to gamma, w is eligible to enter the basis iff gamma < 0.

Lemma 9: Suppose the pivot rule P1 is followed. If a directed cycle W' is created in G_B during a pivot, then in the next pivot the slack variable w will enter the basis and a link on W' will leave the basis.
Proof: Since W' is a directed cycle, c(W') > 0 and d(W') > 0, so gamma = -c(W')/d(W') < 0. It follows that in the pivot after the directed cycle is created, w will enter the basis and the new basis graph will be a spanning tree.

A basic solution in which one or more basic variables assume zero values is called degenerate [22]. Simplex pivots that do not alter the basic solution are called degenerate. Furthermore, a basic solution generated at one pivot and reappearing at another will lead to cycling (or infinite looping and non-convergence). Thus we need a strategy to avoid cycling. There are several anti-cycling strategies for general linear programming problems. Cunningham developed a strategy specifically designed for the network simplex method used for solving minimum cost flow problems.
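As a small illustration of Definition 3, the sketch below checks strong feasibility of a spanning-tree basis stored as parent pointers toward the root t; the representation and function name are mine, not the paper's.

```python
# Sketch of the strong feasibility test of Definition 3 for a spanning-tree
# basis stored as parent pointers toward the root: a tree link (u, v) is
# oriented toward the root if v is the parent of u, and away from the root if
# u is the parent of v. Names and representation are illustrative.

def is_strongly_feasible(parent, flow, root):
    """parent: dict child -> parent on the basis tree; flow: dict over tree
    links (u, v) with values in [0, 1]; fractional links are unconstrained."""
    for (u, v), x in flow.items():
        if parent.get(u) == v:          # (u, v) is oriented toward the root
            toward_root = True
        elif parent.get(v) == u:        # (u, v) is oriented away from the root
            toward_root = False
        else:
            continue                    # not a tree link of this basis
        if x == 0 and toward_root:
            return False                # zero-flow links must point away
        if x == 1 and not toward_root:
            return False                # saturated links must point toward root
    return True
```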
Since RELAX-TCSDP(k) has almost the same structure as the minimum cost flow problem, except for the presence of one additional constraint imposed by the delay requirement, we examine whether Cunningham's strategy can be adopted for RELAX-TCSDP(k). We show next that the transformation introduced on the CSDP(k) problem in Section IV indeed makes Cunningham's strategy suitable for avoiding cycling in the case of RELAX-TCSDP(k).

Lemma 10: For any degenerate pivot, the out-arc is not on the cycle of the current G_B.
Proof: A degenerate pivot does not alter the basic solution. This means that each variable has the same value in the current basic solution as well as in the basic solution that results from the degenerate pivot. Consider now the flows on the links of a cycle. By Lemma 5 these flows are not 0 or 1. So if a link on a cycle were to leave the basis during a degenerate pivot, then after the pivot it would become nonbasic with a flow of value 0 or 1. But that would contradict the fact that the current pivot is degenerate.

If the out-arc is not on the cycle in the current G_B, then the potentials can be updated easily as described next (Chapter 5.1.2 of [23]). Let T be the current G_B, and let e = (u, v) and e' = (u', v') be the out-arc and the in-arc, respectively. Let T* = T - e + e' be the subgraph of the new basic variables. If e is not on the cycle in the current G_B, then the new potential vector Y' associated with T* can be obtained as follows [23]:

$$y'_{w} = \begin{cases} y_{w} + r_{u'v'}, & \text{for } w \in T_{u'}, \\ y_{w}, & \text{for } w \in T_{v'}, \end{cases} \qquad (27)$$

where r_{u'v'} = c(e_{u'v'}, gamma) + y_{v'} - y_{u'} and T_{u'} (resp. T_{v'}) is the component of T - e containing u' (resp. v').

The convergence part of the following theorem closely follows the proof of Theorem 19.1 in [22].

Theorem 6: If the subgraphs G_B of the feasible basic solutions generated by the simplex method are strongly feasible, then the simplex method does not cycle and its computational time complexity is pseudo-polynomial.

Proof: First observe that in any sequence of degenerate pivots, the value of every variable, in particular the value of the slack variable, remains the same. Also, if the slack variable is a basic variable then its value is nonzero; otherwise its value is zero. So during a given sequence of degenerate pivots, the slack variable will remain basic or nonbasic for the entire sequence of degenerate pivots. So the leaving and entering variables can only be links of the network.

Let G_B be a feasible basic solution subgraph and let t be the root. We define two values for G_B:

$$C(G_B) = \sum_{(u,v)\in E} c_{uv}\, x_{uv} \quad \text{and} \quad W(G_B) = \sum_{u\in V} (y_t - y_u).$$

Consider two consecutive basic solutions G_B^i and G_B^{i+1} with G_B^{i+1} = G_B^i + e - f, where e and f are the in-arc and the out-arc, respectively. We first show that either C(G_B^{i+1}) < C(G_B^i) or W(G_B^{i+1}) < W(G_B^i). Indeed, if the pivot that generates G_B^{i+1} from G_B^i is nondegenerate, then C(G_B^{i+1}) < C(G_B^i). If it is degenerate, we have C(G_B^{i+1}) = C(G_B^i). In this case we need to show that W(G_B^{i+1}) < W(G_B^i). Note that the in-arc e = (u, v) still has flow equal to 0 or 1 in G_B^{i+1}. By Lemma 10, f is not a link on the cycle in G_B^i. So the value of gamma does not change. Because G_B^{i+1} is strongly feasible, in G_B^{i+1} the link e must be oriented toward the root node t if x_e = 1 and oriented away from t if x_e = 0, which implies that node t belongs to G_B^i(v) (the component of G_B^i - f containing v) if x_e = 1 and node t belongs to G_B^i(u) if x_e = 0. The potentials with respect to G_B^{i+1} can be calculated using (27). Then we have W(G_B^{i+1}) = W(G_B^i) - |G_B^i(u)| r_uv < W(G_B^i), where r_uv = c(e_uv, gamma) + y_v - y_u > 0, if x_e = 1, or W(G_B^{i+1}) = W(G_B^i) + |G_B^i(v)| r_uv < W(G_B^i), where r_uv = c(e_uv, gamma) + y_v - y_u < 0, if x_e = 0.

If the simplex method cycles, then for some i < j, G_B^i = G_B^j. This means C(G_B^i) = C(G_B^{i+1}) = ... = C(G_B^j). But then W(G_B^i) > W(G_B^{i+1}) > ... > W(G_B^j) = W(G_B^i), contradicting W(G_B^i) = W(G_B^j). Thus we have proved that the simplex method, when applied on RELAX-TCSDP(k), does not cycle if all the basic feasible solutions are strongly feasible.

We next establish the pseudo-polynomial time complexity of this method. We have

$$\big|W(G_B^i) - W(G_B^{i+1})\big| = |G_B^i(\cdot)|\,|r_{uv}|, \qquad r_{uv} = c(e_{uv}, \gamma) + y_v - y_u = c_{uv} + \gamma d_{uv} + y_v - y_u.$$

We proceed to show that

$$0 < |r_{uv}| = |\gamma d(W) + c(W)| = \begin{cases} |c(W)|, & \gamma = 0, \\ |c(W)d(W') - d(W)c(W')| \,/\, |d(W')|, & \gamma \ne 0. \end{cases} \qquad (28)$$

Since e_uv is an in-arc, r_uv = y_v - y_u + gamma*d_uv + c_uv != 0. To establish the equalities on the right hand side of (28), suppose that the new cycle W in G_B is e_1 e_2 ... e_k, where e_1 = e_uv. Since all the links on W except e_uv are in G_B, the reduced costs of all these links are 0. So we get from (21) that r_uv = gamma*d(W) + c(W). Recalling that gamma = -c(W')/d(W') if there exists a cycle W' in the basic solution, or gamma = 0 if no such cycle exists, we get the equalities on the right side of (28). Since r_uv != 0 and the costs and delays are integers, we get from (28) that |r_uv| >= 1/|d(W')| >= 1/(n d_max), where d_max is the maximum link delay. Also, the inequality below follows from the fact that the potential of a node is the sum of the active costs of the links on the path from that node to node t (see Section VI-B):

$$W(G_B^i) = \sum_{u\in V} (y_t - y_u) \le n^2 (c_{max} + |\gamma|\, d_{max}),$$

where c_max is the maximum link cost. If gamma != 0, then by Lemma 7, |gamma| = |c(W')/d(W')| <= n c_max. Hence W(G_B^i) <= n^2 (c_max + n c_max d_max). So the length of any sequence of degenerate pivots is bounded by a polynomial function of c_max, d_max, and n. Similarly, we can prove that the total number of non-degenerate pivots is also a polynomial function of m, n, c_max, and d_max. The pseudo-polynomial complexity of the revised simplex method when applied on RELAX-TCSDP(k) follows since each pivot takes O(m) steps [22].

C. Leaving Variable

Now, we investigate how to find a leaving variable (out-arc) using Theorem 5. As before, let the cycle created by adding the in-arc be denoted by W, with its orientation defined as that of the in-arc. We note that the reduced cost of the in-arc may be positive or negative. In the following we consider only the latter case. The former case can be treated in a similar way.
Case 1: The slack variable w is in the basic solution (the current G_B is a spanning tree and so w > 0). This corresponds to Theorem 5(a). According to Step 4 of the revised simplex method, we need to consider only the entries of V that are +1 or -1, or d(W) if d(W) != 0. These entries correspond to the links of W in the current G_B or to the slack variable w. The maximum value of t is constrained by l_B <= x_B - tV <= u_B, and the corresponding constraining variables (links or w) are eligible to leave the basis. If certain links are eligible to leave the basis, then we select the one that keeps the new basic solution strongly feasible (to be discussed next). In this case w will continue to be in the basis. If w is eligible to leave the basis, we select it to leave the basis. In that case the new basis graph G_B will have a cycle. The flow values (theta or 1 - theta) on the links of the cycle can be determined from the equation d_x(G_B) = Delta, because the slack variable w is nonbasic and has zero value.

Case 2: The basic solution consists of n links, i.e., there is a cycle W' in G_B. The slack variable w is eligible to enter the basis if gamma < 0. Then, according to pivot rule P1, we let w enter the basis and select one of the links on W' to leave the basis. The choice can be made according to case (c) of Theorem 5. If gamma >= 0, an entering link will create a new cycle W when added to the current G_B. We need to consider different subcases that capture all possibilities (see Fig. 4). For each of these subcases we can select the leaving variable using Theorem 5(b) and Step 4 of the revised simplex method. Now, we need to consider how to preserve the strong feasibility of the basic solutions. We define the join of a cycle in G_B as the node on the cycle that is closest to the node t in terms of hops. Without loss of generality, assume that the current basic solution is strongly feasible and consists of n

Fig. 4. Basic solution structures in Case 2 (subcases 2.1, 2.2, and 2.3, showing the relative positions of the in-arc and the cycles W and W').

links and that the leaving variable is a link f (the other cases are trivial). Let G_e = G_B + e be the network obtained by adding the in-arc e to G_B. Evidently, f is on some cycle C in {W, W'} in G_e. If C = W, the orientation of C is chosen to agree (resp. disagree) with the orientation of e if x_e = 0 (resp. x_e = 1) in the current flow. If C = W', the orientation of C is defined such that d(W)/d(W') < 0 (resp. d(W)/d(W') > 0) if x_e = 0 (resp. x_e = 1), where the orientation of W agrees with the direction of e. Starting from the join of C and traversing along the orientation of C, we choose the first link whose flow is 1 and whose direction agrees with the orientation of C, or whose flow is 0 and whose direction disagrees with the orientation of C. This guarantees the strong feasibility of the resulting tree.

VIII. SIMULATION

We denote our algorithm as DISJOINT-NBS (NBS: Network Based Simplex method) and compare its performance with CPLEX and G-LARAC(k). We use three classes of network topologies: regular graphs H_{k,n} (see Chapter 8 of [21]), power-law out-degree graphs [24], and Waxman's random graphs [25]. For a graph G(V, E), the nodes are labeled as 1, 2, ..., n = |V|. Nodes n/2 and n are chosen as the source and target nodes. The link costs and delays are randomly and independently generated even integers from a fixed range. The delay bound is 1.2k times the delay of the minimum delay s-t paths in G. For regular graphs, k = 4. For random graphs and power-law graphs, k = 2. The results are shown in Fig. 5. In these figures, we use NBS to denote the DISJOINT-NBS algorithm and NBS-REGULAR to denote the running time of the DISJOINT-NBS algorithm on regular graphs. Other labels can be interpreted in a similar manner. Experiments show that the DISJOINT-NBS algorithm is faster than CPLEX and G-LARAC(k) on all the topologies. For the power-law out-degree graphs and Waxman's random graphs, the hop count of feasible s-t paths is usually very small even when the graph is very large. So the running times of DISJOINT-NBS, G-LARAC(k), and CPLEX are close (but DISJOINT-NBS is still faster) for random graphs and power-law out-degree graphs. Our simulation results in Tables I-III show that the delay of each path derived as in Theorem 4 deviates from the individual delay bound by a small fraction. Note that in these tables the second column specifies the delay bound on each path.

IX. SUMMARY

In this paper we studied the CSDP(k) problem, which is NP-hard. So our goal has been to design an efficient algorithm for constructing an approximate solution to this problem. Towards this end, we studied the LP relaxation of the CSDP(k) problem using the revised simplex method of linear programming. This relaxed problem is an upper bounded LP problem. We have discussed several issues relating to an efficient implementation of our approach. We have shown that an approximate solution to the CSDP(k) problem can be extracted from an optimal solution to the relaxed problem. We have derived bounds on the quality of this solution with respect to the optimal solution. Our work can be considered as the study of the CSDP(k) problem from a primal perspective, in contrast to the dual perspective employed in the G-LARAC(k) algorithm, which is based on the algorithms in [10] and [16]. Simulation results demonstrate that our algorithm is slightly faster than both the G-LARAC(k) algorithm and the commercial quality CPLEX package in the case of random graphs and power-law out-degree graphs.
On the other hand, for regular graphs our algorithm is much faster. The GCSDP(k) problem defined in Section II requires that the delay of each individual path satisfy a specified bound, in contrast to the CSDP(k) problem where the constraint is on the total delay of all the k link-disjoint paths. We have shown in Theorem 2 that the LP relaxations of the two problems have the same optimal objective value. Thus, if one is interested in obtaining the optimal objective values of the RELAX-GCSDP(k) and RELAX-CSDP(k) problems, then starting with RELAX-CSDP(k) does not result in any loss of generality. However, the paths produced by the approximate solution derived from the optimal solution to RELAX-CSDP(k) may not satisfy the individual path delay requirements of the GCSDP(k) problem. Fortunately, our simulation results in Tables I-III indicate that in most cases the individual delays of the paths produced starting from RELAX-CSDP(k) do not deviate in a significant way from the individual delay requirements of the GCSDP(k) problem. If one were interested in studying the GCSDP(k) problem, then the issue of finding feasible solutions to this problem will arise. The algorithm in the present paper may be used as a subroutine in a branch and bound scheme to find feasible solutions to the GCSDP(k) problem. One direction of further research is to develop approximation schemes for the CSDP(k) problem along the lines of the approximation algorithms given in [15] for the CSDP(2)
IX. SUMMARY

In this paper we studied the CSDP(k) problem, which is NP-hard. So our goal has been to design an efficient algorithm for constructing an approximate solution to this problem. Towards this end, we studied the LP relaxation of the CSDP(k) problem using the revised simplex method of linear programming. This relaxed problem is an upper bounded LP problem. We have discussed several issues relating to an efficient implementation of our approach. We have shown that an approximate solution to the CSDP(k) problem can be extracted from an optimal solution to the relaxed problem. We have derived bounds on the quality of this solution with respect to the optimal solution. Our work can be considered as the study of the CSDP(k) problem from a primal perspective, in contrast to the dual perspective employed in the G-LARAC(k) algorithm, which is based on the algorithms in [10] and [16]. Simulation results demonstrate that our algorithm is slightly faster than both the G-LARAC(k) algorithm and the commercial-quality CPLEX package in the case of random graphs and power-law out-degree graphs. On the other hand, for regular graphs our algorithm is much faster.

The GCSDP(k) problem defined in Section II requires that the delay of each individual path satisfy a specified bound, in contrast to the CSDP(k) problem where the constraint is on the total delay of all the k link-disjoint paths. We have shown in Theorem 2 that the LP relaxations of the two problems have the same optimal objective value. Thus, if one is interested in obtaining the optimal objective values of the RELAX-GCSDP(k) and RELAX-CSDP(k) problems, then starting with RELAX-CSDP(k) does not result in any loss of generality. However, the paths produced by the approximate solution derived from the optimal solution to RELAX-CSDP(k) may not satisfy the individual path delay requirements of the GCSDP(k) problem. Fortunately, our simulation results in Tables I-III indicate that in most cases the individual delays of the paths produced starting from RELAX-CSDP(k) do not deviate in a significant way from the individual delay requirements of the GCSDP(k) problem. If one were interested in studying the GCSDP(k) problem, then the issue of finding feasible solutions to this problem would arise. The algorithm in the present paper may be used as a subroutine in a branch and bound scheme to find feasible solutions to the GCSDP(k) problem.

One direction of further research is to develop approximation schemes for the CSDP(k) problem along the lines of the approximation algorithms given in [15] for the CSDP(2) problem. Since the link-disjoint shortest paths problem is a special case of the minimum cost flow problem, it will be interesting to investigate if the ideas developed in this paper could be used to design efficient algorithms for the constrained minimum cost flow problem.
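For reference, the kind of arc-flow formulation whose relaxation is discussed above can be written as follows. This is a generic sketch consistent with the problem statement, not a reproduction of the formulation in Section II; the symbols c_{ij}, d_{ij}, x_{ij} and the bound \Delta are notation chosen here for illustration.

\begin{align*}
\mathrm{CSDP}(k):\quad \min \;& \sum_{(i,j)\in E} c_{ij}\,x_{ij}\\
\text{s.t.}\;& \sum_{j:(i,j)\in E} x_{ij} \;-\; \sum_{j:(j,i)\in E} x_{ji}
   \;=\; \begin{cases} k, & i = s,\\ -k, & i = t,\\ 0, & \text{otherwise,}\end{cases}\\
 & \sum_{(i,j)\in E} d_{ij}\,x_{ij} \;\le\; \Delta,\\
 & x_{ij} \in \{0,1\} \qquad \text{for all } (i,j)\in E.
\end{align*}

Replacing the integrality constraints x_{ij} ∈ {0,1} by 0 ≤ x_{ij} ≤ 1 yields an upper bounded linear program of the kind (RELAX-CSDP(k)) to which the revised simplex machinery described in this paper is applied.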
Fig. 5. Comparison of the running times of the DISJOINT-NBS algorithm with CPLEX and with G-LARAC(k). The two plots show running time (s) versus the number of graph nodes, with curves NBS-REGULAR, NBS-RANDOM, NBS-POWER, CPLEX-REGULAR, CPLEX-RANDOM, CPLEX-POWER and LARAC-REGULAR, LARAC-RANDOM, LARAC-POWER. The experiments were carried out on an IBM Regatta p690 with AIX 5.1 OS and a Power4 CPU.

TABLE I. PATHS OBTAINED FROM THE OPTIMAL SOLUTION TO RELAX-TCSDP(2) APPLIED ON RANDOM GRAPHS
Columns: Graph Size (#Nodes), Delay Bound, Path-1 (Cost, Delay), Path-2 (Cost, Delay)
Rows: 87 (24, 56) (536, 82); 2 6 (548, 56) (344, 64); 3 49 (496, 368) (328, 428)

TABLE II. PATHS OBTAINED FROM THE OPTIMAL SOLUTION TO RELAX-TCSDP(2) APPLIED ON POWER-LAW GRAPHS
Columns: Graph Size (#Nodes), Delay Bound, Path-1 (Cost, Delay), Path-2 (Cost, Delay)
Rows: 9 (426, ) (372, 72); 2 34 (352, 82) (9, 72); 3 26 (38, 26) (254, 38)

TABLE III. PATHS OBTAINED FROM THE OPTIMAL SOLUTION TO RELAX-TCSDP(4) APPLIED ON REGULAR GRAPHS
Columns: Graph Size (#Nodes), Delay Bound, Path-1, Path-2, Path-3, Path-4 (each entry is (Cost, Delay))
Rows: 94 5 (228, 686) (226, 686) (254, 764) (872, 782); (392, 42) (468, 424) (44, 454) (498, 46); (692, 244) (626, 24) (5862, 2242) (572, 2)

REFERENCES

[1] M. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, USA: Freeman, 1979.
[2] Z. Wang and J. Crowcroft, Quality-of-service routing for supporting multimedia applications, IEEE Journal on Selected Areas in Communications, vol. 14, no. 7, 1996.
[3] R. Hassin, Approximation schemes for the restricted shortest path problem, Math. of Oper. Res., vol. 17, no. 1, 1992.
[4] C. A. Phillips, The network inhibition problem, in STOC, 1993.
[5] D. Lorenz and D. Raz, A simple efficient approximation scheme for the restricted shortest paths problem, Oper. Res. Letters, vol. 28, 2001.
[6] G. Luo, K. Huang, J. Wang, C. Hobbs, and E. Munter, Multi-QoS constraints based routing for IP and ATM networks, in Proc. IEEE Workshop on QoS Support for Real-Time Internet Applications, 1999.
[7] R. Ravindran, K. Thulasiraman, A. Das, K. Huang, G. Luo, and G. Xue, Quality of services routing: heuristics and approximation schemes with a comparative evaluation, in ISCAS, 2002.
[8] G. Handler and I. Zang, A dual algorithm for the constrained shortest path problem, Networks, vol. 10, 1980.

[9] K. Mehlhorn and M. Ziegelmann, Resource constrained shortest paths, in ESA, 2000.
[10] A. Jüttner, B. Szviatovszki, I. Mécs, and Z. Rajkó, Lagrange relaxation based method for the QoS routing problem, in INFOCOM, 2001.
[11] B. Blokh and G. Gutin, An approximation algorithm for combinatorial optimization problems with two parameters, Australasian Journal of Combinatorics, vol. 14, 1996.
[12] Y. Xiao, K. Thulasiraman, and G. Xue, Equivalence, unification and generality of two approaches to the constrained shortest path problem with extension, in Allerton Conference on Control, Communication and Computing, University of Illinois, 2003.
[13] A. Jüttner, On resource constrained optimization problems, 2003, in review.
[14] Y. Xiao, K. Thulasiraman, and G. Xue, The primal simplex approach to the QoS routing problem, in QSHINE, 2004.
[15] A. Orda and A. Sprintson, Efficient algorithms for computing disjoint QoS paths, in INFOCOM, 2004.
[16] J. W. Suurballe, Disjoint paths in a network, Networks, vol. 4, no. 2, 1974.
[17] C.-L. Li, T. S. McCormick, and D. Simchi-Levi, The complexity of finding two disjoint paths with min-max objective function, Discrete Appl. Math., vol. 26, no. 1, 1990.
[18] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, UK: Cambridge University Press, 2003.
[19] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, Network Flows. NJ, USA: Prentice-Hall, 1993.
[20] D. Bertsimas and J. N. Tsitsiklis, Introduction to Linear Optimization. Belmont, Massachusetts, USA: Athena Scientific, 1997.
[21] K. Thulasiraman and M. N. S. Swamy, Graphs: Theory and Algorithms. New York: Wiley Interscience, 1992.
[22] V. Chvátal, A greedy heuristic for the set covering problem, Mathematics of Operations Research, vol. 4, no. 3, 1979.
[23] D. P. Bertsekas, Network Optimization: Continuous and Discrete Models. Belmont, Massachusetts, USA: Athena Scientific, 1998.
[24] C. R. Palmer and J. G. Steffan, Generating network topologies that obey power laws, in IEEE GLOBECOM, 2000.
[25] B. M. Waxman, Routing of multipoint connections, IEEE Journal on Selected Areas in Commun., vol. 6, no. 9, Dec. 1988.

Krishnaiyan Thulasiraman received the Bachelor's degree (1963) and Master's degree (1965) in electrical engineering from the University of Madras, India, and the Ph.D. degree (1968) in electrical engineering from IIT Madras, India. He holds the Hitachi Chair and is Professor in the School of Computer Science at the University of Oklahoma, Norman, where he has been since 1994. Prior to joining the University of Oklahoma, Thulasiraman was a professor and chair of the ECE Department at Concordia University, Montreal, until 1994. Earlier, he was on the faculty in the EE and CS departments of IIT Madras. Dr. Thulasiraman's research interests have been in graph theory, combinatorial optimization, algorithms and applications in a variety of areas in CS and EE: electrical networks, VLSI physical design, systems level testing, communication protocol testing, parallel/distributed computing, telecommunication network planning, fault tolerance in optical networks, interconnection networks, etc. He has published more than 100 papers in archival journals, coauthored with M. N. S. Swamy two textbooks, Graphs, Networks, and Algorithms (1981) and Graphs: Theory and Algorithms (1992), both published by Wiley Interscience, and authored two chapters in the Handbook of Circuits and Filters (CRC and IEEE, 1995) and a chapter on Graphs and Vector Spaces for the Handbook of Graph Theory and Applications (CRC Press, 2003). Dr.
Thulasiraman has received several awards and honors: the Endowed Gopalakrishnan Chair Professorship in CS at IIT Madras (Summer 2005), elected member of the European Academy of Sciences (2002), the IEEE CAS Society Golden Jubilee Medal (1999), Fellow of the IEEE (1990), and a Senior Research Fellowship of the Japan Society for the Promotion of Science (1988). He has held visiting positions at the Tokyo Institute of Technology, University of Karlsruhe, University of Illinois at Urbana-Champaign, and Chuo University, Tokyo. Dr. Thulasiraman has been Vice President (Administration) of the IEEE CAS Society (1998, 1999), Technical Program Chair of ISCAS (1993, 1999), Deputy Editor-in-Chief of the IEEE Transactions on Circuits and Systems I (2004-2005), Co-Guest Editor of a special issue on Computational Graph Theory: Algorithms and Applications (IEEE Transactions on CAS, March 1988), Associate Editor of the IEEE Transactions on CAS (1989-91, 1999-2001), and Founding Regional Editor of the Journal of Circuits, Systems, and Computers. Recently, he founded the Technical Committee on Graph Theory and Computing of the IEEE CAS Society.

Ying Xiao received the B.S. degree in computer science from the Nanjing University of Information Science and Technology (the former Nanjing Institute of Meteorology), Nanjing, China, in 1998 and the M.S. degree in computer application from the Southwest Jiaotong University, Chengdu, China. He is currently working towards the Ph.D. degree in computer science at the University of Oklahoma. His research interests include graph theory, combinatorial optimization, distributed systems and networks, statistical inference, and machine learning.

Guoliang Xue is a Professor in the Department of Computer Science and Engineering at Arizona State University. He received the Ph.D. degree in Computer Science from the University of Minnesota in 1991 and has held previous positions at the Army High Performance Computing Research Center and the University of Vermont. His research interests include efficient algorithms for optimization problems in networking, with applications to fault tolerance, robustness, and privacy issues in networks ranging from WDM optical networks to wireless ad hoc and sensor networks. He has published over 100 papers in this area. His research has been continuously supported by federal agencies including NSF, ARO and DOE. He is the recipient of an NSF Research Initiation Award in 1994 and an NSF-ITR Award in 2003. He is an Associate Editor of the IEEE Transactions on Circuits and Systems-I, an Associate Editor of the Computer Networks Journal, and an Associate Editor of the IEEE Network Magazine. He has served on the executive/program committees of many IEEE conferences, including Infocom, Secon, Icc, Globecom and QShine. He is a TPC co-chair of the IEEE Globecom 2006 Symposium on Wireless Ad Hoc and Sensor Networks.
