Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm


Kamiel Cornelissen¹, Bodo Manthey¹
¹ University of Twente, Department of Applied Mathematics
k.cornelissen@utwente.nl, b.manthey@utwente.nl

arXiv: v1 [cs.DS] 3 Apr 2015

The minimum-cost flow (MCF) problem is a fundamental optimization problem with many applications and seems to be well understood. Over the last half century many algorithms have been developed to solve the MCF problem and these algorithms have varying worst-case bounds on their running time. However, these worst-case bounds are not always a good indication of the algorithms' performance in practice. The Network Simplex (NS) algorithm needs an exponential number of iterations for some instances, but it is considered the best algorithm in practice and performs best in experimental studies. On the other hand, the Minimum-Mean Cycle Canceling (MMCC) algorithm is strongly polynomial, but performs badly in experimental studies. To explain these differences in performance in practice we apply the framework of smoothed analysis. We show an upper bound of O(mn² log(n) log(φ)) for the number of iterations of the MMCC algorithm. Here n is the number of nodes, m is the number of edges, and φ is a parameter limiting the degree to which the edge costs are perturbed. We also show a lower bound of Ω(m log(φ)) for the number of iterations of the MMCC algorithm, which can be strengthened to Ω(mn) when φ = Θ(n²). For the number of iterations of the NS algorithm we show a smoothed lower bound of Ω(m · min{n, φ} · φ).

1 Introduction

The minimum-cost flow (MCF) problem is a well-studied problem with many applications, for example, modeling transportation and communication networks [1, 7]. Over the last half century many algorithms have been developed to solve it. The first algorithms proposed in the 1960s were all pseudo-polynomial.
These include the Out-of-Kilter algorithm by Minty [17] and by Fulkerson [8], the Cycle Canceling algorithm by Klein [13], the Network Simplex (NS) algorithm by Dantzig [5], and the Successive Shortest Path (SSP) algorithm by Jewell [11], Iri [10], and Busacker and Gowen [4]. In 1972 Edmonds and Karp [6] proposed the Capacity Scaling algorithm, which was the first polynomial MCF algorithm. In the 1980s the first strongly polynomial algorithms were developed by Tardos [24] and by Orlin [18]. Later, several more strongly polynomial algorithms were proposed, such as the Minimum-Mean Cycle

Canceling (MMCC) algorithm by Goldberg and Tarjan [9] and the Enhanced Capacity Scaling algorithm by Orlin [19], which currently has the best worst-case running time. For a more complete overview of the history of MCF algorithms we refer to Ahuja et al. [1]. When we compare the performance of several MCF algorithms in theory and in practice, we see that the algorithms that have good worst-case bounds on their running time are not always the ones that perform best in practice. Zadeh [25] showed that there exist instances for which the Network Simplex (NS) algorithm has exponential running time, while the Minimum-Mean Cycle Canceling (MMCC) algorithm runs in strongly polynomial time, as shown by Goldberg and Tarjan [9]. In practice, however, the relative performance of these algorithms is completely different. Kovács [15] showed in an experimental study that the NS algorithm is much faster than the MMCC algorithm on practical instances. In fact, the NS algorithm is even the fastest MCF algorithm of all. An explanation for the fact that the NS algorithm performs much better in practice than indicated by its worst-case running time is that the instances for which it needs exponential time are very contrived and unlikely to occur in practice. To better understand the differences between worst-case and practical performance for the NS algorithm and the MMCC algorithm, we analyze these algorithms in the framework of smoothed analysis. Smoothed analysis was introduced by Spielman and Teng [22] to explain why the simplex algorithm usually needs only a polynomial number of iterations in practice, while in the worst case it needs an exponential number of iterations. In the framework of smoothed analysis, an adversary can specify any instance and this instance is then slightly perturbed before it is used as input for the algorithm. This perturbation can model, for example, measurement errors or numerical imprecision. In addition, it can model noise on the input that cannot be quantified exactly, but for which there is no reason to assume that it is adversarial.
Algorithms that have a good smoothed running time often perform well in practice. We refer to two surveys [16, 23] for a summary of results that have been obtained using smoothed analysis. We consider a slightly more general model of smoothed analysis, introduced by Beier and Vöcking [2]. In this model the adversary can not only specify the mean of the noisy parameter, but also the type of noise. We use the following smoothed input model for the MCF problem. An adversary can specify the structure of the flow network, including all nodes and edges, and also the exact edge capacities and budgets of the nodes. However, the adversary cannot specify the edge costs exactly. For each edge e the adversary can specify a probability density g_e : [0, 1] → [0, φ] according to which the cost of e is drawn at random. The parameter φ determines the maximum density of the density function and can therefore be interpreted as the power of the adversary. If φ is large, the adversary can very accurately specify each edge cost and we approach worst-case analysis. If φ = 1, the adversary has no choice but to specify the uniform density on the interval [0, 1] and we have average-case analysis. Brunsch et al. [3] were the first to show smoothed bounds on the running time of an MCF algorithm. They showed that the SSP algorithm needs O(mnφ) iterations in expectation and has smoothed running time O(mnφ(m + n log(n))), since each iteration consists of finding a shortest path. They also provide a lower bound of Ω(m · min{n, φ} · φ) for the number of iterations that the SSP algorithm needs, which is tight for φ = Ω(n). These bounds show that the SSP algorithm needs only a polynomial number of iterations in the smoothed setting, in contrast to the exponential number it needs in the worst case, and they explain why the SSP algorithm performs quite well in practice. In order to fairly compare the SSP algorithm with other MCF algorithms in the smoothed setting, we need smoothed bounds on the running times of these other algorithms. Brunsch et al.
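The smoothed input model above can be made concrete with a small sketch (a minimal illustration with hypothetical helper names, not code from the paper; uniform densities on subintervals are just one admissible choice for the adversary):

```python
import random

def draw_smoothed_costs(intervals, phi, rng=None):
    """Draw one cost per edge, uniformly from that edge's interval.

    Each interval must lie in [0, 1] and have width at least 1/phi,
    so the induced density is bounded by phi, the adversary's power.
    phi = 1 forces the uniform density on all of [0, 1] (average case);
    large phi lets costs be pinned down tightly (towards worst case).
    """
    rng = rng or random.Random(0)
    costs = {}
    for edge, (lo, hi) in intervals.items():
        assert 0.0 <= lo <= hi <= 1.0, "costs must lie in [0, 1]"
        assert hi - lo >= 1.0 / phi, "density would exceed phi"
        costs[edge] = rng.uniform(lo, hi)
    return costs

# An adversary with power phi = 4 pins the cost of edge (u, v)
# into a subinterval of width 1/4:
costs = draw_smoothed_costs({("u", "v"): (0.25, 0.5)}, phi=4)
```

This is exactly the interval-based special case used for the lower-bound constructions later in the paper, where every edge cost is drawn uniformly from an interval of width 1/φ.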
[3] asked particularly for smoothed running time bounds for the MMCC algorithm, since the MMCC algorithm has a much better worst-case running

time than the SSP algorithm, but performs worse in practice. It is also interesting to have smoothed bounds for the NS algorithm, since the NS algorithm is the fastest MCF algorithm in practice. However, until now no smoothed bounds were known for other MCF algorithms. In this paper we provide smoothed lower and upper bounds for the MMCC algorithm, and a smoothed lower bound for the NS algorithm. For the MMCC algorithm we provide an upper bound (Section 2) of O(mn² log(n) log(φ)) for the expected number of iterations that the MMCC algorithm needs. For dense graphs, this is an improvement over the Θ(m²n) iterations that the MMCC algorithm needs in the worst case, if we consider φ a constant (which is reasonable if it models, for example, numerical imprecision or measurement errors). We also provide a lower bound (Section 3.1) on the number of iterations that the MMCC algorithm needs. For every n, every m ∈ {n, n+1, ..., n²}, and every φ ≥ 2, we provide an instance with Θ(n) nodes and Θ(m) edges for which the MMCC algorithm requires Ω(m log(φ)) iterations. For φ = Ω(n²) we can improve our lower bound (Section 3.2). We show that for every n ≥ 4 and every m ∈ {n, n+1, ..., n²}, there exists an instance with Θ(n) nodes and Θ(m) edges, and φ = Θ(n²), for which the MMCC algorithm requires Ω(mn) iterations. This is indeed a stronger lower bound than the bound for general φ, since we have m log(φ) = Θ(m log(n)) for φ = Θ(n²). For the NS algorithm we provide a lower bound (Section 4) on the number of non-degenerate iterations that it requires. In particular, we show that for every n, every m ∈ {n, ..., n²}, and every φ ≥ 2 there exists a flow network with Θ(n) nodes and Θ(m) edges, and an initial spanning tree structure, for which the NS algorithm needs Ω(m · min{n, φ} · φ) non-degenerate iterations with probability 1. The existence of a smoothed upper bound for the NS algorithm is our main open problem. Note that our bound is the same as the lower bound that Brunsch et al. [3] found for the smoothed number of iterations of the SSP algorithm. This is no coincidence, since we use essentially the same instance (with some minor changes) to show our lower bound.
We show that with the proper choice of the initial spanning tree structure for the NS algorithm, we can ensure that the NS algorithm performs the same flow augmentations as the SSP algorithm and therefore needs the same number of iterations (plus some degenerate ones). In the rest of our introduction we introduce the MCF problem, the MMCC algorithm, and the NS algorithm in more detail. In the rest of our paper, all logarithms are base 2.

1.1 Minimum-Cost Flow Problem

A flow network is a simple directed graph G = (V, E) together with a nonnegative capacity function u : E → R₊ defined on the edges. For convenience, we assume that G is connected and that E does not contain a pair (u, v) and (v, u) of reverse edges. For the MCF problem, we also have a cost function c : E → [0, 1] on the edges and a budget function b : V → R on the nodes. Nodes with negative budget require a resource, while nodes with positive budget offer it. A flow f : E → R₊ is a nonnegative function on the edges that satisfies the capacity constraints, 0 ≤ f(e) ≤ u(e) (for all e ∈ E), and the flow conservation constraints b(v) + Σ_{e=(u,v)∈E} f(e) = Σ_{e'=(v,w)∈E} f(e') (for all v ∈ V). The cost c(f) of a flow f is defined as the sum of the flow on each edge times the cost of that edge, that is, c(f) = Σ_{e∈E} c(e) · f(e). The objective of the minimum-cost flow problem is to find a flow of minimum cost or to conclude that no feasible flow exists. In our analysis we often use the concept of a residual network, which we define here. For an edge e = (u, v) we denote the reverse edge (v, u) by e⁻¹. For flow network G and flow
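The two defining conditions of a flow can be checked mechanically. The following sketch (a hypothetical helper, not from the paper) verifies the capacity and conservation constraints and evaluates c(f):

```python
import math

def check_flow(edges, u, c, b, f):
    """Verify 0 <= f(e) <= u(e) for every edge and the conservation
    constraint b(v) + inflow(v) == outflow(v) for every node.
    Returns (is_feasible, cost_of_f)."""
    for e in edges:
        if not (0.0 <= f[e] <= u[e]):
            return False, None
    nodes = {x for e in edges for x in e} | set(b)
    for v in nodes:
        inflow = sum(f[e] for e in edges if e[1] == v)
        outflow = sum(f[e] for e in edges if e[0] == v)
        if not math.isclose(b.get(v, 0.0) + inflow, outflow):
            return False, None
    return True, sum(c[e] * f[e] for e in edges)

# One unit shipped from s (budget +1) to t (budget -1) at cost 0.5:
edges = [("s", "t")]
ok, cost = check_flow(edges, {("s", "t"): 2.0}, {("s", "t"): 0.5},
                      {"s": 1.0, "t": -1.0}, {("s", "t"): 1.0})
```

Note the sign convention: a node with positive budget is a source, so its budget plus inflow must equal its outflow.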

f, the residual network G_f is defined as the graph G_f = (V, E_f ∪ E_b). Here E_f = { e | e ∈ E and f(e) < u(e) } is the set of forward edges, with capacity u'(e) = u(e) − f(e) and cost c'(e) = c(e). E_b = { e | e⁻¹ ∈ E and f(e⁻¹) > 0 } is the set of backward edges, with capacity u'(e) = f(e⁻¹) and cost c'(e) = −c(e⁻¹). Here u'(e) is also called the residual capacity of edge e for flow f.

1.2 Minimum-Mean Cycle Canceling Algorithm

The MMCC algorithm works as follows: First we find a feasible flow using any maximum-flow algorithm. Next, as long as the residual network contains cycles of negative total cost, we find a cycle of minimum-mean cost and maximally augment flow along this cycle. We stop when the residual network does not contain any cycles of negative total cost. For a more elaborate description of the MMCC algorithm, we refer to Korte and Vygen [14]. In the following, we denote the mean cost of a cycle C by µ(C) = (Σ_{e∈C} c(e)) / |C|. Also, for any flow f, we denote by µ(f) the mean cost of the cycle of minimum-mean cost in the residual network G_f. Goldberg and Tarjan [9] proved in 1989 that the Minimum-Mean Cycle Canceling algorithm runs in strongly polynomial time. Five years later Radzik and Goldberg [21] slightly improved this bound on the running time and showed that it is tight. In the following we will focus on the number of iterations the MMCC algorithm needs, that is, the number of cycles that have to be canceled. A bound on the number of iterations can easily be extended to a bound on the running time, by noting that a minimum-mean cycle can be found in O(nm) time, as shown by Karp [12]. The tight bound on the number of iterations that the MMCC algorithm needs is as follows.

Theorem 1.1 (Radzik and Goldberg). The number of iterations needed by the MMCC algorithm is bounded by O(m²n) and this bound is tight.
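The inner step of each MMCC iteration, finding the value of the minimum-mean cycle, can be sketched with Karp's dynamic program (a minimal illustration, not the paper's code; it returns only the value µ(f), not the cycle itself):

```python
import math

def min_mean_cycle_value(n, edges):
    """Karp's O(nm) dynamic program for the minimum cycle mean.

    n     -- number of nodes, labeled 0..n-1
    edges -- list of (u, v, cost) triples
    Returns the minimum mean cost over all directed cycles,
    or None if the graph is acyclic.  Setting d[0][v] = 0 for
    every v plays the role of a virtual super-source.
    """
    INF = math.inf
    # d[k][v] = minimum cost of a walk with exactly k edges ending at v
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for (u, v, c) in edges:
            if d[k - 1][u] + c < d[k][v]:
                d[k][v] = d[k - 1][u] + c
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        # the worst (largest) ratio over k certifies the cycle mean
        ratio = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, ratio)
    return None if best == INF else best
```

For the triangle 0 → 1 → 2 → 0 with costs 1, −2, −2 this yields the mean (1 − 2 − 2)/3 = −1, and an augmentation along that cycle is exactly one MMCC iteration.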
To prove our smoothed bounds in the next sections, we use another result by Korte and Vygen [14, Corollary 9.9], which states that the absolute value of the mean cost of the cycle that is canceled by the MMCC algorithm, |µ(f)|, decreases by at least a factor 1/2 every mn iterations.

Theorem 1.2 (Korte and Vygen). Every mn iterations of the MMCC algorithm, |µ(f)| decreases by at least a factor 1/2.

1.3 Network Simplex Algorithm

The Network Simplex (NS) algorithm starts with an initial spanning tree structure (T, L, U) and associated flow f, where each edge in E is assigned to exactly one of T, L, and U, and it holds that

f(e) = 0 for all edges e ∈ L,

f(e) = u(e) for all edges e ∈ U,

0 ≤ f(e) ≤ u(e) for all edges e ∈ T, and

the edges of T form a spanning tree of G (if we consider the undirected version of both the edges of T and the graph G).

If the MCF problem has a feasible solution, such a structure can always be found by first finding any feasible flow and then augmenting flow along cycles consisting of only edges that have a positive amount of flow less than their capacity, until no such cycles remain. Note that the structure (T, L, U) uniquely determines the flow f, since the edges in T form a tree. In addition to the spanning tree structure, the NS algorithm also keeps track of a set of node potentials π(v) for all nodes v ∈ V. The node potentials are defined such that the potential of a specified root node is 0 and such that the reduced cost c_π(u, v) = c(u, v) − π(u) + π(v) of an edge (u, v) equals 0 for all edges (u, v) ∈ T. In each iteration, the NS algorithm tries to improve the current flow by adding an edge to T that violates its optimality condition. An edge in L violates its optimality condition if it has strictly negative reduced cost, while an edge in U violates its optimality condition if it has strictly positive reduced cost. One of the edges e that violates its optimality condition is added to T, which creates a unique cycle C in T ∪ {e}. Flow is maximally augmented along C, until the flow on one of the edges e' ∈ C becomes 0 or reaches its capacity. The edge e' leaves T, after which T is again a spanning tree of G. Next we update the sets T, L, and U, the flow, and the node potentials. This completes the iteration. If any edges violating their optimality condition remain, another iteration is performed. One iteration of the NS algorithm is also called a pivot. The edge e that is added to T is called the entering edge and the edge e' that leaves T is called the leaving edge. Note that in some cases the entering edge can be the same edge as the leaving edge. Also, if one of the edges in the cycle C already contains flow equal to its capacity, the flow is not changed in that iteration, but the spanning tree T still changes.
Such an iteration we call degenerate. Note that in each iteration, there can be multiple edges violating their optimality condition. There are multiple possible pivot rules that determine which edge enters T in this case. In our analysis we use the (widely used in practice) pivot rule that selects as the entering edge, from all edges violating their optimality condition, the edge for which the absolute value of its reduced cost |c_π(e)| is maximum. In case multiple edges in C are candidates to be the leaving edge, we choose the one that is most convenient for our analysis. If a strongly feasible spanning tree structure [1] is used, it can be shown that the number of iterations that the NS algorithm needs is finite. However, Zadeh [25] showed that there exist instances for which the NS algorithm (with the pivot rule stated above) needs an exponential number of iterations. Orlin [20] developed a strongly polynomial version of the NS algorithm, which uses cost-scaling. However, this algorithm is rarely used in practice and we will not consider it in the rest of our paper. For a more elaborate discussion of the NS algorithm we refer to Ahuja et al. [1].

2 Upper Bound for the MMCC Algorithm

In this section we show an upper bound of O(mn² log(n) log(φ)) for the expected number of iterations that the MMCC algorithm needs, starting from the initial residual network G_f̄ for the feasible starting flow f̄ on flow network G = (V, E). Note that we assumed in Section 1.1 that G is simple and that E does not contain a pair (u, v) and (v, u) of reverse edges. This
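The bookkeeping described above can be sketched as follows (hypothetical helper names, not the paper's implementation; a real NS code updates the potentials incrementally instead of recomputing them from scratch):

```python
def node_potentials(n, tree_edges, cost, root=0):
    """Potentials pi with pi[root] = 0 and zero reduced cost
    c_pi(u, v) = c(u, v) - pi[u] + pi[v] on every tree edge."""
    adj = {v: [] for v in range(n)}
    for (u, v) in tree_edges:
        adj[u].append((v, cost[(u, v)], +1))  # walk along (u, v)
        adj[v].append((u, cost[(u, v)], -1))  # walk against (u, v)
    pi = {root: 0.0}
    stack = [root]
    while stack:
        u = stack.pop()
        for (v, c, sign) in adj[u]:
            if v not in pi:
                pi[v] = pi[u] - sign * c  # forces c_pi = 0 on tree edges
                stack.append(v)
    return pi

def entering_edge(L, U, cost, pi):
    """The pivot rule used in the paper's analysis: among all violating
    edges (negative reduced cost in L, positive in U), pick the one
    whose reduced cost is largest in absolute value; None at optimality."""
    red = lambda e: cost[e] - pi[e[0]] + pi[e[1]]
    cands = [e for e in L if red(e) < 0] + [e for e in U if red(e) > 0]
    return max(cands, key=lambda e: abs(red(e)), default=None)
```

For a tree with edges (0, 1) of cost 2 and (1, 2) of cost 3, the potentials are π = (0, −2, −5); a non-tree edge (0, 2) of cost 1 in L then has reduced cost 1 − 0 + (−5) = −4 and would enter the tree.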

implies that for each pair of nodes u, v ∈ V, there is always at most one edge from u to v and at most one edge from v to u in any residual network G_f. We first show that the number of cycles that appears in at least one residual network G_f for a feasible flow f on G is bounded by (n + 1)!, where n = |V|.

Lemma 2.1. The total number of cycles that appears in any residual network G_f for a feasible flow f on G is bounded by (n + 1)!.

Proof. First we show that the number of directed cycles of length k (2 ≤ k ≤ n) is bounded by n!. We identify each cycle C of length k with a path P of length k, starting and ending at the same arbitrarily chosen node of C and then following the edges of C in the direction of their orientation. Every such path P can be identified with a unique cycle. The number X of possible paths of length k is bounded by

X ≤ n(n − 1) · · · (n − k + 1) ≤ n!. (1)

The first inequality of Equation (1) follows since there are at most n possible choices for the first node of the path, at most n − 1 choices for the second node, etc. The lemma follows by observing that the number of possible lengths of cycles in residual networks G_f is bounded by n + 1, so that in total there are at most (n + 1) · n! = (n + 1)! cycles.

We next show that the probability that any particular cycle has negative mean cost close to 0 can be bounded. In the rest of this section, ε > 0.

Lemma 2.2. The probability that an arbitrary cycle C has mean cost µ(C) ∈ [−ε, 0) can be bounded by nεφ.

Proof. We can only have µ(C) ∈ [−ε, 0) if c(C) ∈ [−nε, 0), since C consists of at most n edges. We now draw the costs for all edges in C except for the cost of one edge e. The cost of cycle C depends linearly on the cost of edge e, with coefficient 1 if e is a forward edge in C and coefficient −1 if e is a reverse edge in C. Therefore, the width of the interval from which the cost of e must be drawn such that c(C) ∈ [−nε, 0) is nε. Since the density function according to which the cost of e is drawn has maximum density φ, the probability that the cost of e is drawn from this interval is at most nεφ.

Corollary 2.3. The probability that there exists a cycle C with µ(C) ∈ [−ε, 0) is at most (n + 1)! nεφ.

Proof.
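The single-edge argument in the proof of Lemma 2.2 can be illustrated empirically (a hypothetical check, not from the paper): with a density bounded by φ, a cost lands in any fixed interval of width w with probability at most wφ. The sketch uses the most concentrated admissible density, uniform on [0, 1/φ]:

```python
import random

def hit_frequency(phi, width, trials=100_000, seed=1):
    """Draw costs uniformly from [0, 1/phi] (density exactly phi) and
    count how often they fall into the target interval [0, width).
    Lemma-2.2-style bound: the true probability is at most width * phi."""
    rng = random.Random(seed)
    hits = sum(rng.uniform(0.0, 1.0 / phi) < width for _ in range(trials))
    return hits / trials

freq = hit_frequency(phi=10.0, width=0.05)  # true probability is 0.5
```

Here the bound width · φ = 0.5 is attained exactly, which is why the maximum density φ is the right measure of the adversary's power.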
The corollary follows directly from Lemma 2.1 and Lemma 2.2, by taking a union bound over all at most (n + 1)! cycles.

Lemma 2.4. If none of the residual networks G_f for feasible flows f on G contains a cycle C with µ(C) ∈ [−ε, 0), then the MMCC algorithm needs at most nm log₂(1/ε) iterations.

Proof. Assume to the contrary that none of the residual networks G_f for feasible flows f on G contains a cycle C with µ(C) ∈ [−ε, 0), but that the MMCC algorithm needs more than nm log₂(1/ε) iterations. Let f̄ denote the starting flow found using any maximum-flow algorithm. Since all edge costs are drawn from the interval [0, 1], we have that µ(f̄) ≥ −1. According to Theorem 1.2, after nm log₂(1/ε) iterations it holds for the current flow f̃ that |µ(f̃)| ≤ ε. Now either µ(f̃) ≥ 0, contradicting the assumption that the MMCC algorithm needs more than nm log₂(1/ε) iterations, or µ(f̃) ∈ [−ε, 0), contradicting the assumption that none of the residual networks G_f for feasible flows f on G contains a cycle C with µ(C) ∈ [−ε, 0).

Theorem 2.5. The expected number of iterations that the MMCC algorithm needs is at most O(mn² log(n) log(φ)).

Proof. Let T be the number of iterations that the MMCC algorithm needs. We have

E(T) = Σ_{t=1}^{∞} P(T ≥ t) ≤ Σ_{t=1}^{∞} P(some G_f contains a cycle C with µ(C) ∈ [−2^{−(t−1)/nm}, 0)) (2)
≤ mn² log(n) log(φ) + Σ_{t=mn² log(n) log(φ)+1}^{∞} (n + 1)! nφ 2^{−(t−1)/nm} (3)
≤ mn² log(n) log(φ) + Σ_{t=0}^{∞} 2^{−t/nm} (4)
= mn² log(n) log(φ) + O(nm)
= O(mn² log(n) log(φ)).

Here Equation (2) follows from Lemma 2.4. Equation (3) follows by bounding the probability for the first mn² log(n) log(φ) terms of the summation by 1 and by bounding the probability for the remaining terms using Corollary 2.3. Finally, Equation (4) follows from the inequality n log(n) log(φ) ≥ log((n + 2)! φ), which holds for n ≥ 6.

3 Lower Bound for the MMCC Algorithm

3.1 General Lower Bound

In this section we describe a construction that, for every n, every m ∈ {n, n+1, ..., n²}, and every φ ≥ 2, provides an instance with Θ(n) nodes and Θ(m) edges for which the MMCC algorithm requires Ω(m log(φ)) iterations. For simplicity we describe the initial residual network G, which occurs after a flow satisfying all the budgets has been found, but before the first minimum-mean cycle has been canceled. For completeness, we will explain at the end of the description of G how to choose the initial network, budgets, and starting flow such that G is the first residual network. We now describe how to construct G given n, m, and φ. In the following, we assume φ ≥ 64. If φ is smaller than 64, the lower bound on the number of iterations reduces to Ω(m), and a trivial instance with Θ(n) nodes and Θ(m) edges will require Ω(m) iterations. We define k_w = ⌊(log(φ) − 4)/2⌋ and k_x = ⌊(log(φ) − 5)/2⌋. Note that this implies that k_x = k_w or k_x = k_w − 1. For the edge costs we define intervals from which the edge costs are drawn uniformly at random. We define G = (V, E) as follows (see Figure 1). V = {a, b, c, d} ∪ U ∪ V ∪ W ∪ X, where U = {u_1, ..., u_n}, V = {v_1, ..., v_n}, W = {w_1, ..., w_{k_w}}, and X = {x_1, ..., x_{k_x}}.

E = E_uv ∪ E_a ∪ E_b ∪ E_c ∪ E_d ∪ E_w ∪ E_x. E_uv is an arbitrary subset of U × V of cardinality m. Each edge (u_i, v_j) has capacity 1 and cost interval [0, 1/φ]. E_a contains the edges (a, u_i), E_b contains the edges (u_i, b), E_c contains the edges (c, v_i), and E_d contains the edges (v_i, d) (i = 1, ..., n). All these edges have infinite capacity and cost interval [0, 1/φ]. E_w contains the edges (d, w_i) and (w_i, a) (i = 1, ..., k_w). An edge (d, w_i) has capacity m and cost interval [0, 1/φ]. An edge (w_i, a) has capacity m and cost interval [−2^{2−2i}, −2^{2−2i} + 1/φ]. E_x contains the edges (b, x_i) and (x_i, c) (i = 1, ..., k_x). An edge (b, x_i) has capacity m and cost interval [0, 1/φ]. An edge (x_i, c) has capacity m and cost interval [−2^{1−2i}, −2^{1−2i} + 1/φ]. Note that all cost intervals have width 1/φ and therefore correspond to valid probability densities for the edge costs, since the costs are drawn uniformly at random from these intervals. The edges of the types (w_i, a) and (x_i, c) have cost intervals that correspond to negative edge costs. The residual network with these negative edge costs can be obtained by having the following original instance (before computing a flow satisfying the budget requirements): All nodes, edges, costs, and capacities are the same as in G, except that instead of the edges of type (w_i, a) we have edges (a, w_i) with capacity m and cost interval [2^{2−2i} − 1/φ, 2^{2−2i}], and instead of the edges of type (x_i, c) we have edges (c, x_i) with capacity m and cost interval [2^{1−2i} − 1/φ, 2^{1−2i}]. In addition, node a has budget k_w · m, node c has budget k_x · m, the nodes of the types w_i and x_i have budget −m, and all other nodes have budget 0. If we now choose as the initial feasible flow the flow that sends m units from a to each node of type w_i and from c to each node of type x_i, then we obtain the initial residual network G. We now show that the MMCC algorithm needs Ω(m log(φ)) iterations for the initial residual network G. First we make some basic observations.
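The construction above can be summarized in code (a sketch under a hypothetical edge encoding (tail, head, capacity, lo, hi), where the cost is later drawn uniformly from [lo, hi]; it assumes φ ≥ 64 and n ≤ m ≤ n²):

```python
import math

def build_G(n, m, phi):
    """Edge list of the initial residual network G from Section 3.1."""
    kw = int((math.log2(phi) - 4) // 2)
    kx = int((math.log2(phi) - 5) // 2)
    w = 1.0 / phi                      # width of every cost interval
    E = []
    # E_uv: an arbitrary m-subset of U x V, capacity 1, cheap costs
    pairs = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
    for (i, j) in pairs[:m]:
        E.append((f"u{i}", f"v{j}", 1, 0.0, w))
    # E_a, E_b, E_c, E_d: infinite capacity, cheap costs
    for i in range(1, n + 1):
        for (t, h) in (("a", f"u{i}"), (f"u{i}", "b"),
                       ("c", f"v{i}"), (f"v{i}", "d")):
            E.append((t, h, math.inf, 0.0, w))
    for i in range(1, kw + 1):         # d -> w_i -> a, geometrically cheaper
        E.append(("d", f"w{i}", m, 0.0, w))
        lo = -2.0 ** (2 - 2 * i)
        E.append((f"w{i}", "a", m, lo, lo + w))
    for i in range(1, kx + 1):         # b -> x_i -> c, interleaved costs
        E.append(("b", f"x{i}", m, 0.0, w))
        lo = -2.0 ** (1 - 2 * i)
        E.append((f"x{i}", "c", m, lo, lo + w))
    return E
```

For example, n = 4, m = 9, φ = 1024 gives k_w = 3, k_x = 2 and 9 + 16 + 6 + 4 = 35 edges, all with cost intervals of width exactly 1/φ lying inside [−1, 1].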
The minimum-mean cycle C never contains the path P_j = (d, w_j, a) if the path P_i = (d, w_i, a) has positive residual capacity for some i < j, since the mean cost of C can be improved by substituting P_j by P_i in C. Analogously, C never contains the path P_j = (b, x_j, c) if the path P_i = (b, x_i, c) has positive residual capacity for some i < j. Also, since all cycles considered have mean cost strictly less than −1/φ, cycles will never include more edges with cost at least −1/φ than necessary. In addition, since the edges of type (w_i, a) and (x_i, c) are saturated in the order cheapest to most expensive, none of these edges will ever be included in reverse direction in the minimum-mean cycle. The above observations lead to three candidate types for the minimum-mean cycle: cycles of type (d, w_i, a, u, v, d), of type (b, x_i, c, v, u, b), and of type (d, w_i, a, u, b, x_j, c, v, d). Here u and v are arbitrary nodes in U and V, respectively. In the following series of lemmas we compare the mean costs of these cycle types. Here u and v are again arbitrary nodes in U and V, possibly different for the cycles that are compared. In our computations we always assume worst-case realization of the edge costs, that is, if we want to show that a cycle C_1 has lower mean cost than a cycle C_2, we assume that all edges in C_1 take the highest cost in their cost interval, while all edges in C_2 take the lowest cost in their cost interval (an edge that appears in both C_1 and C_2 can even take its highest cost in C_1 and its lowest cost in C_2 in the analysis).

Lemma 3.1. The cycle C_1 = (d, w_i, a, u, v, d) has lower mean cost than the cycle C_2 = (b, x_i, c, v, u, b).

Proof. Since the cycles have equal length, we can compare their total costs instead of their mean costs. We have

w(C_1) − w(C_2) ≤ (−2^{2−2i} + 5/φ) − (−2^{1−2i} − 1/φ) = −2^{1−2i} + 6/φ ≤ −64/φ + 6/φ < 0.

Here the second inequality holds since i ≤ k_x ≤ (log(φ) − 5)/2.

Lemma 3.2. The cycle C_1 = (b, x_i, c, v, u, b) has lower mean cost than the cycle C_2 = (d, w_{i+1}, a, u, v, d).

Proof. Since the cycles have equal length, we can compare their total costs instead of their mean costs. We have

w(C_1) − w(C_2) ≤ (−2^{1−2i} + 4/φ) − (−2^{2−2(i+1)}) = −2^{−2i} + 4/φ ≤ −64/φ + 4/φ < 0.

Here the second inequality holds since i + 1 ≤ k_w ≤ (log(φ) − 4)/2.

Lemma 3.3. The cycle C_1 = (d, w_i, a, u, v, d) has lower mean cost than the cycle C_2 = (d, w_i, a, u, b, x_i, c, v, d).

Proof. We have

w(C_1)/|C_1| − w(C_2)/|C_2| ≤ (−2^{2−2i} + 5/φ)/5 − (−2^{2−2i} − 2^{1−2i})/8 = −2^{2−2i}/80 + 1/φ ≤ −128/(80φ) + 1/φ < 0.

Here the second inequality holds since i ≤ k_x ≤ (log(φ) − 5)/2.

Lemma 3.4. The cycle C_1 = (b, x_i, c, v, u, b) has lower mean cost than the cycle C_2 = (b, x_i, c, v, d, w_{i+1}, a, u, b).

Proof. We have

w(C_1)/|C_1| − w(C_2)/|C_2| ≤ (−2^{1−2i} + 4/φ)/5 − (−2^{1−2i} − 2^{2−2(i+1)})/8 = −2^{1−2i}/80 + 4/(5φ) ≤ −128/(80φ) + 64/(80φ) < 0.

Here the second inequality holds since i + 1 ≤ k_w ≤ (log(φ) − 4)/2.

The above observations and lemmas allow us to determine the number of iterations that the MMCC algorithm needs for residual network G.

Theorem 3.5. The MMCC algorithm needs m(k_w + k_x) iterations for residual network G, independent of the realization of the edge costs.

Proof. For the first iteration only cycles of the types (d, w_i, a, u, v, d) and (d, w_i, a, u, b, x_i, c, v, d) are available. According to Lemma 3.3, all cycles of type (d, w_i, a, u, v, d) have lower mean cost than all cycles of type (d, w_i, a, u, b, x_i, c, v, d), and therefore the first iteration will augment along a cycle of type (d, w_i, a, u, v, d). After the first iteration, an edge from V to U will become available, and therefore also a cycle of the type (b, x_i, c, v, u, b). According to Lemma 3.1, however, this cycle has higher mean cost than cycles of the type (d, w_i, a, u, v, d), and therefore the first m iterations will be of the type (d, w_i, a, u, v, d). After the first m iterations, the edge (d, w_1), the edge (w_1, a), and all edges in E_uv will be saturated. The available cycle types are now (b, x_i, c, v, u, b) and (b, x_i, c, v, d, w_{i+1}, a, u, b). According to Lemma 3.4, all cycles of type (b, x_i, c, v, u, b) have lower mean cost than all cycles of type (b, x_i, c, v, d, w_{i+1}, a, u, b). The next iteration will therefore augment along a cycle of type (b, x_i, c, v, u, b). After this iteration, an edge from U to V becomes available, and therefore also a cycle of type (d, w_{i+1}, a, u, v, d), but according to Lemma 3.2 all cycles of type (b, x_i, c, v, u, b) have lower mean cost than cycles of type (d, w_{i+1}, a, u, v, d), and therefore in iterations m + 1, ..., 2m the MMCC algorithm augments along cycles of type (b, x_i, c, v, u, b). After 2m iterations, we obtain G again, except that now the edges (d, w_1), (w_1, a), (b, x_1), and (x_1, c) are saturated and that there is some flow on the infinite-capacity edges of the types (a, u_i), (u_i, b), (c, v_i), and (v_i, d). The MMCC algorithm will keep alternating between m augmentations along cycles of type (d, w_i, a, u, v, d) and m augmentations along cycles of type (b, x_i, c, v, u, b), until all edges of the types (w_i, a) and (x_i, c) are saturated and no negative-cost cycles remain. The total number of iterations that the MMCC algorithm needs is therefore m(k_w + k_x).
The instance G and Theorem 3.5 allow us to state a lower bound on the number of iterations that the MMCC algorithm needs in the smoothed setting.

Theorem 3.6. For every n, every m ∈ {n, n+1, ..., n²}, and every φ ≥ 2, there exists an instance with Θ(n) nodes and Θ(m) edges for which the MMCC algorithm requires Ω(m log(φ)) iterations, independent of the realization of the edge costs.

Proof. Follows directly from the instance G, Theorem 3.5, and the definitions of k_w and k_x.

3.2 Lower Bound for φ Dependent on n

In Section 3.1 we considered the setting where φ does not depend on n. In this setting we showed that the MMCC algorithm needs Ω(m log(φ)) iterations. We can improve the lower bound if φ is much larger than n. In this section we consider the case where φ = Ω(n²). In particular, we show that for every n ≥ 4 and every m ∈ {n, ..., n²} there exists an instance with Θ(n) nodes, Θ(m) edges, and φ = Θ(n²) for which the MMCC algorithm needs Ω(mn) iterations. The initial residual network H that we use to show our bound is very similar to the initial residual network G that was used to show the bound in Section 3.1. Below we describe the differences (see Figure 2 for an illustration). We set φ = 400000n². The constant of 400000 is large, but for the sake of readability and ease of calculations we did not try to optimize it. The node set W now consists of the nodes {w_1, ..., w_n} and the node set X now consists of the nodes {x_1, ..., x_n}.

Figure 1: The instance G for which the MMCC algorithm needs Ω(m log(φ)) iterations, for n = 4, m = 9, and φ = 64. Next to the edges are the approximate edge costs.

Node a is split into two nodes a_1 and a_2. From node a_1 to a_2 there is a directed path consisting of n edges, all with infinite capacity and cost interval [0, 1/φ]. Edges (a, u_i) are replaced by edges (a_2, u_i) with infinite capacity and cost interval [0, 1/φ]. Edges (w_i, a) are replaced by edges (w_i, a_1) with capacity m and cost interval [−(1 − 3/n)^{2i−2}, −(1 − 3/n)^{2i−2} + 1/φ].

Node c is split into two nodes c_1 and c_2. From node c_1 to c_2 there is a directed path consisting of n edges, all with infinite capacity and cost interval [0, 1/φ]. Edges (c, v_i) are replaced by edges (c_2, v_i) with infinite capacity and cost interval [0, 1/φ]. Edges (x_i, c) are replaced by edges (x_i, c_1) with capacity m and cost interval [−(1 − 3/n)^{2i−1}, −(1 − 3/n)^{2i−1} + 1/φ].

Note that this is a valid choice of cost intervals for the edges (w_i, a_1) and (x_i, c_1) and that they all have negative costs, since (x_n, c_1) is the most expensive of them and we have

−(1 − 3/n)^{2n−1} + 1/(400000n²) ≤ −e^{−12} + 1/6400000 < 0. (5)

As in Section 3.1, there are three candidate types for the minimum-mean cost cycle: cycles of type (d, w, a, u, v, d), cycles of type (b, x, c, v, u, b), and cycles of type (d, w, a, u, b, x, c, v, d). Again we assume worst-case realizations of the edge costs and compare the mean costs of cycles of the different types in a series of lemmas.

Lemma 3.7. The cycle C_1 = (d, w_i, a, u, v, d) has lower mean cost than the cycle C_2 = (b, x_i, c, v, u, b).

Proof. Since the cycles have equal length, we can compare their total costs instead of their mean costs. We have

w(C_1) − w(C_2) ≤ (−(1 − 3/n)^{2i−2} + (n + 5)/φ) − (−(1 − 3/n)^{2i−1} − 1/φ) = −(3/n)(1 − 3/n)^{2i−2} + (n + 6)/φ ≤ −3e^{−12}/n + (n + 6)/(400000n²) < 0.

Here the second inequality holds since i ≤ n and (1 − 3/n)^{2n} ≥ e^{−12} for n ≥ 4.

Lemma 3.8. The cycle C_1 = (b, x_i, c, v, u, b) has lower mean cost than the cycle C_2 = (d, w_{i+1}, a, u, v, d).

Proof. Since the cycles have equal length, we can compare their total costs instead of their mean costs. We have

w(C_1) − w(C_2) ≤ (−(1 − 3/n)^{2i−1} + (n + 4)/φ) − (−(1 − 3/n)^{2i}) = −(3/n)(1 − 3/n)^{2i−1} + (n + 4)/φ ≤ −3e^{−12}/n + (n + 4)/(400000n²) < 0.

Here the second inequality holds since i + 1 ≤ n and (1 − 3/n)^{2n} ≥ e^{−12} for n ≥ 4.

Lemma 3.9. The cycle C_1 = (d, w_i, a, u, v, d) has lower mean cost than the cycle C_2 = (d, w_i, a, u, b, x_i, c, v, d).

Proof. Cycle C_1 consists of n + 5 edges, while cycle C_2 consists of 2n + 8 edges. We have

w(C_1)/|C_1| − w(C_2)/|C_2| ≤ (−(1 − 3/n)^{2i−2} + (n + 5)/φ)/(n + 5) − (−(1 − 3/n)^{2i−2} − (1 − 3/n)^{2i−1})/(2n + 8) = −(1 + 15/n)(1 − 3/n)^{2i−2}/((n + 5)(2n + 8)) + 1/φ ≤ −(1 + 15/n)e^{−12}/((n + 5)(2n + 8)) + 1/(400000n²) < 0.

Here the second inequality holds since i ≤ n and (1 − 3/n)^{2n} ≥ e^{−12} for n ≥ 4.

Lemma 3.10. The cycle C_1 = (b, x_i, c, v, u, b) has lower mean cost than the cycle C_2 = (b, x_i, c, v, d, w_{i+1}, a, u, b).

Proof. Cycle C_1 consists of n + 5 edges, while cycle C_2 consists of 2n + 8 edges. We have

w(C_1)/|C_1| − w(C_2)/|C_2| ≤ (−(1 − 3/n)^{2i−1} + (n + 4)/φ)/(n + 5) − (−(1 − 3/n)^{2i−1} − (1 − 3/n)^{2i})/(2n + 8) ≤ −(1 + 15/n)(1 − 3/n)^{2i−1}/((n + 5)(2n + 8)) + 1/φ ≤ −(1 + 15/n)e^{−12}/((n + 5)(2n + 8)) + 1/(400000n²) < 0.

Here the last inequality holds since i + 1 ≤ n and (1 − 3/n)^{2n} ≥ e^{−12} for n ≥ 4.

The above lemmas allow us to determine the number of iterations that the MMCC algorithm needs for the initial residual network H.

Theorem 3.11. The MMCC algorithm needs 2mn iterations for the initial residual network H, independent of the realization of the edge costs.

Proof. The proof is similar to the proof of Theorem 3.5. Since cycles of type (d, w_i, a, u, v, d) have lower mean cost than both cycles of type (b, x_i, c, v, u, b) and cycles of type (d, w_i, a, u, b, x_i, c, v, d), according to Lemma 3.7 and Lemma 3.9, the first m iterations will augment flow along cycles of type (d, w_i, a, u, v, d). After these m iterations, the edges (d, w_1) and (w_1, a_1) are saturated and the edges in E_uv have positive residual capacity only in the direction from V to U. Since cycles of type (b, x_i, c, v, u, b) have lower mean cost than both cycles of type (d, w_{i+1}, a, u, v, d) and cycles of type (b, x_i, c, v, d, w_{i+1}, a, u, b), according to Lemma 3.8 and Lemma 3.10, the next m iterations will augment flow along cycles of type (b, x_i, c, v, u, b). After the first 2m iterations, the residual network is the same as H, except that the edges (d, w_1), (w_1, a_1), (b, x_1), and (x_1, c_1) are saturated and there is some flow on several edges of infinite capacity. The MMCC algorithm will keep augmenting along m cycles of type (d, w_i, a, u, v, d) followed by m cycles of type (b, x_i, c, v, u, b), until all edges of type (w_i, a_1) and type (x_i, c_1) are saturated. At this point no negative cycles remain in the residual network and the MMCC algorithm terminates, after 2mn iterations in total.

The initial residual network H and Theorem 3.11 allow us to state a lower bound for the number of iterations that the MMCC algorithm needs in the smoothed setting for large φ.

Theorem 3.12. For every n ≥ 4 and every m ∈ {n, n+1, ..., n²}, there exists an instance with Θ(n) nodes and Θ(m) edges, and φ = Θ(n²), for which the MMCC algorithm requires Ω(mn) iterations, independent of the realization of the edge costs.

Proof.
Follows directly from the instance H and Theorem 3.11.

4 Lower bound for the Network Simplex Algorithm

In this section we provide a lower bound of Ω(m · min{n, φ} · φ) for the number of iterations that the Network Simplex (NS) algorithm requires in the setting of smoothed analysis. The instance of the minimum-cost flow problem that we use to show this lower bound is very similar to the instance used by Brunsch et al. [3] to show a lower bound on the number of iterations that the Successive Shortest Path algorithm needs in the smoothed setting. The differences are that they scaled their edge costs by a factor of φ, which we do not, that we add an extra path from node s to node t, and that the budgets of the nodes are defined slightly differently. For completeness, we describe the construction of the (slightly adapted) instance by Brunsch et al. below. For a more elaborate description of the instance, we refer to the original paper.

In the following we assume that all paths from s to t have pairwise different costs, which holds with probability 1, since the edge costs are drawn from continuous probability distributions. For given positive integers n, m ∈ {n, ..., n²}, and φ ≤ 2^n let k = ⌊log(φ)⌋ − 5 = O(n) and M = min{n, 2^{⌊log₂(φ)⌋}/4 − 2} = Θ(min{n, φ}). Like Brunsch et al. we assume that φ ≥ 64, which implies k, M ≥ 1. We construct a flow network with Θ(n) nodes, Θ(m) edges, and smoothing parameter φ for which the NS algorithm needs Θ(m · min{n, φ} · φ) iterations. The construction of the flow network G that we use to show our lower bound consists of three steps. First we define a flow network G_1. Next we describe how to obtain flow network G_{i+1} from flow network G_i. Finally, we construct G using G_k. As before, we give an interval of width at least 1/φ for each edge from which its costs are drawn uniformly at random.
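The perturbation model just described can be sketched directly: the adversary fixes, for every edge, an interval of width at least 1/φ, and the costs are then drawn uniformly at random from these intervals. The interval endpoints below are illustrative placeholders:

```python
import random

def draw_costs(intervals, phi, seed=0):
    """Draw each edge cost uniformly from its interval; a width of at
    least 1/phi keeps the density of every cost at most phi."""
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    for lo, hi in intervals.values():
        assert hi - lo >= 1.0 / phi - 1e-12, "interval narrower than 1/phi"
    return {e: rng.uniform(lo, hi) for e, (lo, hi) in intervals.items()}

phi = 64.0
intervals = {'cheap edge': (0.0, 1.0), 'E_UW edge': (7.0, 9.0)}
costs = draw_costs(intervals, phi)
print(all(lo <= costs[e] <= hi for e, (lo, hi) in intervals.items()))  # True
```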

Figure 2: The instance G for which the MMCC algorithm needs Ω(mn) iterations, shown for n = 4, m = 9, and φ = 4n². Next to the edges are the approximate edge costs.

Construction of G_1. For the first step, consider two sets U = {u_1, ..., u_n} and W = {w_1, ..., w_n} of nodes and an arbitrary set E_UW ⊆ U × W containing exactly |E_UW| = m edges. The initial flow network G_1 is defined as G_1 = (V_1, E_1) for V_1 = U ∪ W ∪ {s_1, t_1} and E_1 = ({s_1} × U) ∪ E_UW ∪ (W × {t_1}). The edges e from E_UW have capacity 1 and costs from the interval I_e = [7, 9]. The edges (s_1, u_i), u_i ∈ U, have capacity equal to the out-degree of u_i, the edges (w_j, t_1), w_j ∈ W, have capacity equal to the in-degree of w_j, and both have costs from the interval I_e = [0, 1] (see Figure 3).

Construction of G_{i+1} from G_i. Now we describe the second step of our construction. Given a flow network G_i = (V_i, E_i), we define G_{i+1} = (V_{i+1}, E_{i+1}), where V_{i+1} = V_i ∪ {s_{i+1}, t_{i+1}} and E_{i+1} = E_i ∪ ({s_{i+1}} × {s_i, t_i}) ∪ ({s_i, t_i} × {t_{i+1}}). Let N_i be the value of the maximum s_i-t_i flow in the graph G_i. The new edges e ∈ {(s_{i+1}, s_i), (t_i, t_{i+1})} have capacity u(e) = N_i and costs from the interval I_e = [0, 1]. The new edges e ∈ {(s_{i+1}, t_i), (s_i, t_{i+1})} also have capacity u(e) = N_i, but costs from the interval I_e = [2^{i+3} − 1, 2^{i+3} + 1] (see Figure 4).

Construction of G from G_k. Let F be the value of a maximum s_k-t_k flow in G_k. We will now use G_k to define G = (V, E) as follows (see also Figure 5). V := V_k ∪ A ∪ B ∪ C ∪ D ∪ Q ∪ {s, t}, with A := {a_1, a_2, ..., a_M}, B := {b_1, b_2, ..., b_M}, C := {c_1, c_2, ..., c_M}, D := {d_1, d_2, ..., d_M}, and Q := {q_1, q_2, ..., q_{2M}}.
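The recursive step can be checked on a toy instance: adding s_{i+1} and t_{i+1} with the four new edges of capacity N_i doubles the maximum flow value. The sketch below uses a plain Edmonds-Karp max-flow routine; the concrete edge set E_UW (for n = 3 and m = 7, as in Figure 3) is an arbitrary illustrative choice:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual s-t paths."""
    res = {u: dict(vs) for u, vs in cap.items()}       # residual capacities
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)     # reverse edges
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                   # BFS for a residual path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)          # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# G_1 for n = 3, m = 7 (cf. Figure 3); E_UW is an arbitrary set of 7 edges.
E_UW = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (2, 2)]
cap = {'s1': {}, 't1': {}}
for i in range(3):
    cap['u%d' % i] = {}
    cap['w%d' % i] = {'t1': 0}
for i, j in E_UW:
    cap['u%d' % i]['w%d' % j] = 1                           # E_UW edges: capacity 1
    cap['s1']['u%d' % i] = cap['s1'].get('u%d' % i, 0) + 1  # out-degree of u_i
    cap['w%d' % j]['t1'] += 1                               # in-degree of w_j
N1 = max_flow(cap, 's1', 't1')                              # saturates all of E_UW

# G_2: add s2 and t2; the four new edges all get capacity N1.
for a, b in [('s2', 's1'), ('s2', 't1'), ('s1', 't2'), ('t1', 't2')]:
    cap.setdefault(a, {})[b] = N1
cap.setdefault('t2', {})
N2 = max_flow(cap, 's2', 't2')
print(N1, N2)  # 7 14
```

In this toy instance N_1 = m = 7 and the added level doubles the flow value, matching the role of N_{i+1} in the construction.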

Figure 3: Example for G_1 with n = 3 and m = 7, with capacities different from 1 shown next to the edges and the cost intervals shown below each edge set. Dashed edges are in the initial spanning tree for the network simplex algorithm, while solid edges are not.

Figure 4: G_{i+1} with G_i as sub-graph, with edge costs next to the edges. Dashed edges are in the initial spanning tree for the network simplex algorithm, while solid edges are not.

E := E_k ∪ E_a ∪ E_b ∪ E_c ∪ E_d ∪ E_q.

E_a contains the edges (a_i, a_{i−1}), i ∈ {2, ..., M}, with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity; (s, a_i), i ∈ {1, ..., M}, with cost interval [0, 1] and capacity F; and (a_1, s_k) with cost interval [2^{k+4} − 1, 2^{k+4}] and infinite capacity.

E_b contains the edges (b_i, b_{i−1}), i ∈ {2, ..., M}, with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity; (s, b_i), i ∈ {1, ..., M}, with cost interval [0, 1] and capacity F; and (b_1, t_k) with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity.

E_c contains the edges (c_{i−1}, c_i), i ∈ {2, ..., M}, with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity; (c_i, t), i ∈ {1, ..., M}, with cost interval [0, 1] and capacity F; and (s_k, c_1) with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity.

E_d contains the edges (d_{i−1}, d_i), i ∈ {2, ..., M}, with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity; (d_i, t), i ∈ {1, ..., M}, with cost interval [0, 1] and capacity F; and (t_k, d_1) with cost interval [2^{k+4} − 1, 2^{k+4}] and infinite capacity.

E_q contains the edges (q_{i−1}, q_i), i ∈ {2, ..., 2M}, with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity; (s, q_1), with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity; and (q_{2M}, t), with cost interval [2^{k+5} − 1, 2^{k+5}] and infinite capacity.

The budgets of all nodes are 0, except for nodes s and t, which have budgets b(s) = 2MF and b(t) = −2MF. We choose as the initial spanning tree T for the NS algorithm the edges (s_1, u_i) (i = 1, ..., n) and (w_i, t_1) (i = 1, ..., n), (s_{i+1}, s_i) (i = 1, ..., k − 1) and (t_i, t_{i+1}) (i = 1, ..., k − 1), (s, a_1), (a_i, a_{i−1}) (i = 2, ..., M), and (a_1, s_k), (s_k, c_1) and (c_{i−1}, c_i) (i = 2, ..., M), (b_i, b_{i−1}) (i = 2, ..., M) and (b_1, t_k), (d_1, t), (t_k, d_1), and (d_{i−1}, d_i) (i = 2, ..., M), (s, q_1), (q_i, q_{i+1}) (i = 1, ..., 2M − 1), and (q_{2M}, t) (see Figure 3, Figure 4, and Figure 5). In addition, L = E \ T and Ũ = ∅.
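A spanning tree structure determines node potentials under which every tree edge has reduced cost 0; the reduced cost of a non-tree edge then equals the cost of the unique cycle that the edge closes with the tree, which is exactly the quantity the NS pivot rule inspects. A minimal sketch, with illustrative node numbering and costs:

```python
def potentials(n, tree_edges, root=0):
    """Node potentials pi with reduced cost c(u,v) - pi[u] + pi[v] = 0
    on every tree edge, computed by a traversal from the root."""
    adj = {v: [] for v in range(n)}
    for u, v, c in tree_edges:
        adj[u].append((v, c, +1))    # traverse (u, v) forwards
        adj[v].append((u, c, -1))    # traverse (u, v) backwards
    pi, stack = {root: 0.0}, [root]
    while stack:
        u = stack.pop()
        for v, c, sgn in adj[u]:
            if v not in pi:
                pi[v] = pi[u] - sgn * c   # forces reduced cost 0 on (u, v)
                stack.append(v)
    return pi

def reduced_cost(edge, pi):
    u, v, c = edge
    return c - pi[u] + pi[v]

# Triangle: tree edges (0,1) and (1,2); the non-tree edge (2,0) closes the
# cycle 0 -> 1 -> 2 -> 0 of total cost 3 + 4 - 10 = -3.
tree = [(0, 1, 3.0), (1, 2, 4.0)]
pi = potentials(3, tree)
print(reduced_cost((2, 0, -10.0), pi))  # -3.0
```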
Spanning tree structure (T, L, Ũ) corresponds to the flow that sends 2MF units of flow on the path (s, q_1, ..., q_{2M}, t) and does not send any flow on other edges. To prove our lower bound on the number of iterations that the NS algorithm needs for flow network G, we link the iterations of the NS algorithm to the iterations of the Successive Shortest Path (SSP) algorithm for flow network H, where H is obtained from G by removing all nodes q_i (i = 1, ..., 2M) and all their incident edges. As shown by Brunsch et al. [3, Theorem 23], the

Figure 5: G with G_k as sub-graph, with approximate edge costs next to the edges. Dashed edges are in the initial spanning tree for the network simplex algorithm, while solid edges are not.

SSP algorithm needs Ω(m · min{n, φ} · φ) iterations for H. We show that each non-degenerate iteration of the NS algorithm on G corresponds with an iteration of the SSP algorithm on H and that therefore the NS algorithm needs Ω(m · min{n, φ} · φ) iterations for flow network G as well. In our analysis we use many results on the costs of paths in H by Brunsch et al. [3]. We will not prove these results again, but refer to the original paper.

We now first observe that path (s, q_1, ..., q_{2M}, t) is the most expensive path from s to t in G. Since paths from s_k to t_k have cost less than 2^{k+3} [3, Lemma 21], the most expensive s-t paths that do not use the nodes q_i are (s, a_M, ..., c_M, t) and (s, b_M, ..., d_M, t). Both those paths have the same distribution for their costs. If we arbitrarily choose path (s, a_M, ..., c_M, t) and compare its cost with the cost of path (s, q_1, ..., q_{2M}, t), assuming worst-case edge cost realizations, we have

c(s, q_1, ..., q_{2M}, t) − c(s, a_M, ..., c_M, t) ≥ (2M + 1)(2^{k+5} − 1) − ((2M − 1) · 2^{k+5} + 2^{k+4} + 2) = 3 · 2^{k+4} − 2M − 3 > 0,

by the definition of k and M and because φ ≥ 64, which shows that path (s, q_1, ..., q_{2M}, t) is the most expensive path from s to t in G.

Since (s, q_1, ..., q_{2M}, t) is the most expensive s-t path, we can obtain a negative cycle C by combining the reverse edges of (s, q_1, ..., q_{2M}, t) with the edges of another s-t path P. Clearly, the cheaper P is, the cheaper is C. In addition, if all edges of C except for one edge (i, j) are in the current spanning tree T, then the edge (i, j) has reduced cost equal to the cost of C, since all edges in T have reduced cost 0. Using these two observations, we can conclude

that as long as all negative cycles that can be formed using edges in the current spanning tree T plus one edge outside of T that has positive residual capacity in the current residual network consist of path (t, q_{2M}, ..., q_1, s) plus an s-t path P, then the NS algorithm with the most negative edge pivot rule will pivot on the edge that together with the edges in T forms the cheapest s-t path.

Brunsch et al. [3] showed that the SSP algorithm encounters 2MF paths on H. Let P_1, ..., P_{2MF} be the paths encountered on H by the SSP algorithm, ordered from cheapest to most expensive. We refer the reader to the paper of Brunsch et al. for a description of these paths. In the following we show that in the i-th non-degenerate iteration of the NS algorithm on G, flow is sent along cycle (t, q_{2M}, ..., q_1, s) ∪ P_i. We will not provide all the calculations needed to compare the reduced costs of the candidate edges for addition to the spanning tree in each iteration, but it can be checked by tedious computation that the claimed edge is indeed the one with the lowest reduced cost.

If flow is sent over one of the edges in E_UW, that edge is always a candidate leaving edge, since edges in E_UW have capacity 1 and all other capacities are integral. For simplicity we assume that in cases where multiple edges in the cycle become saturated simultaneously by the flow augmentation, it is always the edge from E_UW that leaves the spanning tree.

In the first iteration of the NS algorithm on G, all edges in E_UW have negative reduced cost and positive residual capacity, and the edge (u_i, w_j) that together with the starting spanning tree T contains path P_1 is added to T, flow is augmented along cycle (t, q_{2M}, ..., q_1, s) ∪ P_1, and (u_i, w_j) becomes saturated and leaves T again. For the second iteration, edge (u_i, w_j) is saturated and therefore the edge that together with T contains P_2 will be added to T. This will continue for the first m iterations.
At this point, the cheapest s-t path using edges in T plus one edge with positive residual capacity is the path that is obtained by using either edge (s_2, t_1) or edge (s_1, t_2) (depending on the realization of the edge costs). The next two iterations will therefore be degenerate. In one of these iterations (s_2, t_1) is added to the spanning tree, but edge (t_1, t_2) is saturated, prevents any flow being sent along the cycle, and is therefore removed from the spanning tree. In the other iteration (s_1, t_2) is added to the spanning tree and (s_2, s_1) is removed. After these two iterations the edges in E_UW become eligible for augmenting flow in the reverse direction and the next m iterations augment flow along the cycles (t, q_{2M}, ..., q_1, s) ∪ P_{m+1}, ..., (t, q_{2M}, ..., q_1, s) ∪ P_{2m}.

Analogously to the above, every time an edge (s_{i+1}, s_i) gets saturated, two degenerate iterations take place in which edges (s_i, t_{i+1}) and (s_{i+1}, t_i) are added to the spanning tree. This allows flow to be sent through G_i in the reverse direction, that is, from t_i to s_i. Similarly, every time an edge (t_{i+1}, s_i) (that is, the reverse edge of an original edge (s_i, t_{i+1})) in the residual network gets saturated, two degenerate iterations take place in which edges (t_i, t_{i+1}) and (s_{i+1}, s_i) are added to the spanning tree.

After F iterations, there are no paths with positive residual capacity from s_k to t_k. At this point another two degenerate iterations take place. In one of them edge (c_1, t) is added to the spanning tree, but no flow is sent since (s, a_1) has zero residual capacity and therefore (s, a_1) leaves the spanning tree. In the other iteration (s, b_1) is added to the spanning tree and (d_1, t) leaves. Now the edges in E_UW can be added to the spanning tree again and flow is augmented along them in the reverse direction during the next m iterations. In particular, in the next iteration flow is augmented along cycle (t, q_{2M}, ..., q_1, s) ∪ P_{F+1}. Analogously to the above, every time an edge (s, a_i) gets saturated, two degenerate iterations take place.
In one of them (c_i, t) enters the spanning tree and (s, a_i) leaves. In the other (s, b_i)

enters the spanning tree and (d_1, t) leaves. Also, every time an edge (s, b_i) gets saturated, two degenerate iterations take place. In one of them (d_{i+1}, t) enters the spanning tree and (s, b_i) leaves. In the other (s, a_{i+1}) enters the spanning tree and (c_i, t) leaves. Finally, after 2MF non-degenerate iterations none of the edges not in the spanning tree has both negative reduced cost and positive residual capacity and therefore the NS algorithm terminates. From the above discussion we can conclude that the NS algorithm on G needs 2MF non-degenerate iterations plus several degenerate ones.

Theorem 4.1. For flow network G and initial spanning tree structure (T, L, Ũ), the NS algorithm needs 2MF non-degenerate iterations with probability 1.

Proof. Follows immediately from the discussion above.

Theorem 4.1 provides a lower bound for the number of iterations that the NS algorithm needs in the smoothed setting.

Theorem 4.2. For every n, every m ∈ {n, ..., n²}, and every φ ≤ 2^n there exists a flow network with Θ(n) nodes and Θ(m) edges, and an initial spanning tree structure for which the Network Simplex algorithm needs Ω(m · min{n, φ} · φ) non-degenerate iterations with probability 1.

Proof. Follows directly from Theorem 4.1 and the definition of M and F.

5 Discussion

In Section 4 we showed a smoothed lower bound of Ω(m · min{n, φ} · φ) for the number of iterations that the NS algorithm needs. This bound is the same as the smoothed lower bound that Brunsch et al. [3] showed for the SSP algorithm. For the SSP algorithm this lower bound is even tight in case φ = Ω(n). Still, the NS algorithm is usually much faster in practice than the SSP algorithm. We believe that the reason for this difference is that the time needed per iteration is much less for the NS algorithm than for the SSP algorithm. In practical implementations, the entering edge is usually picked from a small subset (for example of size Θ(√m)) of the edges, which removes the necessity of scanning all edges for the edge which maximally violates its optimality conditions.
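A hedged sketch of such a candidate-list rule (often called block search): scan a block of roughly √m non-tree edges, wrapping around, and pivot on the most negative reduced cost found in the first block that contains one. The block size and the wrap-around policy below are illustrative choices, not prescribed by the text:

```python
import math

def block_search_pivot(reduced_costs, start):
    """Return (entering edge index, next start) using block search, or
    (None, start) if no edge has negative reduced cost (optimality)."""
    m = len(reduced_costs)
    block = max(1, math.isqrt(m))   # block size ~ sqrt(m)
    i, scanned = start, 0
    while scanned < m:
        cand = min(range(i, i + block), key=lambda j: reduced_costs[j % m])
        if reduced_costs[cand % m] < 0:
            return cand % m, (i + block) % m
        i += block
        scanned += block
    return None, start

rc = [0.5, 0.2, -0.1, 0.3, -0.9, 0.4, 0.1, 0.0, 0.6]
print(block_search_pivot(rc, 0))  # (2, 3)
```

Note that the rule settles for the most negative edge inside the scanned block (-0.1 here) rather than the globally most negative one (-0.9), trading pivot quality for a much cheaper scan.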
Also, the spanning tree structure allows for fast updating of the flow and node potentials, in particular when the flow changes on only a small fraction of the edges. For the SSP algorithm an iteration consists of finding a shortest path, which takes O(m + n log(n)) time. The experimental results of Kovács [15] seem to support this claim, since on all test instances the SSP algorithm is slower than the NS algorithm, but never more than a factor m. To allow a better comparison of the SSP algorithm and the NS algorithm in the smoothed setting, it would be useful to have a smoothed upper bound on the running time of the NS algorithm. Finding such an upper bound is our main open problem.

There is a gap between our smoothed lower bound of Ω(m log(φ)) (Section 3.1) for the number of iterations that the MMCC algorithm requires and our smoothed upper bound of O(mn² log(n) log(φ)). Since our lower bound for the MMCC algorithm is weaker than the lower bound for the SSP algorithm, while the MMCC algorithm performs worse on practical instances than the SSP algorithm, we believe that our lower bound for the MMCC algorithm can be strengthened. Our stronger lower bound of Ω(mn) in case φ = Ω(n²) (Section 3.2) is another indication that this is likely possible.

References

[1] Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, 1993.

[2] René Beier and Berthold Vöcking. Random knapsack in expected polynomial time. Journal of Computer and System Sciences, 69(3):306-329, 2004.

[3] Tobias Brunsch, Kamiel Cornelissen, Bodo Manthey, Heiko Röglin, and Clemens Rösner. Smoothed analysis of the successive shortest path algorithm. Computing Research Repository [cs.DS], arXiv, 2015. Preliminary version at SODA 2013.

[4] Robert G. Busacker and Paul J. Gowen. A procedure for determining a family of minimum-cost network flow patterns. Technical Report Technical Paper 15, Operations Research Office, 1960.

[5] George B. Dantzig. Linear programming and extensions. Rand Corporation Research Study. Princeton Univ. Press, Princeton, NJ, 1963.

[6] Jack Edmonds and Richard M. Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM, 19(2):248-264, 1972.

[7] Lester R. Ford, Jr. and Delbert R. Fulkerson. Flows in Networks. Princeton University Press, 1962.

[8] Delbert R. Fulkerson. An out-of-kilter algorithm for minimal cost flow problems. Journal of the SIAM, 9(1):18-27, 1961.

[9] Andrew V. Goldberg and Robert E. Tarjan. Finding minimum-cost circulations by canceling negative cycles. Journal of the ACM, 36(4):873-886, October 1989.

[10] Masao Iri. A new method for solving transportation-network problems. Journal of the Operations Research Society of Japan, 3(1,2):27-87, 1960.

[11] William S. Jewell. Optimal flow through networks. Operations Research, 10(4):476-499, 1962.

[12] Richard M. Karp. A characterization of the minimum cycle mean in a digraph. Discrete Mathematics, 23(3):309-311, 1978.

[13] Morton Klein. A primal method for minimal cost flows with applications to the assignment and transportation problems. Management Science, 14(3):205-220, 1967.

[14] Bernhard Korte and Jens Vygen. Combinatorial Optimization: Theory and Algorithms. Springer Publishing Company, Incorporated, 1st edition, 2007.

[15] Péter Kovács. Minimum-cost flow algorithms: An experimental evaluation. Optimization Methods and Software, 30(1):94-127, 2015.
[16] Bodo Manthey and Heiko Röglin. Smoothed analysis: Analysis of algorithms beyond worst case. it - Information Technology, 53(6):280-286, 2011.


More information

The Simplex algorithm: Introductory example. The Simplex algorithm: Introductory example (2)

The Simplex algorithm: Introductory example. The Simplex algorithm: Introductory example (2) Discrete Mathematics for Bioiformatics WS 07/08, G. W. Klau, 23. Oktober 2007, 12:21 1 The Simplex algorithm: Itroductory example The followig itroductio to the Simplex algorithm is from the book Liear

More information

Recurrence Relations

Recurrence Relations Recurrece Relatios Aalysis of recursive algorithms, such as: it factorial (it ) { if (==0) retur ; else retur ( * factorial(-)); } Let t be the umber of multiplicatios eeded to calculate factorial(). The

More information

NICK DUFRESNE. 1 1 p(x). To determine some formulas for the generating function of the Schröder numbers, r(x) = a(x) =

NICK DUFRESNE. 1 1 p(x). To determine some formulas for the generating function of the Schröder numbers, r(x) = a(x) = AN INTRODUCTION TO SCHRÖDER AND UNKNOWN NUMBERS NICK DUFRESNE Abstract. I this article we will itroduce two types of lattice paths, Schröder paths ad Ukow paths. We will examie differet properties of each,

More information

1 of 7 7/16/2009 6:06 AM Virtual Laboratories > 6. Radom Samples > 1 2 3 4 5 6 7 6. Order Statistics Defiitios Suppose agai that we have a basic radom experimet, ad that X is a real-valued radom variable

More information

Definitions and Theorems. where x are the decision variables. c, b, and a are constant coefficients.

Definitions and Theorems. where x are the decision variables. c, b, and a are constant coefficients. Defiitios ad Theorems Remember the scalar form of the liear programmig problem, Miimize, Subject to, f(x) = c i x i a 1i x i = b 1 a mi x i = b m x i 0 i = 1,2,, where x are the decisio variables. c, b,

More information

Sequences and Series of Functions

Sequences and Series of Functions Chapter 6 Sequeces ad Series of Fuctios 6.1. Covergece of a Sequece of Fuctios Poitwise Covergece. Defiitio 6.1. Let, for each N, fuctio f : A R be defied. If, for each x A, the sequece (f (x)) coverges

More information

Vector Quantization: a Limiting Case of EM

Vector Quantization: a Limiting Case of EM . Itroductio & defiitios Assume that you are give a data set X = { x j }, j { 2,,, }, of d -dimesioal vectors. The vector quatizatio (VQ) problem requires that we fid a set of prototype vectors Z = { z

More information

R is a scalar defined as follows:

R is a scalar defined as follows: Math 8. Notes o Dot Product, Cross Product, Plaes, Area, ad Volumes This lecture focuses primarily o the dot product ad its may applicatios, especially i the measuremet of agles ad scalar projectio ad

More information

A class of spectral bounds for Max k-cut

A class of spectral bounds for Max k-cut A class of spectral bouds for Max k-cut Miguel F. Ajos, José Neto December 07 Abstract Let G be a udirected ad edge-weighted simple graph. I this paper we itroduce a class of bouds for the maximum k-cut

More information

Optimization Methods: Linear Programming Applications Assignment Problem 1. Module 4 Lecture Notes 3. Assignment Problem

Optimization Methods: Linear Programming Applications Assignment Problem 1. Module 4 Lecture Notes 3. Assignment Problem Optimizatio Methods: Liear Programmig Applicatios Assigmet Problem Itroductio Module 4 Lecture Notes 3 Assigmet Problem I the previous lecture, we discussed about oe of the bech mark problems called trasportatio

More information

Polynomial identity testing and global minimum cut

Polynomial identity testing and global minimum cut CHAPTER 6 Polyomial idetity testig ad global miimum cut I this lecture we will cosider two further problems that ca be solved usig probabilistic algorithms. I the first half, we will cosider the problem

More information

CS / MCS 401 Homework 3 grader solutions

CS / MCS 401 Homework 3 grader solutions CS / MCS 401 Homework 3 grader solutios assigmet due July 6, 016 writte by Jāis Lazovskis maximum poits: 33 Some questios from CLRS. Questios marked with a asterisk were ot graded. 1 Use the defiitio of

More information

Riesz-Fischer Sequences and Lower Frame Bounds

Riesz-Fischer Sequences and Lower Frame Bounds Zeitschrift für Aalysis ud ihre Aweduge Joural for Aalysis ad its Applicatios Volume 1 (00), No., 305 314 Riesz-Fischer Sequeces ad Lower Frame Bouds P. Casazza, O. Christese, S. Li ad A. Lider Abstract.

More information

The Choquet Integral with Respect to Fuzzy-Valued Set Functions

The Choquet Integral with Respect to Fuzzy-Valued Set Functions The Choquet Itegral with Respect to Fuzzy-Valued Set Fuctios Weiwei Zhag Abstract The Choquet itegral with respect to real-valued oadditive set fuctios, such as siged efficiecy measures, has bee used i

More information

Product measures, Tonelli s and Fubini s theorems For use in MAT3400/4400, autumn 2014 Nadia S. Larsen. Version of 13 October 2014.

Product measures, Tonelli s and Fubini s theorems For use in MAT3400/4400, autumn 2014 Nadia S. Larsen. Version of 13 October 2014. Product measures, Toelli s ad Fubii s theorems For use i MAT3400/4400, autum 2014 Nadia S. Larse Versio of 13 October 2014. 1. Costructio of the product measure The purpose of these otes is to preset the

More information

Notes for Lecture 11

Notes for Lecture 11 U.C. Berkeley CS78: Computatioal Complexity Hadout N Professor Luca Trevisa 3/4/008 Notes for Lecture Eigevalues, Expasio, ad Radom Walks As usual by ow, let G = (V, E) be a udirected d-regular graph with

More information

( ) = p and P( i = b) = q.

( ) = p and P( i = b) = q. MATH 540 Radom Walks Part 1 A radom walk X is special stochastic process that measures the height (or value) of a particle that radomly moves upward or dowward certai fixed amouts o each uit icremet of

More information

Seunghee Ye Ma 8: Week 5 Oct 28

Seunghee Ye Ma 8: Week 5 Oct 28 Week 5 Summary I Sectio, we go over the Mea Value Theorem ad its applicatios. I Sectio 2, we will recap what we have covered so far this term. Topics Page Mea Value Theorem. Applicatios of the Mea Value

More information

Entropy and Ergodic Theory Lecture 5: Joint typicality and conditional AEP

Entropy and Ergodic Theory Lecture 5: Joint typicality and conditional AEP Etropy ad Ergodic Theory Lecture 5: Joit typicality ad coditioal AEP 1 Notatio: from RVs back to distributios Let (Ω, F, P) be a probability space, ad let X ad Y be A- ad B-valued discrete RVs, respectively.

More information

SECTION 1.5 : SUMMATION NOTATION + WORK WITH SEQUENCES

SECTION 1.5 : SUMMATION NOTATION + WORK WITH SEQUENCES SECTION 1.5 : SUMMATION NOTATION + WORK WITH SEQUENCES Read Sectio 1.5 (pages 5 9) Overview I Sectio 1.5 we lear to work with summatio otatio ad formulas. We will also itroduce a brief overview of sequeces,

More information

Commutativity in Permutation Groups

Commutativity in Permutation Groups Commutativity i Permutatio Groups Richard Wito, PhD Abstract I the group Sym(S) of permutatios o a oempty set S, fixed poits ad trasiet poits are defied Prelimiary results o fixed ad trasiet poits are

More information

Classification of problem & problem solving strategies. classification of time complexities (linear, logarithmic etc)

Classification of problem & problem solving strategies. classification of time complexities (linear, logarithmic etc) Classificatio of problem & problem solvig strategies classificatio of time complexities (liear, arithmic etc) Problem subdivisio Divide ad Coquer strategy. Asymptotic otatios, lower boud ad upper boud:

More information

Math 155 (Lecture 3)

Math 155 (Lecture 3) Math 55 (Lecture 3) September 8, I this lecture, we ll cosider the aswer to oe of the most basic coutig problems i combiatorics Questio How may ways are there to choose a -elemet subset of the set {,,,

More information

Stat 421-SP2012 Interval Estimation Section

Stat 421-SP2012 Interval Estimation Section Stat 41-SP01 Iterval Estimatio Sectio 11.1-11. We ow uderstad (Chapter 10) how to fid poit estimators of a ukow parameter. o However, a poit estimate does ot provide ay iformatio about the ucertaity (possible

More information

Some special clique problems

Some special clique problems Some special clique problems Reate Witer Istitut für Iformatik Marti-Luther-Uiversität Halle-Witteberg Vo-Seckedorff-Platz, D 0620 Halle Saale Germay Abstract: We cosider graphs with cliques of size k

More information

Comparison Study of Series Approximation. and Convergence between Chebyshev. and Legendre Series

Comparison Study of Series Approximation. and Convergence between Chebyshev. and Legendre Series Applied Mathematical Scieces, Vol. 7, 03, o. 6, 3-337 HIKARI Ltd, www.m-hikari.com http://d.doi.org/0.988/ams.03.3430 Compariso Study of Series Approimatio ad Covergece betwee Chebyshev ad Legedre Series

More information

Math 113, Calculus II Winter 2007 Final Exam Solutions

Math 113, Calculus II Winter 2007 Final Exam Solutions Math, Calculus II Witer 7 Fial Exam Solutios (5 poits) Use the limit defiitio of the defiite itegral ad the sum formulas to compute x x + dx The check your aswer usig the Evaluatio Theorem Solutio: I this

More information

Problem Set 4 Due Oct, 12

Problem Set 4 Due Oct, 12 EE226: Radom Processes i Systems Lecturer: Jea C. Walrad Problem Set 4 Due Oct, 12 Fall 06 GSI: Assae Gueye This problem set essetially reviews detectio theory ad hypothesis testig ad some basic otios

More information

Discrete Mathematics and Probability Theory Summer 2014 James Cook Note 15

Discrete Mathematics and Probability Theory Summer 2014 James Cook Note 15 CS 70 Discrete Mathematics ad Probability Theory Summer 2014 James Cook Note 15 Some Importat Distributios I this ote we will itroduce three importat probability distributios that are widely used to model

More information

Quantum Computing Lecture 7. Quantum Factoring

Quantum Computing Lecture 7. Quantum Factoring Quatum Computig Lecture 7 Quatum Factorig Maris Ozols Quatum factorig A polyomial time quatum algorithm for factorig umbers was published by Peter Shor i 1994. Polyomial time meas that the umber of gates

More information

Lecture 19: Convergence

Lecture 19: Convergence Lecture 19: Covergece Asymptotic approach I statistical aalysis or iferece, a key to the success of fidig a good procedure is beig able to fid some momets ad/or distributios of various statistics. I may

More information

Large holes in quasi-random graphs

Large holes in quasi-random graphs Large holes i quasi-radom graphs Joaa Polcy Departmet of Discrete Mathematics Adam Mickiewicz Uiversity Pozań, Polad joaska@amuedupl Submitted: Nov 23, 2006; Accepted: Apr 10, 2008; Published: Apr 18,

More information

MAT 271 Project: Partial Fractions for certain rational functions

MAT 271 Project: Partial Fractions for certain rational functions MAT 7 Project: Partial Fractios for certai ratioal fuctios Prerequisite kowledge: partial fractios from MAT 7, a very good commad of factorig ad complex umbers from Precalculus. To complete this project,

More information

Disjoint set (Union-Find)

Disjoint set (Union-Find) CS124 Lecture 7 Fall 2018 Disjoit set (Uio-Fid) For Kruskal s algorithm for the miimum spaig tree problem, we foud that we eeded a data structure for maitaiig a collectio of disjoit sets. That is, we eed

More information

ABOUT CHAOS AND SENSITIVITY IN TOPOLOGICAL DYNAMICS

ABOUT CHAOS AND SENSITIVITY IN TOPOLOGICAL DYNAMICS ABOUT CHAOS AND SENSITIVITY IN TOPOLOGICAL DYNAMICS EDUARD KONTOROVICH Abstract. I this work we uify ad geeralize some results about chaos ad sesitivity. Date: March 1, 005. 1 1. Symbolic Dyamics Defiitio

More information

CHAPTER 5. Theory and Solution Using Matrix Techniques

CHAPTER 5. Theory and Solution Using Matrix Techniques A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 3 A COLLECTION OF HANDOUTS ON SYSTEMS OF ORDINARY DIFFERENTIAL

More information

Stochastic Matrices in a Finite Field

Stochastic Matrices in a Finite Field Stochastic Matrices i a Fiite Field Abstract: I this project we will explore the properties of stochastic matrices i both the real ad the fiite fields. We first explore what properties 2 2 stochastic matrices

More information

Sequences A sequence of numbers is a function whose domain is the positive integers. We can see that the sequence

Sequences A sequence of numbers is a function whose domain is the positive integers. We can see that the sequence Sequeces A sequece of umbers is a fuctio whose domai is the positive itegers. We ca see that the sequece 1, 1, 2, 2, 3, 3,... is a fuctio from the positive itegers whe we write the first sequece elemet

More information

Sequences, Mathematical Induction, and Recursion. CSE 2353 Discrete Computational Structures Spring 2018

Sequences, Mathematical Induction, and Recursion. CSE 2353 Discrete Computational Structures Spring 2018 CSE 353 Discrete Computatioal Structures Sprig 08 Sequeces, Mathematical Iductio, ad Recursio (Chapter 5, Epp) Note: some course slides adopted from publisher-provided material Overview May mathematical

More information

Summary: CORRELATION & LINEAR REGRESSION. GC. Students are advised to refer to lecture notes for the GC operations to obtain scatter diagram.

Summary: CORRELATION & LINEAR REGRESSION. GC. Students are advised to refer to lecture notes for the GC operations to obtain scatter diagram. Key Cocepts: 1) Sketchig of scatter diagram The scatter diagram of bivariate (i.e. cotaiig two variables) data ca be easily obtaied usig GC. Studets are advised to refer to lecture otes for the GC operatios

More information

Lecture 2 Clustering Part II

Lecture 2 Clustering Part II COMS 4995: Usupervised Learig (Summer 8) May 24, 208 Lecture 2 Clusterig Part II Istructor: Nakul Verma Scribes: Jie Li, Yadi Rozov Today, we will be talkig about the hardess results for k-meas. More specifically,

More information

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss ECE 90 Lecture : Complexity Regularizatio ad the Squared Loss R. Nowak 5/7/009 I the previous lectures we made use of the Cheroff/Hoeffdig bouds for our aalysis of classifier errors. Hoeffdig s iequality

More information

The Growth of Functions. Theoretical Supplement

The Growth of Functions. Theoretical Supplement The Growth of Fuctios Theoretical Supplemet The Triagle Iequality The triagle iequality is a algebraic tool that is ofte useful i maipulatig absolute values of fuctios. The triagle iequality says that

More information

Probability, Expectation Value and Uncertainty

Probability, Expectation Value and Uncertainty Chapter 1 Probability, Expectatio Value ad Ucertaity We have see that the physically observable properties of a quatum system are represeted by Hermitea operators (also referred to as observables ) such

More information

Resolution Proofs of Generalized Pigeonhole Principles

Resolution Proofs of Generalized Pigeonhole Principles Resolutio Proofs of Geeralized Pigeohole Priciples Samuel R. Buss Departmet of Mathematics Uiversity of Califoria, Berkeley Győrgy Turá Departmet of Mathematics, Statistics, ad Computer Sciece Uiversity

More information

REGRESSION (Physics 1210 Notes, Partial Modified Appendix A)

REGRESSION (Physics 1210 Notes, Partial Modified Appendix A) REGRESSION (Physics 0 Notes, Partial Modified Appedix A) HOW TO PERFORM A LINEAR REGRESSION Cosider the followig data poits ad their graph (Table I ad Figure ): X Y 0 3 5 3 7 4 9 5 Table : Example Data

More information

Chapter 6 Infinite Series

Chapter 6 Infinite Series Chapter 6 Ifiite Series I the previous chapter we cosidered itegrals which were improper i the sese that the iterval of itegratio was ubouded. I this chapter we are goig to discuss a topic which is somewhat

More information

Chapter 6 Principles of Data Reduction

Chapter 6 Principles of Data Reduction Chapter 6 for BST 695: Special Topics i Statistical Theory. Kui Zhag, 0 Chapter 6 Priciples of Data Reductio Sectio 6. Itroductio Goal: To summarize or reduce the data X, X,, X to get iformatio about a

More information

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f.

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f. Lecture 5 Let us give oe more example of MLE. Example 3. The uiform distributio U[0, ] o the iterval [0, ] has p.d.f. { 1 f(x =, 0 x, 0, otherwise The likelihood fuctio ϕ( = f(x i = 1 I(X 1,..., X [0,

More information

MATH301 Real Analysis (2008 Fall) Tutorial Note #7. k=1 f k (x) converges pointwise to S(x) on E if and

MATH301 Real Analysis (2008 Fall) Tutorial Note #7. k=1 f k (x) converges pointwise to S(x) on E if and MATH01 Real Aalysis (2008 Fall) Tutorial Note #7 Sequece ad Series of fuctio 1: Poitwise Covergece ad Uiform Covergece Part I: Poitwise Covergece Defiitio of poitwise covergece: A sequece of fuctios f

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2016 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information