Rumour spreading and graph conductance


Flavio Chierichetti, Silvio Lattanzi, Alessandro Panconesi
Dipartimento di Informatica, Sapienza Università di Roma

December 3, 2010

Abstract

We show that if a connected graph with n nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within Õ(φ⁻¹ · log n) many rounds with high probability, regardless of the source, by using the PUSH-PULL strategy. The Õ(·) notation hides a polylogarithmic factor. This result is almost tight, since there exist graphs of n nodes, and conductance φ, with diameter Ω(φ⁻¹ · log n). If, in addition, the network satisfies some kind of uniformity condition on the degrees, our analysis implies that both PUSH and PULL, by themselves, successfully broadcast the message to every node in the same number of rounds. Finally, we also show a strong connection between rumour spreading and the spectral sparsification procedure of Spielman and Teng [6].

1 Introduction

Rumour spreading, also known as randomized broadcast or randomized gossip (all terms that will be used as synonyms throughout the paper), refers to the following distributed algorithm. Starting with one source node with a message, the protocol proceeds in a sequence of synchronous rounds with the goal of broadcasting the message, i.e. delivering it to every node in the network. At round t ≥ 0, every node that knows the message selects a neighbour uniformly at random, to which the message is forwarded. This is the so-called PUSH strategy. The PULL variant is specular: at round t ≥ 0, every node that does not yet have the message selects a neighbour uniformly at random and asks for the information, which is transferred provided that the queried neighbour knows it. Finally, the PUSH-PULL strategy is a combination of both. In round t ≥ 0, each node selects a random neighbour, to perform a PUSH if it has the information or a PULL in the opposite case. These three strategies were introduced in [9].
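For concreteness, the three strategies just described can be simulated in a few lines. The sketch below is mine (a dict-of-lists adjacency representation, synchronous rounds, exchanges based on the state at the start of each round), not code from the paper:

```python
import random

def rumour_round(adj, informed, mode="push-pull"):
    """One synchronous round. Every node picks one uniformly random
    neighbour; informed nodes PUSH along their pick, uninformed nodes
    PULL along theirs. Exchanges use the state at the start of the round."""
    new_informed = set(informed)
    for v in adj:
        u = random.choice(adj[v])
        if v in informed and mode in ("push", "push-pull"):
            new_informed.add(u)        # PUSH: forward the message to u
        elif v not in informed and mode in ("pull", "push-pull"):
            if u in informed:
                new_informed.add(v)    # PULL: ask u, receive if u knows it
    return new_informed

def broadcast_time(adj, source, mode="push-pull", cap=10_000):
    """Number of rounds until every node is informed (capped at `cap`)."""
    informed, rounds = {source}, 0
    while len(informed) < len(adj) and rounds < cap:
        informed = rumour_round(adj, informed, mode)
        rounds += 1
    return rounds, informed
```

On a path with the source at an endpoint, the message advances at most one hop per round, so at least n − 1 rounds are needed; this matches the observation below that at least diameter-many rounds are always necessary.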
One of the most studied questions for rumour spreading concerns its completion time: how many rounds will it take for one of the above strategies to disseminate the information to all nodes in the graph, assuming a worst-case source? We will say that rumour spreading is fast if its completion time is poly-logarithmic in the size of the network regardless of the source, and that it is slow otherwise.

Part of this work originally appeared in the proceedings of SODA 2010 [7] and STOC 2010 [6]. This research was partially supported by a grant of Yahoo! Research and by an IBM Faculty Award.

Randomized broadcast has been intensely investigated (see the related-work section). Our long-term goal is to characterize a set of necessary and/or sufficient conditions for rumour spreading to be fast in a given network. In this work, we provide a very general sufficient condition: high conductance. Our main motivation comes from the study of social networks. Loosely stated, we are looking for a theorem of the form "rumour spreading is fast in social networks". Our result is a good step in this direction because there are reasons to believe that social networks have high conductance. This is certainly the case for preferential attachment models such as that of []. More importantly, there is some empirical evidence that this might be the case for real social networks; in particular, the authors of [0] observe how in many different social networks there exist only cuts of small (logarithmic) size having small (inversely logarithmic) conductance; all other cuts appear to have larger conductance. That is, the conductance of the social networks they analyze is larger than a quantity seemingly proportional to an inverse logarithm. Knowing that rumour spreading is fast for social networks would have several implications. First, it has recently been realized that communication networks, especially ad-hoc and mobile ones, have a social structure. The advent of pervasive computing is likely to reinforce this trend. Rumour spreading is also a simplified form of viral mechanism: by understanding it in detail we might be able to say something about more complex and realistic epidemic processes, with implications that might go beyond the understanding of information dissemination in communication networks. Another relevant context for our work is the relationship between rumour spreading and expansion properties that, intuitively, should ensure fast dissemination.
Perhaps surprisingly, in the case of edge expansion there are classes of graphs for which the protocol is slow (see [5] for more details), while the problem remains open for vertex expansion. In this paper we prove the following three results:

- We show a strong connection between the rumour spreading problem and the graph sparsification procedure introduced by Spielman and Teng [6].

- We prove that if a network has conductance φ and n nodes then, with high probability, PUSH-PULL reaches every node within O(φ⁻¹ · log φ⁻¹ · log n) many rounds, regardless of the source.

- Finally, we show that if, in addition, the network satisfies the following condition for every edge uv and some constant α > 0: max{deg(u), deg(v)} ≤ α · deg(u), then both PUSH and PULL, by themselves, reach every node within O(c_α · φ⁻¹ · log φ⁻¹ · log n) many rounds with high probability, regardless of the source, where c_α is a constant depending only on α.

Thus, if the conductance is high enough, say 1/φ = O(log n) (as it has been observed to be in real social networks [0]), then, according to our terminology, rumour spreading is fast. We notice that the use of PUSH-PULL is necessary, as there exist high-conductance graphs for which neither the PUSH nor the PULL strategy is fast on its own. (We observe that the star, a graph of conductance 1, is such that both the PUSH and the PULL strategy by themselves require Ω(n) many rounds to spread the information to each node, assuming a worst-case, or even uniformly random, source. That is, conductance alone is not enough to ensure that PUSH, or PULL, spreads the information fast.) Examples can be found in [5], where

it is shown that in the classical preferential attachment model PUSH and PULL by themselves are slow. Although it is not known if preferential attachment graphs have high conductance, the construction of [5] also applies to the almost preferential attachment model of [], which is known to have high conductance. In terms of message complexity, we observe first that it has been determined precisely only for very special classes of graphs (cliques [8] and Erdős–Rényi random graphs [3]). Apart from this, given the generality of our class, it is not possible to obtain a meaningful upper bound on the number of messages beyond the trivial one, i.e. the number of rounds times the number of nodes. For instance, consider the lollipop graph. Fix φ with φ = ω(n⁻¹) and φ = o(1/log n), and suppose to have a path of length 1/φ connected to a clique of size n′ = Θ(n). This graph has conductance Θ(φ). Let the source be any node in the clique. After Θ(log n) rounds each node in the clique will have the information. Further, it will take at least 1/φ steps for the information to be sent to each node in the path. So, at least n′ = Θ(n) messages are pushed (by the nodes in the clique) in each round, for at least 1/φ − Θ(log n) = Θ(1/φ) rounds. Thus, the total number of messages sent will be Ω(n/φ). Observing that the running time is Θ(1/φ + log n) = Θ(1/φ), we have that the running time times n is (asymptotically) less than or equal to the number of transmitted messages. We also note that one cannot give fault-tolerance guarantees (that is, the ability of the protocol to withstand node and/or edge deletions) based only on conductance: a star has high conductance, but failure of the central node destroys connectivity. As remarked, our first result is based upon a connection with the spectral sparsification procedure of [6]. Roughly, the connection is as follows.
The spectral sparsification procedure (henceforth ST) is a sampling procedure that, given a graph G, selects each edge uv independently with probability

p_uv := min{ 1, δ / min{deg(u), deg(v)} },   (1)

where deg(u) denotes the degree of a node u and

δ = Θ(log² n / φ⁴).   (2)

(Each sampled edge uv is given weight 1/p_uv, so that the expected weight of a vertex equals its degree.) Spielman and Teng show that the eigenvalue spectrum of the sampled graph ST(G) is, with high probability, a good approximation to that of G. In turn, this implies that φ(ST(G)) = Ω(φ²(G)) and that ST(G) is connected (otherwise the conductance would be zero). The first thing we notice is that ST expands: after having applied ST, for each subset of vertices S of at most half the total volume of G, the total volume of the set of vertices reachable from S via edges sampled by ST is at least a constant fraction of the volume of S (the volume of a set of vertices is the sum of their degrees). Intuitively, if we were to use ST to send messages across the edges it samples, we would quickly flood the entire graph. Allowing for some lack of precision for the sake of clarity, the second main component of our approach is that rumour spreading stochastically dominates ST, even if we run it only for poly-logarithmically many rounds. That is to say, the probability that an edge is used by rumour spreading to pass the message is greater than that of being selected by ST. In a broad sense, our work draws a connection between the theory of spectral sparsification and the speed with which diffusion processes make progress in a network. This could potentially have deeper ramifications beyond the present work and seems to be worth exploring. For instance, recently [, 5] introduced more efficient sparsification techniques that are able to approximate the spectrum using only O(n log n), and O(n), edges, respectively. Extending our approach to the new samplers appears

challenging. Of great interest would also be extending the approach to other diffusion processes, such as averaging. Finally, we remark that an outstanding open problem in this area is whether vertex expansion implies that rumour spreading is fast.

2 Related work

The literature on the gossip protocol and social networks is huge, and we confine ourselves to what appears to be more relevant to the present work. Clearly, at least diameter-many rounds are needed for the gossip protocol to reach all nodes. It has been shown that O(n log n) rounds are always sufficient for each connected graph of n nodes [4]. The problem has been studied on a number of graph classes, such as hypercubes, bounded-degree graphs, cliques and Erdős–Rényi random graphs (see [4, 6, 3]). Recently, there has been a lot of work on quasi-regular expanders (i.e., expander graphs for which the ratio between the maximum and minimum degree is constant): it has been shown in different settings [, 0,, 5, 4] that O(log n) rounds are sufficient for the rumour to be spread throughout the graph. See also [9, ]. Our work can be seen as an extension of these studies to graphs of arbitrary degree distribution. Observe that many real-world graphs (e.g., Facebook, the Internet, etc.) have a very skewed degree distribution, that is, the ratio between the maximum and the minimum degree is very high. In most social network graph models the ratio between the maximum and the minimum degree can be shown to be polynomial in the graph order. Mihail et al. [] study the edge expansion and the conductance of graphs that are very similar to preferential attachment (PA) graphs. We shall refer to these as almost-PA graphs. They show that edge expansion and conductance are constant in these graphs. Concerning PA graphs, the work of [5] shows that rumour spreading is fast in those networks. Although PA networks have high conductance, the present work does not supersede those results, for there it is shown an O(log n) time bound.
In [3] it is shown that high conductance implies that non-uniform (over neighbours) rumour spreading succeeds. By non-uniform we mean that, for every ordered pair of neighbours i and j, node i will select j with probability p_ij for the rumour spreading step (in general, p_ij ≠ p_ji). This result does not extend to the case of uniform probabilities studied in this paper. In our setting (but not in theirs), the existence of a non-uniform distribution that makes rumour spreading fast is a rather trivial matter. A graph of conductance φ has diameter bounded by O(φ⁻¹ · log n). Observe that in a synchronous network it is possible to elect a leader in O(φ⁻¹ · log n) many rounds and set up a BFS tree originating from it. By assigning probability 1 to the edge between a node and its parent, one has the desired non-uniform probability distribution.

3 Preliminaries

We introduce notation and definitions, and recall a few facts for later use.

Notation 3.1 (Weights). The weight of a set of edges E′ ⊆ E is defined as w_G(E′) := Σ_{e ∈ E′} w_e. The weight of a vertex u in a graph G is defined as w_G(u) := Σ_{e ∋ u} w_e. The weight of a set of vertices S is defined as w_G(S) := Σ_{u ∈ S} w_G(u). Given a graph G, the degree of a node u is denoted as deg_G(u).

Definition 3.1 (Volume). The volume of a set of vertices S of a graph G is defined to be vol_G(S) = Σ_{v ∈ S} deg_G(v). Observe that vol(V) = 2|E|.

Definition 3.2 (Volume expansion). Let f be a randomized process selecting edges in a graph G = (V, E). Given S ⊆ V, the set f(S) is the union of S and the set of all vertices u ∈ V − S such that there exists some v ∈ S with uv ∈ E selected by f. We say that f α-expands for S if vol_G(f(S)) ≥ (1 + α) · vol_G(S).

Definition 3.3 (Conductance; see [7]). A set of vertices S in a graph G has conductance cut_G(S, V − S) / vol(S). The conductance φ of G is the minimum conductance, taken over all sets S such that vol(S) ≤ vol(V)/2.

Definition 3.4 (Weighted conductance). A set of vertices S in a graph G has weighted conductance w_G(cut_G(S, V − S)) / w_G(S). The weighted conductance of G is the minimum weighted conductance, taken over all sets S such that w_G(S) ≤ w_G(V)/2.

We will make use of a deep result from [6]. Specifically, it implies that the spectrum of ST(G) is approximately the same as that of G. It follows from [4, 8] that:

Theorem 3.2 (Spectral sparsification). There exists a constant c > 0 such that, with probability at least 1 − O(n⁻⁶), for all S ⊆ V such that w_G(S) ≤ w_G(V)/2, we have w_{ST(G)}(cut_{ST(G)}(S, V − S)) ≥ c · φ²(G) · w_{ST(G)}(S).

We say that an event occurs with high probability (whp) if it happens with probability 1 − o(1), where the o(1) term goes to zero as n, the number of vertices, goes to infinity. Finally, we give a lemma that underlines symmetries in the PUSH-PULL strategy. Let u →_t v be the event that an information originated at u arrives to v within t rounds using the PUSH-PULL strategy. We have that:

Lemma 3.3. Let u, v ∈ V. Then Pr[u →_t v] = Pr[v →_t u].

Proof. Look at each possible sequence of PUSH-PULL requests done by the nodes of G in t rounds.
We define the inverse of a sequence as the sequence we would obtain by looking at it from the last round to the first. Now, the probability that the information spreads from u to v (resp., from v to u) in at most t steps is equal to the sum of the probabilities of the sequences of length at most t that manage to pass the information from u to v (resp., from v to u). Given that the probability of a sequence and that of its inverse are the same, the claim follows.
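On tiny graphs, Lemma 3.3 can be verified exactly by enumerating every joint sequence of neighbour choices, since all such sequences are equally likely. A brute-force sketch (the helper name and the example graph are mine):

```python
from itertools import product
from fractions import Fraction

def spread_probability(adj, source, target, t):
    """Exact Pr[the message starting at `source` reaches `target` within
    t rounds of PUSH-PULL], by enumerating every joint sequence of
    neighbour choices (each node picks one uniform neighbour per round,
    so all joint sequences are equally likely)."""
    nodes = sorted(adj)
    per_round = list(product(*(adj[v] for v in nodes)))
    hits = 0
    for seq in product(per_round, repeat=t):
        informed = {source}
        for choices in seq:
            nxt = set(informed)
            for v, u in zip(nodes, choices):
                if v in informed:
                    nxt.add(u)      # informed: PUSH to the chosen neighbour
                elif u in informed:
                    nxt.add(v)      # uninformed: PULL from the chosen neighbour
            informed = nxt
        if target in informed:
            hits += 1
    return Fraction(hits, len(per_round) ** t)
```

On the path 0-1-2-3 with t = 2, both directions give probability 3/4, as the lemma predicts.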

4 Rumour Spreading and Graph Sparsification

In this section we prove the following theorem, by showing a strong connection between rumour spreading and the Spielman–Teng sparsification algorithm [6].

Theorem 4.1. Given any network G and any source node, PUSH-PULL broadcasts the message within O(log⁴ n / φ⁶(G)) many rounds, where n is the number of nodes of the input graph G and φ(G) is its conductance.

Before plunging into technical details, let us give an overview of the proof. The first thing we do is to show that ST-sparsification enjoys volume expansion. That is, there exists a constant c > 0 such that, for all sets S of volume at most vol_G(V)/2,

vol_G(ST(S)) > (1 + c · φ²(G)) · vol_G(S).   (3)

The second, more delicate, step in the proof is to show that rumour spreading (essentially) stochastically dominates ST-sparsification. Assume that S is the set of vertices having the message. If we run PUSH-PULL (henceforth PP, which plays the role of f in Definition 3.2) for K = O(log³ n / φ⁴) rounds, then vol_G(PP(S)) ⪰ vol_G(ST(S)), where ⪰ denotes stochastic domination. (Strictly speaking, this is not quite true, for there are certain events that happen with probability 1 in ST, and only with probability 1 − o(1) with PP.) Consider then the sequence of sets S_{i+1} := PP(S_i), with S_0 := {u}, where u is any vertex. These sets keep track of the diffusion via PUSH-PULL of the message originating from u (the process could actually be faster, in the sense that S_i is a subset of the informed nodes after K · i rounds). Then, for all i,

vol_G(S_{i+1}) = vol_G(PP(S_i)) ⪰ vol_G(ST(S_i)) > (1 + c · φ²(G)) · vol_G(S_i).

The first inequality follows by stochastic domination, while the second follows from Equation (3). Since the maximum volume is O(n²), we have that vol(S_t) > vol(G)/2 for t = O(log n / φ²). This means that within O(K · log n / φ²) many rounds we can deliver the message to a set of nodes having more than half of the network's volume. To conclude the argument, we use the fact that PP is specular.
If we interchange PUSH with PULL and vice versa, the same argument backwards shows that once we have S_t we can reach any other vertex within O(K · log n / φ²) additional rounds. After this informal overview, we now proceed to the formal argument. In what follows there is an underlying graph G = (V, E), where n := |V(G)|, for which we run ST and PP.

4.1 Volume expansion of ST-sparsification

Our goal here is to show Equation (3). We begin by showing that the weight of every vertex in ST(G) is concentrated around its expected value, namely its degree in G.

Lemma 4.2. With probability at least 1 − n^{−ω(1)} over the space induced by the random ST-sparsification algorithm, for each node v ∈ V(G) we have that w_{ST(G)}(v) = (1 ± o(1)) · deg_G(v).

Proof. If deg_G(v) ≤ δ, then w_{ST(G)}(v) is a constant random variable with value deg_G(v). If we assume the opposite, we have that E[w_{ST(G)}(v)] = deg_G(v), by definition of ST-sparsification. Recalling the

definition of δ (Equation (2)), let X = w_{ST(G)}(v) · δ / deg_G(v). Then E[X] = δ. By the Chernoff bound,

Pr[ |X − E[X]| ≥ ε · E[X] ] ≤ 2 · exp(−ε² · E[X] / 3) = 2 · exp(−ε² · δ / 3).

Since δ = Θ(log² n / φ⁴), if we pick ε = ω(φ² / √(log n)), the claim follows.

Corollary 4.3. Let S ⊆ V be such that vol_G(S) ≤ vol_G(V)/2. With probability at least 1 − n^{−ω(1)} over the space induced by the random ST-sparsification algorithm, we have that w_{ST(G)}(S) = (1 ± o(1)) · vol_G(S).

Theorem 3.2 states that ST-sparsification enjoys weight expansion. By means of Lemma 4.2 and Corollary 4.3 we can translate this property into volume expansion. Recall that ST(S) is S union the set of vertices reachable from S via edges sampled by ST.

Lemma 4.4 (Volume expansion of ST). There exists a constant c such that, for each fixed S ⊆ V having volume at most vol_G(V)/2, with high probability vol_G(ST(S)) > (1 + c · φ²(G)) · vol_G(S).

Proof. By Theorem 3.2, w_{ST(G)}(cut_{ST(G)}(S, V − S)) ≥ c₀ · φ²(G) · w_{ST(G)}(S). Clearly, w_{ST(G)}(ST(S)) ≥ w_{ST(G)}(S) + w_{ST(G)}(cut_{ST(G)}(S, V − S)). By Corollary 4.3 we have that vol_G(ST(S)) = (1 ± o(1)) · w_{ST(G)}(ST(S)) and vol_G(S) = (1 ± o(1)) · w_{ST(G)}(S). The constant c₀ of Theorem 3.2 and the error terms in Corollary 4.3 can be combined in such a way that vol_G(ST(S)) > (1 + c · φ²(G)) · vol_G(S) for some c > 0. The claim follows.

We end this subsection by recording a monotonicity property stating that, if a process enjoys volume expansion, then by adding edges expansion continues to hold.

Lemma 4.5. Let f and g be randomized processes that select each edge e of G independently with probability p_e and p′_e, respectively, with p_e ≤ p′_e. Then, for all t > 0 and all S, Pr(vol_G(g(S)) > t) ≥ Pr(vol_G(f(S)) > t).

Proof. The claim follows from a straightforward coupling, and by the fact that if X ⊆ Y then vol(X) ≤ vol(Y).

4.2 The road from ST-sparsification to rumour spreading

The goal of this subsection is to show that PP stochastically dominates ST. As stated, the claim is not quite true, and the kind of stochastic domination we will show is slightly different.
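For concreteness, the ST sampling step of Equations (1) and (2), together with the 1/p_uv reweighting that makes each vertex's expected weight equal its degree (used in Lemmas 4.2 and 4.4), can be sketched as follows; the function name and graph representation are mine, and δ is left as a parameter:

```python
import random

def st_sample(edges, deg, delta, rng=None):
    """One run of the ST edge-sampling step: keep edge uv independently
    with probability p_uv = min(1, delta / min(deg[u], deg[v])); a kept
    edge gets weight 1/p_uv, so that E[w(v)] = deg(v) for every vertex v."""
    rng = rng or random.Random(0)
    sampled = {}
    for (u, v) in edges:
        p = min(1.0, delta / min(deg[u], deg[v]))
        if rng.random() < p:
            sampled[(u, v)] = 1.0 / p   # sparsification reweighting
    return sampled
```

When all degrees are at most δ, every p_uv equals 1 and ST(G) is simply G with unit weights; the sampling only thins out edges between high-degree vertices.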
Let us begin by mentioning what kind of complications can arise in proving a statement like this. A serious issue is represented by the massive dependencies that are exhibited by PP. To tackle this we introduce a series of intermediate steps, by defining a series of processes that bring us from ST to PP. We will relax somewhat PP and ST by introducing two processes, PPW and DST, to be defined precisely later. In brief, PPW is the same as PP except that vertices select neighbours without replacement. DST

differs from ST by the fact that edges are activated (we will come to this later) by both endpoints. Again, slightly simplifying a more complex picture for the sake of clarity, the main flow of the proof is to show that ST ⪯ DST ⪯ PPW ⪯ PP, where ⪯ denotes stochastic domination. Let us now develop this line of reasoning formally. The first intermediate process is called double ST-sparsification (henceforth DST) and it is defined as follows. DST is a process in which vertices select edges incident on them (similarly to what happens with PP). With DST, each edge e ∋ u is activated independently by u with probability

p′_e := min{ 1, δ / deg_G(u) }.   (4)

An edge e = uv is selected if it is activated by at least one of its endpoints. Clearly DST ⪰ ST, and thus it follows immediately from Lemma 4.5 that DST expands. We record this fact for later use.

Lemma 4.6 (Volume expansion of DST). There exists a constant c such that, for each fixed S ⊆ V having volume at most vol_G(V)/2, with high probability vol_G(DST(S)) > (1 + c · φ²(G)) · vol_G(S).

Therefore, from now on we can forget about ST and work only with DST. The next lemma shows that, with high probability, after DST-sparsification the degrees of the vertices will be upper bounded by O(log² n / φ⁴).

Lemma 4.7. Let ξ be the event that with DST no node activates more than 2δ edges. Then Pr(¬ξ) = n^{−ω(1)}.

Proof. The only case to consider is deg_G(v) > 2δ. Let X = (# of edges activated by v). Then E[X] = Σ_{u ∈ N(v)} δ / deg_G(v) = δ. Invoking the Chernoff bound we get (see for instance []), Pr[X ≥ 2 · E[X]] ≤ exp(−Ω(δ)) ≤ n^{−ω(1)} for n large enough.

Remark: for the remainder of the subsection, when dealing with DST we will work in the subspace defined by conditioning on ξ. We will do so without explicitly conditioning on ξ, for the sake of notational simplicity.

The second step to bring PP closer to ST is to replace PP with a slightly different process. This process is dubbed PP without replacement (henceforth PPW) and it is defined as follows.
If PPW runs for t rounds, then each vertex u will select min{deg_G(u), t} edges incident on itself without replacement (while PP selects with replacement). The reason to introduce PPW is that it is much easier to handle than PP.

Notation 4.8 (Time horizons). Given a vertex set S ⊆ V, we will use the notation F := (F_u : u ∈ S) to denote a collection of vertex sets, where each F_u is a subset of the neighbours of u. A vector of integers Z = (z_u : u ∈ S) is called a time horizon for S. Furthermore, we will use the notation |F| := (|F_u| : u ∈ S) to denote the time horizon for S that corresponds to F.
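The distributional fact that drives the coming lemmas is that DST, conditioned on the number of activations at a vertex, activates a uniformly random subset of that size, which is exactly a without-replacement (PPW-style) selection. A toy exact check on a single vertex (the function name and parameters are mine):

```python
from fractions import Fraction
from itertools import combinations

def dst_conditional_law(d, p, t):
    """Law of the set of edges activated at a single vertex with d incident
    edges, each activated independently with probability p (as in DST),
    conditioned on exactly t activations."""
    # unconditional probability of each t-subset being exactly the activated set
    atoms = {s: p**t * (1 - p)**(d - t) for s in combinations(range(d), t)}
    total = sum(atoms.values())
    return {s: q / total for s, q in atoms.items()}

law = dst_conditional_law(d=5, p=Fraction(3, 10), t=2)
# every 2-subset gets the same conditional probability 1/C(5,2):
# a uniform without-replacement choice of t edges, i.e. a PPW step
```

The value of p cancels out entirely after conditioning, which is why the argument below can trade DST for PPW once the cardinalities are fixed.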

Notation 4.9 (Behaviour of PPW). Let S be a set of vertices in a graph G and let Z be a time horizon for S. PPW(Z, S) is the process where every vertex u ∈ S activates a uniformly random subset of min{deg_G(u), z_u} edges incident on itself, and performs a PUSH-PULL operation on each of them.

Notice that PPW might specify different cardinalities for different vertices. This is important for the proofs to follow. With this notation we can express the outcome of DST sampling. Focus on a specific set of vertices S = (u_1, ..., u_k). We know that DST(S) expands with respect to S, and we want to argue that so does PPW. The crux of the matter are the following two lemmas.

Lemma 4.10. Let u be a vertex of G and t a positive integer, and let DST(u) and PPW(t, u) denote, respectively, the subset of edges incident on u selected by the two processes. Then, Pr(DST(u) = F_u × {u} | |F_u| = t) = Pr(PPW(t, u) = F_u × {u}).

Proof. With DST, each vertex activates (and therefore selects) every edge incident on itself with the same probability. If we condition on the cardinality, all subsets are equally likely. Therefore, under this conditioning, DST(u) simply selects a subset of t edges uniformly at random. But this is precisely what PPW(t, u) does.

Lemma 4.11. Let S = {v_1, ..., v_{|S|}} be a subset of vertices of G and Z = (z_1, ..., z_{|S|}) a time horizon for S. Then,

Pr( ⋂_{i=1}^{|S|} DST(v_i) = F_{v_i} × {v_i}  |  |F| = Z ) = Π_{i=1}^{|S|} Pr( PPW(z_i, v_i) = F_{v_i} × {v_i} ).

Proof. This follows from Lemma 4.10 and the fact that under both DST and PPW vertices activate edges independently.

In other words, for every realization F of DST there is a time horizon T_F such that the random choices of PPW are distributed exactly like those of DST. Said differently, if we condition on the cardinalities of the choices made by DST, then, for those same cardinalities, PPW is distributed exactly like DST. To interpret the next lemma, refer to Definition 3.2.

Lemma 4.12. Let T := (2δ, ..., 2δ).
There exists c′ > 0 such that, for all sets S ⊆ V,

Pr(PPW(T, S) (c′ · φ²)-expands for S) ≥ Pr(DST(S) (c′ · φ²)-expands for S) − o(1).

Proof. For the first inequality, recall that we are assuming that DST operates under the conditioning on ξ. Thus, by Lemma 4.7, each u ∈ V activates at most 2δ edges. Therefore T majorizes every time horizon Z_S for which Lemma 4.11 holds. The o(1) term is derived from Lemma 4.7.

We conclude the series of steps by showing that, given any set S, PP also expands with high probability.

Lemma 4.13. Consider the PP process. Take any node v and an arbitrary time t_0. Between time t_0 and t_1 = t_0 + 9δ · log n, node v activates at least min(2δ, deg(v)) different edges with high probability.

Proof. We split the proof into two cases, deg(v) ≤ 3δ and deg(v) > 3δ. In the former case, a standard coupon-collector argument applies: the probability of non-activation, in 9δ · log n rounds, of some fixed edge is at most

(1 − 1/deg(v))^{9δ · log n} ≤ (1 − 1/(3δ))^{9δ · log n} ≤ n⁻³.

Thus, by union bounding over all its edges, the probability that the event fails to happen is O(n⁻²). If deg(v) > 3δ, PP will either choose more than 2δ different edges during the 9δ · log n rounds, or it will choose at most 2δ different edges. What is the probability of the latter event? In each round, the probability of choosing a new edge is at least

(deg(v) − 2δ) / deg(v) ≥ (deg(v) − (2/3) · deg(v)) / deg(v) = 1/3.

But then, by the Chernoff bound, the probability of this event is at most n^{−ω(1)}. To summarize, if PP is run t_1 − t_0 = O(δ · log n) steps, with high probability every node u selects at least min{deg_G(u), 2δ} many edges, and therefore PP dominates PPW.

4.3 The speed of PP

In this subsection, we upper bound the number of steps required by PP to broadcast a message in the worst case. The basic idea is that, as we have seen in the previous subsection, PP requires O(log³ n / φ⁴) rounds to expand out of a set. Suppose the information starts at vertex v. Since each expansion increases the total informed volume by a factor of (1 + Ω(φ²)), we have that after O(log⁴ n / φ⁶) rounds the information will have reached a set of nodes of volume greater than half the volume of the whole graph. Consider now another node w. By the symmetry of the PUSH-PULL process, w will be told the information by a set of nodes of volume bigger than half of the volume of the graph in O(log⁴ n / φ⁶) many rounds. Thus the information will travel from v to w in O(log⁴ n / φ⁶) many rounds, with high probability. To develop the argument more formally, let us define a macro-step of PP as 9δ · log n consecutive rounds. We start from a single node v having the information, S_0 = {v}.
As we saw, in each macro-step, with probability 1 − O(n⁻⁶) the volume of the set of nodes that happen to have the information increases by a factor of (1 + Ω(φ²)), as long as the volume of S_i is at most vol_G(V)/2. Take any node w ≠ v. If the information started at w, in O(log_{1+Ω(φ²)} n) = O(φ⁻² · log n) macro-steps the information would reach a set of nodes S = S_{O(φ⁻² · log n)} of total degree strictly larger than vol_G(V)/2, with probability 1 − O(n⁻⁶ · φ⁻² · log n) ≥ 1 − O(n⁻² · log n). Note that the probability that the information, starting from some node in S, gets sent to w in O(φ⁻² · log n) macro-steps is greater than or equal to the probability that w spreads the information to the whole of S (we use PUSH-PULL, so each edge activation both sends and receives the information; thus, by activating the edges that got the information from w to S in the reverse order, we could get the information from each node of S to w; note that the probabilities of the two activation sequences are exactly the same). Now take the originator node v, and let it send the information for O(φ⁻² · log n) macro-steps (for a total of O(log⁴ n / φ⁶) many rounds). With high probability, the information will reach a set of nodes S_v of volume strictly larger than vol_G(V)/2. Take any other node w, and grow its S_w for O(φ⁻² · log n) macro-steps, with the aim of letting it grab the information. Again, after those many rounds, w will have grabbed the information from a set of volume at least vol_G(V)/2 with probability 1 − O(n⁻² · log n). As the total volume is vol_G(V), the two sets must intersect, so that w will obtain the information with

probability 1 − O(n⁻² · log n). Union bounding over the n nodes gives us the main result: with probability 1 − O(n⁻¹ · log n) = 1 − o(1), the information gets spread to the whole graph in O(log⁴ n / φ⁶) rounds.

5 A direct approach

In this section we focus on a more direct approach that will lead us to a tighter bound. For the sake of clarity, in the next subsection we introduce our proof technique and we show how it achieves an O(φ⁻² · log n) bound. In the last subsection we improve this bound to get the final Õ(φ⁻¹ · log n).

5.1 Warm-up: a weak bound

In this subsection we prove a completion time bound for the PUSH-PULL strategy of O(φ⁻² · log n). Observe that this bound happens to be tight if φ = Ω(1). The general strategy is as follows: we will prove that, given any set S of informed nodes having volume at most |E|, after O(φ⁻¹) rounds (that we call a phase) the new set S′ of informed vertices, S′ ⊇ S, will have volume vol(S′) ≥ (1 + Ω(φ)) · vol(S) with constant probability (over the random choices performed by nodes during those O(φ⁻¹) rounds); if this happens, we say that the phase was successful. This subsection is devoted to proving this lemma. Given the lemma, it follows that PUSH-PULL informs a set of nodes of volume larger than |E|, starting from any single node, in time O(φ⁻² · log n). Indeed, by applying the Chernoff bound one can prove that, by flipping c · φ⁻¹ · log n IID coins, each having Θ(1) head probability, the number of heads will be at least f(c) · φ⁻¹ · log n with high probability, with f(c) increasing, and unbounded, in c. This implies that we can get enough (that is, Θ(φ⁻¹ · log n)) successful phases for covering more than half of the graph's volume in at most Θ(φ⁻¹) · Θ(φ⁻¹ · log n) = Θ(φ⁻² · log n) rounds. Applying Lemma 3.3, we can then show that each uninformed node can get the information in the same number of steps, if a set S of volume > |E| is informed, completing the proof. Recall that the probability that the information spreads from any node v to a set of nodes with more than half the volume of the graph is 1 − O(n⁻²).
Then, with that probability, the source node s spreads the information to a set of nodes with such volume. Furthermore, by Lemma 3.3, any uninformed node will get the information from some node, after node s has successfully spread the information, with probability 1 − O(n⁻²). By union bound, we have that with probability 1 − O(n⁻¹) PUSH-PULL will succeed in O(φ⁻² · log n) rounds. Our first lemma shows how one can always find a subset of nodes in the smallest part of a good-conductance cut that happens to hit many of the edges in the cut, and whose nodes have a large fraction of their degree crossing the cut.

Lemma 5.1. Let G(V, E) be a simple graph. Let A ⊆ B ⊆ V, with vol(B) ≤ |E| and cut(A, V − B) ≥ (3/4) · cut(B, V − B). Suppose further that the conductance of the cut (B, V − B) is at least φ, i.e., cut(B, V − B) ≥ φ · vol(B). For a node v, let d_v denote its degree and d_v^{+B} the number of its neighbours outside B. If we let

U = U_B(A) = { v ∈ A : d_v^{+B} ≥ (φ/2) · d_v },

12 then cut(u, V B) 4 cut(b, V B). Proof. We prove the lemma with the following derivation: d + B + d + B = cut(b, V B) v A U v A v A d + B d + B + d + B + v U v B A = cut(b, V B) v A U v A U v B A d + B d + B = cut(a, V B) d + B 3 cut(b, V B), 4 then, d + B 3 cut(b, V B) 4 v U v A U d + B 3 cut(b, V B) 4 v U v A U d + B d + B 3 4 cut(b, V B) vol(b) v U ( ) d d + B 3 4 cut(b, V B) cut(b, V B) v U d + B cut(b, V B). 4 v U Given v U = U B (A), we define N U (to be read N-push-U-v ) and NU (to be read N-pull- U-v ) as follows and and Then, N U = {u N + U d(u) 3 d+ } N U = {u N + U d(u) < 3 d+ }. U = {v U N B N B } U = {v U N B > N B }. Observe that U U = and U U = U. In particular, (at least) one of vol(u ) vol(u) and vol(u ) vol(u) holds. In the following, if vol(u ) vol(u) we will apply the PUSH strategy on U; otherwise, we will apply the PULL strategy.

Given a vertex v ∈ U, we will simulate either the PUSH or the PULL strategy, for O(1/φ) steps, over it. The gain g of node v is then the volume of the node(s) that pull the information from v, or that v pushes the information to. Our aim is to get a bound on the gain of the whole original vertex set S. This cannot be done by simply summing the gains of single vertices in S, because of the many dependencies in the process. For instance, different nodes v, v' ∈ S might inform (or could be asked the information by) the same node in V - S. To overcome this difficulty, we use an idea similar in spirit to the deferred decision principle.

First of all, let us remark that, given a vertex set S having the information, we will run the PUSH-PULL process for O(1/φ) rounds. We will look at what happens to the neighbourhoods of different nodes in S sequentially, by simulating the O(1/φ) steps (which we call a phase) of each v ∈ S and some of its peers in N_v^+ ∩ (V - S). Obviously, we will make sure that no node in V - S performs more than O(1/φ) PULL steps in a single phase.

Specifically, we consider Process 1 with a phase of k = 10/φ steps.

Algorithm 1 The expansion process of the O(φ^{-2} log n) bound, with a phase length of k steps.
1: at step i, we consider the sets A_i, B_i; at the first step, i = 0, we take A_0 = B_0 = S and H_0 = ∅;
2: if cut(A_i, V - B_i) < (3/4) cut(B_i, V - B_i), or vol(B_i) > |E|, we stop; otherwise, we apply Lemma 5.1 to A_i, B_i, obtaining the set U_i = U_{B_i}(A_i);
3: we take a node v out of U_i, and we consider the effects of either the push or the pull strategy, repeated for k steps, over v and N_v^{+B_i};
4: H_{i+1} ← H_i;
5: each node u ∈ N_v^{+B_i} that gets informed (either by a push of v, or by a pull from v) is added to the set of the halted nodes H_{i+1}; v is also added to the set H_{i+1};
6: let A_{i+1} = A_i - {v}, and B_{i+1} = B_i ∪ H_{i+1}; observe that B_{i+1} - A_{i+1} = H_{i+1};
7: iterate the process.

Observe that in Process 1 with k = 10/φ no vertex in V - S will make more than O(1/φ) PULL steps in a single phase.
Indeed, each time we run point 3 of the process, we only disclose whether some node u ∈ V - S actually makes, or does not make, a PULL from v. If the PULL does not go through, and node u later tries to make a PULL to another node v', the probability of this second batch of PULLs (and in fact, of any subsequent batch) to succeed is actually larger than the probability of success of the first batch of PULLs of u (since at that point, we already know that the previous PULL batches made by u never reached any previously considered candidate node v ∈ S).

The next lemma summarizes the gain, in a single step, of a node v ∈ U_i.

Lemma 5.2. If v ∈ U_i^push, then

Pr[ g ≥ (1/3) d_v^{+B_i} ] ≥ φ/4.

On the other hand, if v ∈ U_i^pull, then

Pr[ g ≥ (1/10) d_v^{+B_i} ] ≥ 1/10.

In general, if v ∈ U_i,

Pr[ g ≥ (1/10) d_v^{+B_i} ] ≥ φ/10.

Proof. Suppose that v ∈ U_i^push. Then, at least (1/2) d_v^{+B_i} of the neighbours of v that are not in B_i have degree at least (1/3) d_v^{+B_i}. Since v ∈ U_i, we have that d_v^{+B_i} ≥ (φ/2) d_v. Thus, the probability that v pushes the information to one of its neighbours of degree at least (1/3) d_v^{+B_i} is at least (1/2) d_v^{+B_i} / d_v ≥ φ/4.

Now, suppose that v ∈ U_i^pull. Recall that g is the random variable denoting the gain of v; that is,

g = Σ_{u ∈ N^pull} g_u,

where N^pull = N^pull_{U_i,v} and g_u is a random variable equal to d(u) if u pulls the information from v, and 0 otherwise. Observe that E[g_u] = 1, so that E[g] = |N^pull|, and that the variance of g_u is

Var[g_u] = E[g_u^2] - E[g_u]^2 = d(u) - 1.

Since the g_u's are independent, we have

Var[g] = Σ_{u ∈ N^pull} Var[g_u] = vol(N^pull) - |N^pull| ≤ ((1/3) d_v^{+B_i} - 1) |N^pull|.

In the following chain of inequalities we will apply Chebyshev's inequality to bound the deviation of

g, using the variance bound we have just obtained:

Pr[ g ≤ (1/10) d_v^{+B_i} ]
  = Pr[ g ≤ E[g] - (E[g] - (1/10) d_v^{+B_i}) ]
  = Pr[ g - E[g] ≤ -(|N^pull| - (1/10) d_v^{+B_i}) ]
  ≤ Pr[ |g - E[g]| ≥ |N^pull| - (1/10) d_v^{+B_i} ]
  ≤ Var[g] / (|N^pull| - (1/10) d_v^{+B_i})^2
  ≤ ((1/3) d_v^{+B_i} - 1) |N^pull| / (|N^pull| - (1/10) d_v^{+B_i})^2
  ≤ 9/10,

where the last step uses |N^pull| ≥ (1/2) d_v^{+B_i}, which holds because v ∈ U_i^pull. This concludes the proof of the second claim. The third one is a combination of the other two. □

Now we focus on v, and its neighbourhood N_v^{+B_i}, for many steps. What is the gain G of v in these many steps?

Lemma 5.3. Pr[ G ≥ (1/10) d_v^{+B_i} ] ≥ 1 - e^{-1}, where G is the gain of v over 10/φ steps.

Proof. Observe that the probability that the event g ≥ (1/10) d_v^{+B_i} happens at least once in 10/φ independent trials is lower bounded by

1 - (1 - φ/10)^{10/φ} ≥ 1 - e^{-1}.

The claim follows. □

We now prove the main theorem of the subsection.

Theorem 5.4. Let S be the set of informed nodes, vol(S) ≤ |E|. Then, if S' is the set of informed nodes after Ω(1/φ) steps, with Ω(1) probability,

vol(S') ≥ (1 + Ω(φ)) vol(S).
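The repetition argument behind Lemma 5.3 is easy to check numerically. The constants φ/10 and 10/φ below follow our reading of the (partly garbled) statement, so treat this as an illustrative sketch rather than the paper's own computation.

```python
import math

# Repeating a step that succeeds with probability phi/10 for ceil(10/phi)
# rounds amplifies the success probability to the constant 1 - 1/e,
# uniformly in phi: the calculation behind Lemma 5.3.
for phi in [0.5, 0.2, 0.1, 0.01, 0.001]:
    trials = math.ceil(10 / phi)
    p_success = 1 - (1 - phi / 10) ** trials
    assert p_success >= 1 - math.exp(-1)
print("amplification bound holds for all tested phi")
```

The bound holds for every φ ∈ (0, 1] because (1 - x)^{1/x} ≤ e^{-1} for all x ∈ (0, 1).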

Proof. Consider Process 1 with a phase of length k = 10/φ. For the process to finish, at some step t it must happen that either vol(B_t) > |E| (in which case we are done, so we assume the contrary), or cut(A_t, V - B_t) < (3/4) cut(B_t, V - B_t). In the latter case,

(1/4) cut(B_t, V - B_t) ≤ cut(B_t - A_t, V - B_t) = cut(H_t, V - B_t).

But then,

(φ/4) vol(S) ≤ (φ/4) vol(B_t) ≤ (1/4) cut(B_t, V - B_t) ≤ cut(H_t, V - B_t)
  = Σ_{v ∈ H_t ∩ S} d_v^{+B_t} + Σ_{v ∈ H_t ∩ (V-S)} d_v^{+B_t}
  ≤ Σ_{v ∈ H_t ∩ S} d_v^{+B_t} + Σ_{v ∈ H_t ∩ (V-S)} d_v.

Consider the following two inequalities (that might, or might not, hold):

(a) Σ_{v ∈ H_t ∩ (V-S)} d_v ≥ (φ/1000) vol(S), and
(b) Σ_{v ∈ H_t ∩ S} d_v^{+B_t} ≥ (φ/8) vol(S).

At least one of (a) and (b) has to be true. We call the disjunction of (a) and (b) the two-cases property. If (a) is true, we are done, in the sense that we have already captured enough volume to cover a constant fraction of the cut induced by S. We lower bound the probability of a large gain given the truth of (b), since the negation of (b) implies the truth of (a).

Recall Lemma 5.3. It states that each v_i ∈ H_t ∩ S had probability at least 1 - e^{-1} of gaining at least (1/10) d^{+B_i}(v_i) ≥ (1/10) d^{+B_t}(v_i), since i ≤ t implies B_i ⊆ B_t. For each v_i, let us define the random variable X_i as follows: with probability 1 - e^{-1}, X_i has value (1/10) d^{+B_t}(v_i), and with the remaining probability it has value 0. Then, the gain of v_i is a random variable that dominates X_i. Choosing q = 1/e in lemma ??, we can conclude that

Pr[ Σ_{i : v_i ∈ H_t ∩ S} X_i ≥ (1/4) Σ_{v_i ∈ H_t ∩ S} (1/10) d^{+B_t}(v_i) ] ≥ 1 - e^{-1}.

Thus, with constant probability we gain at least (1/40) Σ_{v_i ∈ H_t ∩ S} d^{+B_t}(v_i), which, in turn, is at least

(1/40) · (φ/8) vol(S) ≥ (φ/1000) vol(S).

Hence, with constant probability there is a gain of at least (φ/1000) vol(S) in 10/φ steps. Thus, using the proof strategy presented at the beginning of the subsection, we get a O(φ^{-2} log n) bound on the completion time. □

5.2 A tighter bound

In this subsection we will present a tighter bound of

O( (1/φ) log^2(1/φ) log n ) = Õ( (1/φ) log n ).

Observe that, given the already noted diametral lower bound of Ω((1/φ) log n) on graphs of conductance φ, the bound is almost tight (we only lose a log^2(1/φ) factor).

Our general strategy for showing the tighter bound will be close in spirit to the one we used for the O(φ^{-2} log n) bound of the previous subsection. The new strategy is as follows:

- we will prove in this subsection that, given any set S of informed nodes having volume at most |E|, for some p = p(S) ≥ Ω(φ), after O(1/p) rounds (that we call a p-phase) the new set S' of informed vertices, S' ⊇ S, will have volume vol(S') ≥ (1 + Ω(φ / (p log(1/φ)))) vol(S) with constant probability (over the random choices performed by the nodes during those O(1/p) rounds); if this happens, we say that the phase was successful;
- using the previous statement we can show that PUSH-PULL informs a set of nodes of volume larger than |E|, starting from any single node, in time T = O((1/φ) log^2(1/φ) log n) with high probability. Observe that at the end of a phase one has a multiplicative volume gain of 1 + Ω(φ / (p log(1/φ))) with probability lower bounded by a positive constant c. If one averages that gain over the O(1/p) rounds of the phase, one can say that, with constant probability c, each round in the phase resulted in a multiplicative volume gain of 1 + Ω(φ / log(1/φ)). We therefore apply lemma ?? with L = Θ(log(1/φ)), B = Θ((1/φ) log(1/φ) log n), P = c, and β equal to any inverse polynomial in n, β = n^{-Θ(1)}. Observe that B ≥ Θ(L log(1/β)). Thus, with probability 1 - β = 1 - n^{-Θ(1)}, we have Θ(B · P) = Θ((1/φ) log(1/φ) log n) successful steps.
Since each successful step gives a multiplicative volume gain of 1 + Ω(φ / log(1/φ)), we obtain a volume of

(1 + Ω(φ / log(1/φ)))^{Θ((1/φ) log(1/φ) log n)} = e^{Θ(log n)},

which, by choosing the right constants, is larger than |E|.

Finally, by applying Lemma 3.3, we can then show by symmetry that each uninformed node can get the information in T rounds, once a set S of volume > |E| is informed, completing the proof.

Given v ∈ U = U_B(A), we define N̂^push_{B,v} (to be read "N-hat-push-B-v") and N̂^pull_{B,v} (to be read "N-hat-pull-B-v") as follows:

N̂^push_{B,v} = { u ∈ N_v^{+B} : d(u) ≥ (1/3) d_v^{+B} }

and

N̂^pull_{B,v} = { u ∈ N_v^{+B} : d(u) < (1/3) d_v^{+B} }.

Then, we define

Û^push = { v ∈ U : |N̂^push_{B,v}| ≥ |N̂^pull_{B,v}| }

and

Û^pull = { v ∈ U : |N̂^push_{B,v}| < |N̂^pull_{B,v}| }.

As before, if vol(Û^push) ≥ (1/2) vol(U) we apply the PUSH strategy on U; otherwise, we apply the PULL strategy.

The following lemma is the crux of our analysis. It is a strengthening of Lemma 5.2. A corollary of the lemma is that there exists a p = p_v ≥ Ω(φ) such that, after 1/p rounds, with constant probability, node v lets us gain a new volume proportional to Θ( d_v^{+B_i} / (p log(1/φ)) ).

Lemma 5.5. Assume v ∈ Û_i^pull. Then, N̂^pull can be partitioned in at most 6 + log(1/φ) parts S_1, S_2, ..., in such a way that for each part S_i it holds that, for some P_{S_i} ∈ (2^{-(i+1)}, 2^{-i}],

Pr[ G_{S_i} ≥ |S_i| / (256 P_{S_i}) ] ≥ 1 - e^{-1},

where G_{S_i} is the total volume of the nodes in S_i that perform a PULL from v in 1/P_{S_i} rounds.

Lemma 5.2, that we used previously, only stated that with probability Ω(φ) we gained a new volume of Θ(d_v^{+B_i}) in a single step. If we do not allow v to go on for more than one step, then the bounds of Lemma 5.2 are sharp. The insight of Lemma 5.5 is that different nodes might require different numbers of rounds to give their full contribution in terms of new volume, but the more we have to wait, the more we gain.

Before proving Lemma 5.5, we give two examples. In the first one, we show that the probability of informing any new node might be as small as O(φ). In the second, we show that in a single step the gain might be only a φ fraction of the volume of the informed nodes. Lemma 5.5 implies that these two phenomena cannot happen together. So, for the first example, take two stars: a little one with Θ(1/φ) leaves, and a large one with Θ(n) leaves. Connect the centers of the two stars by an edge.
The graph will have conductance Θ(φ). Now, suppose that the center and the leaves of the little star are informed, while the nodes of the large star are not. Then, the probability of the information spreading to any new node (that is, to the center of the large star) in one round will be O(φ). For the second example, keep the same two stars, but connect them with a path of length 2. Again, inform only the nodes of the little star. Then, in a single step, only the central node of the length-2 path can be informed. The multiplicative volume gain is then only 1 + O(φ).
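The first example can be made concrete with a few lines of arithmetic (our own illustration, not the paper's; L plays the role of Θ(1/φ) and R the role of Θ(n)).

```python
# Double-star example: a little star with L leaves, a big star with
# R leaves, centres joined by an edge, and only the little star informed.
L, R = 100, 10**5            # think L ~ 1/phi and R ~ n

vol_little = 2 * L + 1       # leaves contribute L, the centre L + 1
phi_cut = 1.0 / vol_little   # the bottleneck cut has a single edge

# Only the big centre can become informed, via a PUSH by the little
# centre or a PULL by the big centre along the bridge edge.
p_push = 1.0 / (L + 1)       # little centre picks the bridge
p_pull = 1.0 / (R + 1)       # big centre picks the bridge
p_new = p_push + p_pull - p_push * p_pull

# Both quantities are Theta(1/L), i.e. Theta(phi), as claimed.
assert 1 / (4 * L) <= phi_cut <= 1 / L
assert 1 / (2 * L) <= p_new <= 2 / L
print(phi_cut, p_new)
```

As L grows (φ shrinks), both the conductance of the bottleneck cut and the per-round probability of informing any new node vanish at the same Θ(1/L) rate.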

Proof of Lemma 5.5. We bucket the nodes in N̂^pull = N̂^pull_{B_i,v} in K = K_{B_i} = ⌈lg((1/3) d_v^{+B_i})⌉ buckets, in a power-of-two manner. That is, for j = 1, ..., K, the bucket R_j contains all the nodes u in N̂^pull having degree 2^{j-1} ≤ d(u) < 2^j. Observe that the R_j's are pairwise disjoint and that their union is equal to N̂^pull.

Consider the buckets R_j with j > lg(1/φ). We will empty some of them, in such a way that the total number of nodes removed from the union of the buckets is at most a 1/6 fraction of the total (that is, of d_v^{+B_i}). This node removal step is necessary for the following reason: the buckets R_j, with j > lg(1/φ), contain nodes with a degree so high that any single one of them will perform a PULL operation on v itself with probability strictly smaller than φ. We want to guarantee that the probability of a gain is not too small; thus we are forced to remove nodes having too high a degree, if their number is so small that, overall, the probability of any single one of them to actually perform a PULL on v is too low.

The node removal phase is as follows. If R'_j is the set of nodes in the j-th bucket after the node removal phase, then

R'_j = R_j if |R_j| ≥ φ 2^j / 16, and R'_j = ∅ otherwise.

Observe that the total number of nodes we remove is upper bounded by

Σ_{i=1}^{K} φ 2^i / 16 ≤ φ 2^{K+1} / 16 ≤ φ · (4/3) d_v^{+B_i} / 16 ≤ (1/6) d_v^{+B_i}.

Therefore, Σ_j |R'_j| ≥ (1/3) d_v^{+B_i}, since Σ_j |R_j| ≥ (1/2) d_v^{+B_i}.

Consider the random variable g, which represents the total volume of the nodes in the different R'_j's that manage to pull the information from v. If we denote by g_j the contribution of the nodes in bucket R'_j to g, we have g = Σ_{j=1}^{K} g_j. Take any non-empty bucket R'_j. We want to show, via lemma ??, that

Pr[ g_j ≥ |R'_j| / (8 p_j) ] ≥ p_j.

(If this event occurs, we say that bucket j succeeds.) This claim follows directly from lemma ?? by creating one X_i in the lemma for each u ∈ R'_j, and letting X_i = 1 iff node u pulls the information from v. The probability of this event is 1/d(u) ∈ (2^{-j}, 2^{-j+1}], so we can safely choose the p of lemma ?? to be p = 2^{-j}.

Consider the different p_j's of the buckets. Fix some j. If p_j came out of case 1 of lemma ?? then, since |R'_j| ≥ φ 2^j / 16, we have p_j ≥ φ/64.
If p j came out of case, then p j = 3. In general, p j 3, and p j. i=0 p j Let us divide the unit into segments of exponentially decreasing length: that is, [ [, ),, [ 4),..., j+, j),.... For each j, let us put each bucket R j into the segment containing its p j. Observe that there are at most lg lg segments. N 9

Fix any non-empty segment l. Let S_l be the union of the buckets in segment l. Observe that if we let the nodes in the buckets of S_l run the process for 2^{l+1} rounds, we have that, for each bucket R'_j in segment l,

Pr[ G_j ≥ |R'_j| / (8 p_j) ] ≥ 1 - (1 - p_j)^{2^{l+1}} ≥ 1 - e^{-1},

where G_j is the total volume of the nodes in R'_j that perform a PULL from v in the 2^{l+1} rounds.

Now, we can apply lemma ??, choosing X_j to be equal to v_j = |R'_j| / (8 p_j) if bucket R'_j in segment l (buckets can be ordered arbitrarily) is such that G_j ≥ v_j, and 0 otherwise. Choosing p = 1 - e^{-1} and q = 1/e, we obtain:

Pr[ G_{S_l} ≥ |S_l| / (256 P_{S_l}) ] ≥ 1 - e^{-1}. □

The following corollary follows from Lemma 5.5. (We prove in Appendix A that, constants aside, it is the best possible.)

Corollary 5.6. Assume v_i ∈ U_i. Then, there exists p_i ∈ (φ/64, 1] such that

Pr[ G(v_i) ≥ d^{+B_i}(v_i) / (5000 p_i lg(1/φ)) ] ≥ 1 - e^{-1},

where G(v_i) is the total volume of the nodes in N^+(v_i) that perform a PULL from v_i, or that v_i pushes the information to, in 1/p_i rounds.

Proof. If v_i ∈ Û_i^push, the same reasoning of Lemma 5.2 applies. If v_i ∈ Û_i^pull, then we apply Lemma 5.5, choosing the part S with the largest cardinality. By the bound on the number of parts, we will have

|S| ≥ ( (1/3) d^{+B_i}(v_i) ) / ( 6 + lg(1/φ) ) ≥ d^{+B_i}(v_i) / (18 lg(1/φ)),

which implies the corollary. □

We now prove the main theorem of the subsection:

Theorem 5.7. Let S be the set of informed nodes, vol(S) ≤ |E|. Then there exists some p ≥ Ω(φ) such that, if S' is the set of informed nodes after O(1/p) steps, then with Ω(1) probability,

vol(S') ≥ (1 + Ω( φ / (p log(1/φ)) )) vol(S).

Corollary 5.6 is a generalization of Lemma 5.2, which would lead to our result if we could prove an analogue of the two-cases property of the previous subsection. Unfortunately, the final gain we might need to aim for could be larger than the cut; this inhibits the use of the two-cases property. Still, by using a strengthening of the two-cases property, we will prove Theorem 5.7 with an approach similar to the one of Theorem 5.4.
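Before the proof, the qualitative behaviour that Theorems 5.4 and 5.7 quantify can be observed in a direct simulation. The sketch below is ours, not the paper's; `push_pull_rounds` and `barbell` are hypothetical helpers. It compares PUSH-PULL on a constant-conductance graph (a clique) against a graph with a bottleneck cut (two cliques joined by a single edge, whose conductance is Θ(1/m^2)).

```python
import random

def push_pull_rounds(adj, source, rng):
    """Synchronous PUSH-PULL: every node contacts one uniformly random
    neighbour per round; return the number of rounds to inform everyone."""
    n = len(adj)
    informed = [False] * n
    informed[source] = True
    count, rounds = 1, 0
    while count < n:
        newly = []
        for v in range(n):
            u = rng.choice(adj[v])
            if informed[v] != informed[u]:   # exactly one endpoint knows
                newly.append(u if informed[v] else v)
        for w in newly:
            if not informed[w]:
                informed[w] = True
                count += 1
        rounds += 1
    return rounds

def barbell(m):
    """Two m-cliques joined by a single edge: conductance Theta(1/m^2)."""
    adj = [[u for u in range(m) if u != v] for v in range(m)]
    adj += [[m + u for u in range(m) if u != v] for v in range(m)]
    adj[m - 1].append(m)
    adj[m].append(m - 1)
    return adj

rng = random.Random(1)
clique = [[u for u in range(32) if u != v] for v in range(32)]
print(push_pull_rounds(clique, 0, rng))       # high conductance: few rounds
print(push_pull_rounds(barbell(16), 0, rng))  # bottleneck: many more rounds
```

Typically the clique finishes in a handful of rounds, while the barbell has to wait for the single bridge edge to be selected, in line with the 1/φ dependence of the bounds above.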

Proof. We say that an edge in the cut (S, V - S) is easy if its endpoint w in V - S is such that d_S(w) ≥ (φ/2) d(w). Then, to overcome the issue just noted, we consider two cases separately: (a) at least half of the edges in the cut are easy, or (b) less than half of the edges in the cut are easy.

In case (a) we bucket the easy nodes in Γ(S) (the neighbourhood of S) in the following way. Bucket D_i, for i = 0, 1, ..., ⌈lg(2/φ)⌉, will contain all the nodes w in Γ(S) such that 2^{-(i+1)} < d_S(w)/d(w) ≤ 2^{-i}. Now let D_j be the bucket maximizing cut(S, D_j) (breaking ties arbitrarily). For any node v ∈ D_j we have that its probability to pull the information in one step is at least 2^{-(j+1)}. So, the probability of v to pull the information in 2^{j+1} rounds is at least 1 - e^{-1}. Hence, by applying lemma ??, we get that with probability greater than or equal to 1 - e^{-1} we gain a set of new nodes of volume at least vol(D_j)/4 in 2^{j+1} rounds. But,

vol(D_j) ≥ 2^j cut(S, D_j) ≥ 2^j cut(S, Γ(S)) / (2 lg(2/φ)) ≥ 2^j φ vol(S) / (2 lg(2/φ)).

Thus, in this first case, we gain with probability at least 1 - e^{-1} a set of new nodes of volume Ω( 2^j φ vol(S) / lg(1/φ) ) in 2^{j+1} rounds. By the reasoning presented at the beginning of Subsection 5.2, the claim follows.

Now let us consider the second case; recall that in this case at least half of the edges in the cut point to nodes u in Γ(S) having d_S(u) < (φ/2) d(u). We then replace the original two-cases property with the strong two-cases property:

(a') Σ_{v ∈ H_t ∩ (V-S)} d_v^{B_t} ≥ (1/1000) cut(S, V - S), and
(b') Σ_{v ∈ H_t ∩ S} d_v^{+B_t} ≥ (1/8) cut(S, V - S).

As before, at least one of (a') and (b') has to be true. If (a') happens to be true then we are done, since the total volume of the new nodes obtained will be greater than or equal to

Σ_{v ∈ H_t ∩ (V-S)} d_v ≥ Σ_{v ∈ H_t ∩ (V-S)} d_v^{B_t} ≥ (1/1000) cut(S, V - S).

By Corollary 5.6, we will wait at most w rounds for the cut (S, V - S), for some w ∈ O(1/φ). Thus, if (a') holds, we are guaranteed to obtain a new set of nodes of total volume Ω(cut(S, V - S)) in w rounds, which implies our main claim.
We now show that if (b') holds, then with Θ(1) probability our total gain will be at least Ω( w · cut(S, V - S) / log(1/φ) ) in w rounds, for some w ∈ O(1/φ).

Observe that each v_i ∈ H_t ∩ S, when it was considered by the process, was given some probability p_i ∈ (φ/64, 1] by Corollary 5.6. We partition H_t ∩ S in buckets according to the probabilities p_i: the j-th bucket will contain all the nodes v_i in H_t ∩ S such that 2^{-(j+1)} < p_i ≤ 2^{-j}. Recalling that B_i is the set of informed nodes when node v_i is considered, we let F be the bucket that maximizes Σ_{v_i ∈ F} d^{+B_t}(v_i).


More information

Theorem 1.7 [Bayes' Law]: Assume that,,, are mutually disjoint events in the sample space s.t.. Then Pr( )

Theorem 1.7 [Bayes' Law]: Assume that,,, are mutually disjoint events in the sample space s.t.. Then Pr( ) Theorem 1.7 [Bayes' Law]: Assume that,,, are mutually disjoint events in the sample space s.t.. Then Pr Pr = Pr Pr Pr() Pr Pr. We are given three coins and are told that two of the coins are fair and the

More information

Coupling. 2/3/2010 and 2/5/2010

Coupling. 2/3/2010 and 2/5/2010 Coupling 2/3/2010 and 2/5/2010 1 Introduction Consider the move to middle shuffle where a card from the top is placed uniformly at random at a position in the deck. It is easy to see that this Markov Chain

More information

Graph Theory. Thomas Bloom. February 6, 2015

Graph Theory. Thomas Bloom. February 6, 2015 Graph Theory Thomas Bloom February 6, 2015 1 Lecture 1 Introduction A graph (for the purposes of these lectures) is a finite set of vertices, some of which are connected by a single edge. Most importantly,

More information

Santa Claus Schedules Jobs on Unrelated Machines

Santa Claus Schedules Jobs on Unrelated Machines Santa Claus Schedules Jobs on Unrelated Machines Ola Svensson (osven@kth.se) Royal Institute of Technology - KTH Stockholm, Sweden March 22, 2011 arxiv:1011.1168v2 [cs.ds] 21 Mar 2011 Abstract One of the

More information

A Linear Round Lower Bound for Lovasz-Schrijver SDP Relaxations of Vertex Cover

A Linear Round Lower Bound for Lovasz-Schrijver SDP Relaxations of Vertex Cover A Linear Round Lower Bound for Lovasz-Schrijver SDP Relaxations of Vertex Cover Grant Schoenebeck Luca Trevisan Madhur Tulsiani Abstract We study semidefinite programming relaxations of Vertex Cover arising

More information

Kernelization Lower Bounds: A Brief History

Kernelization Lower Bounds: A Brief History Kernelization Lower Bounds: A Brief History G Philip Max Planck Institute for Informatics, Saarbrücken, Germany New Developments in Exact Algorithms and Lower Bounds. Pre-FSTTCS 2014 Workshop, IIT Delhi

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Limitations of Algorithms We conclude with a discussion of the limitations of the power of algorithms. That is, what kinds

More information

CSCE 750 Final Exam Answer Key Wednesday December 7, 2005

CSCE 750 Final Exam Answer Key Wednesday December 7, 2005 CSCE 750 Final Exam Answer Key Wednesday December 7, 2005 Do all problems. Put your answers on blank paper or in a test booklet. There are 00 points total in the exam. You have 80 minutes. Please note

More information

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees Francesc Rosselló 1, Gabriel Valiente 2 1 Department of Mathematics and Computer Science, Research Institute

More information

Critical Reading of Optimization Methods for Logical Inference [1]

Critical Reading of Optimization Methods for Logical Inference [1] Critical Reading of Optimization Methods for Logical Inference [1] Undergraduate Research Internship Department of Management Sciences Fall 2007 Supervisor: Dr. Miguel Anjos UNIVERSITY OF WATERLOO Rajesh

More information

arxiv:quant-ph/ v1 15 Apr 2005

arxiv:quant-ph/ v1 15 Apr 2005 Quantum walks on directed graphs Ashley Montanaro arxiv:quant-ph/0504116v1 15 Apr 2005 February 1, 2008 Abstract We consider the definition of quantum walks on directed graphs. Call a directed graph reversible

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

Oblivious and Adaptive Strategies for the Majority and Plurality Problems

Oblivious and Adaptive Strategies for the Majority and Plurality Problems Oblivious and Adaptive Strategies for the Majority and Plurality Problems Fan Chung 1, Ron Graham 1, Jia Mao 1, and Andrew Yao 2 1 Department of Computer Science and Engineering, University of California,

More information

CORRECTNESS OF A GOSSIP BASED MEMBERSHIP PROTOCOL BY (ANDRÉ ALLAVENA, ALAN DEMERS, JOHN E. HOPCROFT ) PRATIK TIMALSENA UNIVERSITY OF OSLO

CORRECTNESS OF A GOSSIP BASED MEMBERSHIP PROTOCOL BY (ANDRÉ ALLAVENA, ALAN DEMERS, JOHN E. HOPCROFT ) PRATIK TIMALSENA UNIVERSITY OF OSLO CORRECTNESS OF A GOSSIP BASED MEMBERSHIP PROTOCOL BY (ANDRÉ ALLAVENA, ALAN DEMERS, JOHN E. HOPCROFT ) PRATIK TIMALSENA UNIVERSITY OF OSLO OUTLINE q Contribution of the paper q Gossip algorithm q The corrected

More information

Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems

Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Rani M. R, Mohith Jagalmohanan, R. Subashini Binary matrices having simultaneous consecutive

More information

Integer Linear Programs

Integer Linear Programs Lecture 2: Review, Linear Programming Relaxations Today we will talk about expressing combinatorial problems as mathematical programs, specifically Integer Linear Programs (ILPs). We then see what happens

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003 CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming

More information

The Strong Largeur d Arborescence

The Strong Largeur d Arborescence The Strong Largeur d Arborescence Rik Steenkamp (5887321) November 12, 2013 Master Thesis Supervisor: prof.dr. Monique Laurent Local Supervisor: prof.dr. Alexander Schrijver KdV Institute for Mathematics

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

5.4 Continuity: Preliminary Notions

5.4 Continuity: Preliminary Notions 5.4. CONTINUITY: PRELIMINARY NOTIONS 181 5.4 Continuity: Preliminary Notions 5.4.1 Definitions The American Heritage Dictionary of the English Language defines continuity as an uninterrupted succession,

More information

The expansion of random regular graphs

The expansion of random regular graphs The expansion of random regular graphs David Ellis Introduction Our aim is now to show that for any d 3, almost all d-regular graphs on {1, 2,..., n} have edge-expansion ratio at least c d d (if nd is

More information

ON THE NUMBER OF ALTERNATING PATHS IN BIPARTITE COMPLETE GRAPHS

ON THE NUMBER OF ALTERNATING PATHS IN BIPARTITE COMPLETE GRAPHS ON THE NUMBER OF ALTERNATING PATHS IN BIPARTITE COMPLETE GRAPHS PATRICK BENNETT, ANDRZEJ DUDEK, ELLIOT LAFORGE December 1, 016 Abstract. Let C [r] m be a code such that any two words of C have Hamming

More information

ACO Comprehensive Exam October 14 and 15, 2013

ACO Comprehensive Exam October 14 and 15, 2013 1. Computability, Complexity and Algorithms (a) Let G be the complete graph on n vertices, and let c : V (G) V (G) [0, ) be a symmetric cost function. Consider the following closest point heuristic for

More information

Randomized Algorithms

Randomized Algorithms Randomized Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours

More information

Time Synchronization

Time Synchronization Massachusetts Institute of Technology Lecture 7 6.895: Advanced Distributed Algorithms March 6, 2006 Professor Nancy Lynch Time Synchronization Readings: Fan, Lynch. Gradient clock synchronization Attiya,

More information

Theoretical Cryptography, Lecture 10

Theoretical Cryptography, Lecture 10 Theoretical Cryptography, Lecture 0 Instructor: Manuel Blum Scribe: Ryan Williams Feb 20, 2006 Introduction Today we will look at: The String Equality problem, revisited What does a random permutation

More information

Algebraic gossip on Arbitrary Networks

Algebraic gossip on Arbitrary Networks Algebraic gossip on Arbitrary etworks Dinkar Vasudevan and Shrinivas Kudekar School of Computer and Communication Sciences, EPFL, Lausanne. Email: {dinkar.vasudevan,shrinivas.kudekar}@epfl.ch arxiv:090.444v

More information

Efficient routing in Poisson small-world networks

Efficient routing in Poisson small-world networks Efficient routing in Poisson small-world networks M. Draief and A. Ganesh Abstract In recent work, Jon Kleinberg considered a small-world network model consisting of a d-dimensional lattice augmented with

More information

Strongly chordal and chordal bipartite graphs are sandwich monotone

Strongly chordal and chordal bipartite graphs are sandwich monotone Strongly chordal and chordal bipartite graphs are sandwich monotone Pinar Heggernes Federico Mancini Charis Papadopoulos R. Sritharan Abstract A graph class is sandwich monotone if, for every pair of its

More information

Lecture 1: Contraction Algorithm

Lecture 1: Contraction Algorithm CSE 5: Design and Analysis of Algorithms I Spring 06 Lecture : Contraction Algorithm Lecturer: Shayan Oveis Gharan March 8th Scribe: Mohammad Javad Hosseini Disclaimer: These notes have not been subjected

More information

On shredders and vertex connectivity augmentation

On shredders and vertex connectivity augmentation On shredders and vertex connectivity augmentation Gilad Liberman The Open University of Israel giladliberman@gmail.com Zeev Nutov The Open University of Israel nutov@openu.ac.il Abstract We consider the

More information

Hardness of Embedding Metric Spaces of Equal Size

Hardness of Embedding Metric Spaces of Equal Size Hardness of Embedding Metric Spaces of Equal Size Subhash Khot and Rishi Saket Georgia Institute of Technology {khot,saket}@cc.gatech.edu Abstract. We study the problem embedding an n-point metric space

More information

Random Lifts of Graphs

Random Lifts of Graphs 27th Brazilian Math Colloquium, July 09 Plan of this talk A brief introduction to the probabilistic method. A quick review of expander graphs and their spectrum. Lifts, random lifts and their properties.

More information

Near-domination in graphs

Near-domination in graphs Near-domination in graphs Bruce Reed Researcher, Projet COATI, INRIA and Laboratoire I3S, CNRS France, and Visiting Researcher, IMPA, Brazil Alex Scott Mathematical Institute, University of Oxford, Oxford

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1+o(1))2 ( 1)/2.

More information

The Probabilistic Method

The Probabilistic Method Lecture 3: Tail bounds, Probabilistic Method Today we will see what is nown as the probabilistic method for showing the existence of combinatorial objects. We also review basic concentration inequalities.

More information

Acyclic subgraphs with high chromatic number

Acyclic subgraphs with high chromatic number Acyclic subgraphs with high chromatic number Safwat Nassar Raphael Yuster Abstract For an oriented graph G, let f(g) denote the maximum chromatic number of an acyclic subgraph of G. Let f(n) be the smallest

More information

Decomposing oriented graphs into transitive tournaments

Decomposing oriented graphs into transitive tournaments Decomposing oriented graphs into transitive tournaments Raphael Yuster Department of Mathematics University of Haifa Haifa 39105, Israel Abstract For an oriented graph G with n vertices, let f(g) denote

More information

Radio communication in random graphs

Radio communication in random graphs Journal of Computer and System Sciences 72 (2006) 490 506 www.elsevier.com/locate/jcss Radio communication in random graphs R. Elsässer a,,1,l.garsieniec b a Department of Mathematics, University of California,

More information

Lower Bounds for Testing Bipartiteness in Dense Graphs

Lower Bounds for Testing Bipartiteness in Dense Graphs Lower Bounds for Testing Bipartiteness in Dense Graphs Andrej Bogdanov Luca Trevisan Abstract We consider the problem of testing bipartiteness in the adjacency matrix model. The best known algorithm, due

More information

Theorem (Special Case of Ramsey s Theorem) R(k, l) is finite. Furthermore, it satisfies,

Theorem (Special Case of Ramsey s Theorem) R(k, l) is finite. Furthermore, it satisfies, Math 16A Notes, Wee 6 Scribe: Jesse Benavides Disclaimer: These notes are not nearly as polished (and quite possibly not nearly as correct) as a published paper. Please use them at your own ris. 1. Ramsey

More information

Testing Graph Isomorphism

Testing Graph Isomorphism Testing Graph Isomorphism Eldar Fischer Arie Matsliah Abstract Two graphs G and H on n vertices are ɛ-far from being isomorphic if at least ɛ ( n 2) edges must be added or removed from E(G) in order to

More information

CMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017

CMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017 CMSC CMSC : Lecture Greedy Algorithms for Scheduling Tuesday, Sep 9, 0 Reading: Sects.. and. of KT. (Not covered in DPV.) Interval Scheduling: We continue our discussion of greedy algorithms with a number

More information

Augmenting Outerplanar Graphs to Meet Diameter Requirements

Augmenting Outerplanar Graphs to Meet Diameter Requirements Proceedings of the Eighteenth Computing: The Australasian Theory Symposium (CATS 2012), Melbourne, Australia Augmenting Outerplanar Graphs to Meet Diameter Requirements Toshimasa Ishii Department of Information

More information

Filters in Analysis and Topology

Filters in Analysis and Topology Filters in Analysis and Topology David MacIver July 1, 2004 Abstract The study of filters is a very natural way to talk about convergence in an arbitrary topological space, and carries over nicely into

More information

Claw-free Graphs. III. Sparse decomposition

Claw-free Graphs. III. Sparse decomposition Claw-free Graphs. III. Sparse decomposition Maria Chudnovsky 1 and Paul Seymour Princeton University, Princeton NJ 08544 October 14, 003; revised May 8, 004 1 This research was conducted while the author

More information

COMP Analysis of Algorithms & Data Structures

COMP Analysis of Algorithms & Data Structures COMP 3170 - Analysis of Algorithms & Data Structures Shahin Kamali Computational Complexity CLRS 34.1-34.4 University of Manitoba COMP 3170 - Analysis of Algorithms & Data Structures 1 / 50 Polynomial

More information

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden 1 Selecting Efficient Correlated Equilibria Through Distributed Learning Jason R. Marden Abstract A learning rule is completely uncoupled if each player s behavior is conditioned only on his own realized

More information

Proving lower bounds in the LOCAL model. Juho Hirvonen, IRIF, CNRS, and Université Paris Diderot

Proving lower bounds in the LOCAL model. Juho Hirvonen, IRIF, CNRS, and Université Paris Diderot Proving lower bounds in the LOCAL model Juho Hirvonen, IRIF, CNRS, and Université Paris Diderot ADGA, 20 October, 2017 Talk outline Sketch two lower bound proof techniques for distributed graph algorithms

More information