Self-stabilizing universal algorithms


Paolo Boldi    Sebastiano Vigna
Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano, Italy

Abstract. We prove the existence of a universal self-stabilizing algorithm, i.e., an algorithm which allows one to stabilize a distributed system to a desired behaviour (as long as an algorithm stabilizing to that behaviour exists). Previous proposals required drastic increases in asymmetry and knowledge in order to work, while our algorithm does not use any additional knowledge, and does not require more symmetry-breaking conditions than available; thus, it is also stabilizing with respect to changes in the topology and in the identifiers assigned to each processor. We prove a tight quiescence time of n + Δ for a synchronous network of n processors and diameter Δ. The algorithm can be made finite state with a negligible multiplicative loss. If the activation is asynchronous, we propose an algorithm with O(Δn²) quiescence time. Our results hold for a wide variety of shared-memory models, including unidirectional, wireless and uniform networks.

1 Introduction

A system is self-stabilizing if, after a finite number of steps, it cannot deviate from a specified behaviour. Self-stabilization was introduced by Dijkstra in his celebrated paper [8], and has since become an important framework for the study of fault-tolerant computations. The ample possibility given to an adversary of choosing the initial state makes it extremely difficult to devise self-stabilizing algorithms. In order to overcome this problem, there have been some recent attempts in the literature to build general self-stabilizing programs (also called extensions), i.e., superimposed distributed protocols which make the underlying behaviour self-stabilizing, or to solve classical problems (election, spanning tree construction, ...) in a self-stabilizing way. For instance, [9] proposes a combination of self-stabilizing global snapshots and resets to this purpose, while [1] and [3] use local conditions in order to initiate a global or a local correction, respectively.

However, all the previously mentioned methods require additional asymmetry and knowledge: the self-stabilizing extension of [9] requires a distinct process and knowledge of the number of nodes of the network, which may not be available. The algorithm described in [1] needs unique identifiers, and [3] requires the ability to distinguish incident links. It is a major open problem (if only of theoretical importance) to establish (or prove impossible) the existence of a universal self-stabilizing algorithm: by universal we mean that such an algorithm should be able to self-stabilize to any behaviour (specified as an input, for instance using temporal logic) for which a self-stabilizing algorithm exists, under given conditions of asymmetry and knowledge.

In this paper we prove a surprising and apparently unnatural result, viz., that such an algorithm exists, and that it has a tight synchronous quiescence time of n + Δ, where n is the number of processors in the system and Δ its diameter. The program is the same for all behaviours, except for the calls to an oracle depending on the desired behaviour. The algorithm can be made finite state, with a negligible multiplicative loss in quiescence time. If asynchronous activation is assumed, we prove a quiescence time of O(Δn²), but we cannot prove that the algorithm is universal; we however conjecture that the algorithm really has quiescence time O(n²). There is again a negligible multiplicative loss in case state finiteness is required.

A widely accepted tenet about self-stabilization is that it is a difficult coordination problem, independently of the resources available. We show that, on the contrary, unless strong bounds are imposed on resources and quiescence time, the coordination part of the problem is trivial, in the sense that it can be completely solved once and for all (either it is impossible, or there is a universal algorithm for it).

In order to prove our results, we exploit a mix of classical techniques for self-stabilization and a series of results from the theory of anonymous networks. A network is anonymous (or uniform) if all processors are identical and start from the same initial state; this makes them indistinguishable. The systematic study of such networks was initiated by Angluin [2], and there is now a rather complete characterization of what is computable on such networks for several problems, such as election [4] and function computation [12]. In fact, the theory can also be used if the processors have identifiers, as long as the initial states are the same, and this observation is the key to our proofs. There is clearly a link between anonymous networks and self-stabilizing systems, since one of the possible choices for the adversary is to start all processors from the same state. Thus, in a sense that will be made precise, there is no way a network can self-stabilize to something which it cannot compute anonymously.

Our main result is obtained in two steps: first of all, we give a result of independent interest characterizing those predicates which are computable anonymously on a class of networks. Then we show that there is a universal self-stabilizing program which converges to all such predicates, by turning the classical algorithms from anonymous-network theory into self-stabilizing ones. The main mathematical ingredient of our proofs is the theory of graph fibrations, which allows an easy characterization of the behaviour of anonymous networks under a wide variety of assumptions (such as unidirectionality and wirelessness). Thus, our results are true for any level of knowledge of the network, i.e., for any class of networks, and for any structural requirement (bidirectionality, maximum number of distinct identifiers and so on). Note however that our upper bounds are not tight for all classes: for instance, the presence of sense of direction, or (as a special case) of distinct identifiers, reduces the quiescence time to Δ + 1.

Note also that our algorithms tolerate dynamic changes in the network structure: at any time the adversary can change the topology of the network, or corrupt the identifiers of the processors, as long as the resulting network fits the knowledge assumed (the last characteristic, in particular, was, as far as we know, not achieved by any previous algorithm).
After the quiescence time, the desired behaviour will restart.

For clarity, in this extended abstract we restrict ourselves to very simple behaviours, specified by a set of safe global states P which must be reached. In the full paper we will show how to extend our techniques to general behaviours described in the future fragment of temporal logic. The definition of self-stabilization adopted here corresponds to "eventually always P" (there seems to be a certain disagreement in the literature about the exact definition of self-stabilization: for instance, [9], [7] and [1] propose different, and increasingly stronger, definitions). This restricts heavily the kind of behaviours considered; however, our results for the synchronous case extend immediately to arbitrary behaviours (including, for instance, election, mutual exclusion and topology reconstruction), and we hope to be able to apply the same methods to the asynchronous case. In order to give an idea of the techniques we use, we give a detailed proof for the infinite-state, synchronous algorithm; in the remaining cases we just sketch the proofs.

Our results are mainly of theoretical interest, because of the large amount of information exchanged by the processors. Nonetheless, they provide for the first time general upper and lower bounds for self-stabilization. Moreover, we obtain a characterization of the behaviours to which self-stabilization is possible.

2 Graph-theoretical definitions

A (directed) (multi)graph G is defined by a nonempty set V_G = {1, 2, ..., n} of vertices and a set A_G of arcs, and by two functions s_G, t_G : A_G → V_G which specify the source and the target of each arc (we shall drop the subscript whenever no confusion is possible). A self-loop is an arc with the same source and target. We write y → x when there is an arc a with s(a) = y and t(a) = x. With Δ_G we denote the diameter of a graph G, again dropping the subscript whenever no confusion is possible.

A global state in X for a graph G with n nodes is a vector x = ⟨x_1, x_2, ..., x_n⟩ ∈ X^n (this is a standard vertex colouring, but since we shall also deal with arc colourings, and the vertex colouring will really be a global state assignment, we prefer to use a distinctive name for it). We shall write G^x for the graph G with global state x.

An (in-directed) tree is a graph with a selected node, the root, such that any other node has exactly one directed path to the root. If T is a tree, we write h(T) for its height (the length of the longest directed path). In all the trees we consider, all maximal paths have length equal to the height. Finally, we write T↾k for the tree T truncated at height k, i.e., the tree obtained by eliminating all nodes at distance greater than k from the root.

3 The model

The underlying structure of a network is given by a strongly connected graph G, where each arc corresponds to a link between processors (parallel links and self-loops are allowed). When a processor i executes one step of its computation, it changes its state

in a way which depends on its own state and on the states of its in-neighbours (i.e., of the processors j such that there is a link going from j to i). A network is synchronous if all processors execute each step of computation at the same time. A network is asynchronous if at every step an arbitrary set of enabled processors executes a step (a processor is enabled if it is not on a fixed point); this is equivalent to the distributed-daemon assumption. The processors which take a step are chosen adversarially, i.e., our algorithm must work for every possible activation.

The arcs of the graph G representing the network carry a local output labelling: if a processor i has outdegree d, then it uses the numbers {1, 2, ..., e}, with 1 ≤ e ≤ d, for labelling its outgoing arcs. Analogously, the arcs of G carry a local input labelling of the same kind. We can think of this as the processors being partially aware of which output (input) port is associated with a given link. In general we shall not make any assumption on the level of wirelessness of any processor, i.e., in our networks all cases from e = 1 to e = d are possible. We can describe any arc labelling compactly and uniformly as follows: the local labellings induce a standard arc colouring of G on the set N²; each of the vertices adjacent to an arc contributes to the colouring with a number, namely, an arc labelled i by its source processor and j by its target processor gets the colour ⟨i, j⟩. Moreover, each processor has an identifier, given by a positive natural number smaller than or equal to the number of nodes of the network. The identifiers are arbitrary (they need not be distinct). Formally, a network is a strongly connected graph with a colouring induced by a local labelling and an assignment of identifiers to processors.

A program is given by a state space S and a transition function δ : N × S × ℳ(N² × S) → S, where ℳ(N² × S) is the set of multisets over N² × S. When a processor takes a step, it computes its new state on the basis of the (coloured) neighbourhood relation. Namely, if a processor i with identifier r in state s has k incoming arcs with colours c_1, c_2, ..., c_k (not necessarily distinct) and sources given by processors i_1, i_2, ..., i_k (which need not be distinct, or different from i) in states s_1, s_2, ..., s_k, then the next state of i is δ(r, s, {⟨c_1, s_1⟩, ⟨c_2, s_2⟩, ..., ⟨c_k, s_k⟩}). The orbit of a processor is the sequence of states through which the processor passes during the computation of the network.

4 Predicates and classes of networks

A predicate on a set X is a set P ⊆ ⋃_{n≥0} X^n. Intuitively, a predicate specifies a subset of legal global states for all networks whose nodes have X as (part of the) local state space. For instance, if X = {0, 1}, the predicate made up of all tuples with all elements equal to 0 except exactly one is the predicate describing a selected processor. We say that P can be computed (anonymously) on a class of networks C iff there is an algorithm δ with state space X × Y, for some set Y, and an initial state ⟨x, y⟩ ∈ X × Y such that every network in C terminates every computation in a global state satisfying P (in the sense that its projection on X^n belongs to P) when all processors are started from ⟨x, y⟩.
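To make the model concrete, the following small sketch (in Python; all names, such as synchronous_step, are ours and purely illustrative, not taken from the paper) simulates one synchronous round: every processor applies the same transition function δ to its identifier, its current state, and the multiset of ⟨arc colour, in-neighbour state⟩ pairs on its incoming arcs.

from collections import Counter

def synchronous_step(nodes, arcs, ids, state, delta):
    """One synchronous round: every processor applies delta to its identifier,
    its own state and the multiset of (colour, source-state) pairs found on its
    incoming arcs. Returns the new global state as a dict node -> state."""
    new_state = {}
    for i in nodes:
        incoming = Counter()
        for (src, tgt, colour) in arcs:          # arcs are (source, target, colour)
            if tgt == i:
                incoming[(colour, state[src])] += 1
        new_state[i] = delta(ids[i], state[i], incoming)
    return new_state

# Tiny usage example: a directed 3-cycle in which every processor simply keeps
# the maximum identifier it has heard of (a flooding-style delta).
nodes = [0, 1, 2]
arcs = [(0, 1, (1, 1)), (1, 2, (1, 1)), (2, 0, (1, 1))]
ids = {0: 1, 1: 3, 2: 2}
state = dict(ids)
delta = lambda r, s, mset: max([s] + [st for ((_, st), _) in mset.items()])
for _ in range(3):
    state = synchronous_step(nodes, arcs, ids, state, delta)
print(state)                                     # {0: 3, 1: 3, 2: 3}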

We say that C can self-stabilize to P iff there is a program δ with local state space X × Y, for some set Y, such that for every network G ∈ C and for every choice of the initial state, the global states of every computation induced by δ on G ultimately satisfy P, i.e., there is a T such that for all t ≥ T the global state of G at the t-th step satisfies P in all computations. The smallest such T is called the quiescence time of δ.

Note that each processor may possess more or less information about the network it belongs to. This knowledge is represented by the class C: the greater the class, the smaller the knowledge. Common situations studied in the literature include knowledge of the whole network, of the underlying graph, of the number of nodes, or of some other graph-theoretical property. For instance, in the case in which the number of processors is known we shall have to specify an algorithm computing (or a program self-stabilizing to) a certain predicate on all networks with n nodes.

5 Graph fibrations

In this paper we exploit the notion of graph fibration [5]. A fibration formalizes the idea that processors which are connected by the same colours to processors behaving in the same way (with respect to the colours) will behave alike. Recall that a graph morphism f : G → H is given by a pair of functions f_V : V_G → V_H and f_A : A_G → A_H which commute with the source and target functions, i.e., s_H ∘ f_A = f_V ∘ s_G and t_H ∘ f_A = f_V ∘ t_G. In other words, a morphism maps nodes to nodes and arcs to arcs in such a way as to preserve the incidence relation. Colours on arcs and nodes must be preserved.

Definition 1. A fibration between (coloured) graphs G and B is a morphism ϕ : G → B such that for each arc a ∈ A_B and for each i ∈ V_G such that ϕ(i) = t(a) there is a unique arc ã^i ∈ A_G such that ϕ(ã^i) = a and t(ã^i) = i.

We recall some topological terminology. If ϕ : G → B is a fibration, G is called the (fibre) bundle and B the base of the fibration. We shall also say that G is fibred (over B). The fibre over a vertex i ∈ V_B is the set of vertices of G which are mapped to i, and will be denoted by ϕ^{-1}(i).

There is a very intuitive characterization of fibrations based on the concept of local isomorphism. A fibration ϕ : G → B induces an equivalence relation between the vertices of G, whose classes are precisely the fibres of ϕ. When two vertices i and j are equivalent (i.e., they are in the same fibre), there is a one-to-one correspondence between arcs ending in i and arcs ending in j which preserves colours, and such that the sources of any two related arcs are equivalent.

Let now G be a graph and i a vertex of G. We define a (possibly infinite) in-directed rooted arc-coloured (and, if G is so, node-coloured) tree G̃_i as follows: the nodes of G̃_i are the (finite) paths of G ending in i, the root of G̃_i being the empty path; there is an arc from the node π′ to the node π if π′ is obtained by adding an arc a at the beginning of π (this arc of G̃_i has the same colour as a).
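The construction just described can be sketched directly (Python; representation and names are ours): each node of the tree is stored as the path of arcs reaching i, and the tree is built level by level up to a chosen height.

def bundle_at(arcs, i, k):
    """Universal bundle of a graph at node i, truncated at height k.
    arcs is a list of (source, target, colour) triples; the result maps each
    tree node (a path ending at i, stored as a tuple of arcs) to the list of
    its children, each child being one arc longer than its parent."""
    tree = {(): []}                       # the root is the empty path at i
    frontier = [()]
    for _ in range(k):
        new_frontier = []
        for path in frontier:
            head = path[0][0] if path else i      # first vertex of the path
            for arc in arcs:
                if arc[1] == head:                # arcs entering the path's start
                    longer = (arc,) + path
                    tree[longer] = []
                    tree[path].append(longer)
                    new_frontier.append(longer)
        frontier = new_frontier
    return tree

# On a directed triangle the bundle at node 0 truncated at height 2 is a path.
arcs = [(0, 1, "a"), (1, 2, "b"), (2, 0, "c")]
print(sorted(len(p) for p in bundle_at(arcs, 0, 2)))   # [0, 1, 2]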

The tree G̃_i is called the universal bundle of G at i. We can define a graph morphism υ^i_G from G̃_i to G by mapping each node π of G̃_i (i.e., each path of G ending in i) to its starting vertex, and each arc of G̃_i to the corresponding arc of G. The following important property holds:

Lemma 2. For every vertex i of a graph G, the morphism υ^i_G : G̃_i → G is a fibration, called the universal fibration of G at i.

The universal bundle at i is a tree representing intuitively everything processor i can learn from the interaction with its neighbours; it plays the same rôle as the universal covering (called view in [11]) in the undirected case. Now, we define Ĝ as the graph obtained from G̃_i by identifying isomorphic subtrees; it is easy to verify that this construction does not depend on the choice of i. Clearly, one can construct a morphism µ_G : G → Ĝ mapping each vertex i ∈ V_G to (the equivalence class containing) G̃_i, and each arc of G to a corresponding arc of Ĝ. We note that µ_G is uniquely defined on the nodes, but not on the arcs (i.e., different definitions of µ_G are possible); however, this fact is irrelevant for the purposes of this paper.

Lemma 3. For each graph G, the morphism µ_G : G → Ĝ is a fibration, called a minimal fibration of G; Ĝ is called the minimum base.

In Figure 1 we illustrate these notions by showing a graph G, its minimum base Ĝ and a universal bundle G̃_i.

Fig. 1. A graph, its minimum base and a universal bundle
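One standard way to obtain the fibres of the minimal fibration, and hence the minimum base, is iterated refinement of the partition of the vertices: two vertices stay equivalent as long as their incoming arcs carry the same multiset of (colour, class of the source). The sketch below (Python; names ours) follows this route; n - 1 refinement rounds suffice for an n-node graph (cf. Lemma 4 in the next section).

def minimum_base(arcs, nodes, rounds):
    """Group the nodes by the isomorphism class of their truncated universal
    bundles, via iterated refinement, and assemble the minimum base: one node
    per class, and the incoming arcs of a fixed representative of each class."""
    cls = {i: 0 for i in nodes}                   # initially a single class
    for _ in range(rounds):
        signature = {
            i: (cls[i], tuple(sorted((c, cls[s]) for (s, t, c) in arcs if t == i)))
            for i in nodes
        }
        relabel = {sig: idx for idx, sig in enumerate(sorted(set(signature.values())))}
        cls = {i: relabel[signature[i]] for i in nodes}
    reps = {}
    for i in nodes:
        reps.setdefault(cls[i], i)                # one representative per class
    base_arcs = [(cls[s], cls[t], c) for (s, t, c) in arcs if reps[cls[t]] == t]
    return cls, base_arcs

# A uniformly coloured 4-cycle is fibred over a single node with one self-loop.
cls, base = minimum_base(arcs=[(0, 1, "a"), (1, 2, "a"), (2, 3, "a"), (3, 0, "a")],
                         nodes=[0, 1, 2, 3], rounds=3)
print(cls, base)        # all nodes in one fibre; base = [(0, 0, 'a')]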

6 Building the universal bundle distributedly

Now we have to face the problem of how a processor i can build G̃_i and Ĝ in an effective, distributed way; the following result, given in [10], is a step in this direction:

Lemma 4. If G has n nodes then, for all processors i and j, G̃_i ≅ G̃_j iff there is an isomorphism between the first n - 1 levels of the two trees.

The other lemma we need is from [5]; a trivial bundle is a graph which cannot be fibred nontrivially (a fibration is trivial iff it is an isomorphism).

Lemma 5. Let G be a graph with n nodes and diameter Δ, let B be a trivial bundle with at most n nodes, and suppose that, for some i ∈ V_G and j ∈ V_B, G̃_i and B̃_j are isomorphic up to level n + Δ: then B = Ĝ.

From these results we obtain two sufficient conditions under which a processor can build the minimum base:

Lemma 6. If the processors have knowledge of an upper bound N on the number of nodes, then they can compute their universal bundle, the minimum base, and the node of the minimum base they are mapped to by minimal fibrations.

Proof. Each processor i starts to build G̃_i; at the beginning of the computation the tree built so far is empty. Then, knowing G̃_j truncated at height k for all its in-neighbours j, processor i can build G̃_i truncated at height k + 1, and so on. By Lemma 4, after 2N - 1 ≥ n + Δ iterations the truncated tree obtained so far has the same isomorphism classes of subtrees of height N - 1 as the whole G̃_i, and so Ĝ can be built. Finally, each processor i knows which fibre of µ_G it belongs to because, by construction, the universal bundles of Ĝ coincide with those of G, in the sense that i ∈ V_G and j ∈ V_Ĝ have the same universal bundle iff j = µ_G(i).

Lemma 7. A processor i which knows G̃_i↾(n + Δ) can build Ĝ.

Proof. Consider the (finite) class of trivial bundles with fewer arcs than G̃_i↾(n + Δ) and using only the identifiers appearing in G̃_i↾(n + Δ). By Lemma 5 the minimum base is the minimum graph B of this class such that for some j we have G̃_i↾(n + Δ) = B̃_j↾(n + Δ).

Note that the bound n + Δ is tight: in [5] it is proved that the trivial bundles G_n and H_n of Figure 2, which have the same number of nodes n and diameter Δ = n - k + 2, have universal bundles (at suitable nodes) which are isomorphic up to level n + Δ - 1 but not up to level n + Δ.

Finally, what we proved in this section is true also for the asynchronous model; in that case, a suitable catch-up clock must be used to keep the processors synchronized.

7 Computable predicates

In order to state the fundamental theorem characterizing the predicates computable by a class of networks we need some notation and a lemma. If ϕ : G → B is a fibration and x is a global state of B, then we can obtain a global state x^ϕ of G by lifting the global state of B along each fibre, i.e., (x^ϕ)_i = x_{ϕ(i)}. Note that computability coincides for the synchronous and the asynchronous case.
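The lifting operation just defined, which is the key to Lemma 8 and Theorem 9 below, is easy to make concrete; in the following sketch (Python; names ours) a fibration is given simply as its map on vertices, and a predicate as a boolean function of the tuple of local values.

def lift(base_state, phi):
    """Lift a global state of the base B along a fibration phi : G -> B:
    node i of G receives the value of its image phi(i), i.e. (x^phi)_i = x_phi(i)."""
    return {i: base_state[phi[i]] for i in phi}

def lifted_satisfies(base_state, phi, predicate):
    """Check whether the lifting of base_state belongs to the predicate
    (the predicate is applied to the tuple of values, ordered by node)."""
    lifted = lift(base_state, phi)
    return predicate(tuple(lifted[i] for i in sorted(lifted)))

# Example: a 4-cycle is fibred over a 2-cycle by phi(i) = i mod 2; lifting a
# proper 2-colouring of the base yields a proper 2-colouring of the whole cycle.
phi = {0: 0, 1: 1, 2: 0, 3: 1}
base_state = {0: "red", 1: "blue"}
proper = lambda xs: all(xs[i] != xs[(i + 1) % len(xs)] for i in range(len(xs)))
print(lifted_satisfies(base_state, phi, proper))    # True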

Fig. 2. Graphs with similar bundles (the graphs G_n and H_n)

The following lemma is a generalization of the analogous result proved in [6].

Lemma 8. Let ϕ : G → B be a fibration. Then any algorithm computing a predicate P on G computes on B the predicate P^ϕ = {x ∈ X^{V_B} : x^ϕ ∈ P}.

Proof. Whenever a valued graph is fibred onto another one, all processors in the same fibre must behave identically. This happens because the local isomorphism property says that for any two processors i and j in the same fibre, and any c-coloured arc terminating at i and starting from k, there is a c-coloured arc terminating at j and starting from a processor in the same fibre as k; moreover, this association is a colour-preserving bijection between the arcs entering i and the arcs entering j. Thus, the orbits of all processors in a fibre of ϕ are exactly the same, and they are also equal to the orbit of the processor they lie over. This implies that at the end of the computation the global state of B will satisfy the condition in the statement above.

Theorem 9. Let C be a class of networks with a bounded number of nodes, and P a predicate on X. Then P is anonymously computable in C iff for every graph B there is a global state x^C_B of B such that for all G ∈ C and all fibrations ϕ : G → B we have (x^C_B)^ϕ ∈ P.

Note that the global state x^C_B plays a fundamental rôle in what follows. We assume that an oracle (i.e., a subroutine) is available which, on input ⟨C, B⟩, returns x^C_B. The computability and complexity bounds of this subroutine must be considered separately. Unless unlimited computation capabilities are assumed for the processors, P will be computable iff x^C_B is (in what follows, this always happens because C is always finite).

Proof. The condition is necessary. Suppose there is an algorithm δ computing P on C. Using the notation of Lemma 8, note that, for each graph B, the predicate

Q = ⋂_{G ∈ C, ϕ : G → B} P^ϕ

cannot be empty, because it is a predicate computed by δ on B. Then any x ∈ Q will satisfy the claim.

On the other hand, assuming the condition holds, we describe an algorithm which computes P on all the networks of C. Firstly, all processors build their universal bundles and the minimum base Ĝ using Lemma 6. Then processor i assumes state (x^C_Ĝ)_{µ_G(i)}; the conditions on P guarantee that (x^C_Ĝ)^{µ_G} ∈ P.

8 Self-stabilization: the synchronous case

Armed with our results about anonymous computability, we move to self-stabilization. In order to prove the following theorem we introduce a notation: if C is a class of networks, C_m ⊆ C is the subclass of the networks of C with at most m arcs.

Theorem 10. Let C be a class of synchronous networks, and P a predicate on X anonymously computable in every finite subclass of C. Then there is a program which self-stabilizes to P in n + Δ steps on every network of C with n nodes and diameter Δ.

Proof. The algorithm is very simple: at each step, processor i builds its candidate universal bundle T_i by combining the neighbours' candidates into a new tree U_i, finding the maximum k such that T_i↾k = U_i↾k, and setting T_i ← U_i↾(k + 1). Then each processor finds a tentative minimum base B_i, i.e., a trivial bundle B_i whose universal bundle at some node j ∈ V_{B_i}, truncated at height h(T_i), equals T_i (see Lemma 7). Finally, using the tuple x^{C_m}_{B_i} with m = h(T_i)·|A_{B_i}| given by Theorem 9, each processor changes its state in X (the index h(T_i)·|A_{B_i}| will, at stabilization, be the same for all processors, and an upper bound on the number of arcs of the network, as we are going to prove).

Let G be the network which is running the program, let n be its number of nodes and Δ its diameter. We will show that after 1 + Δ steps every processor i has T_i = G̃_i↾l for some l ≥ 1 + Δ, and that from then on the height of T_i grows by one at each step; thus, after n + Δ steps all nodes will correctly know the minimum base (because they know their own universal bundles up to height n + Δ; see Lemma 7).

Let now c be the minimum level of correctness of the T_i's in the initial state, i.e.,

c = min_i max{k : T_i↾k = G̃_i↾k}.

We first note that after t steps T_i↾(t + c) = G̃_i↾(t + c); this can be easily shown by induction on t. Thus, at the t-th computation step the height of any tree T_i is bounded from below by t + c.

We call a node i perfect at level t if at the t-th computation step T_i = G̃_i↾(t + c). Note that perfection is a stable property, i.e., a perfect node remains perfect thereafter.

Let ı be a node which minimizes the level of correctness in the initial state. Then after the first computation step ı is perfect. This happens because the maximum k such that T_ı↾k = U_ı↾k is certainly exactly c, and this implies that T_ı becomes G̃_ı↾(c + 1) (note that, by our choice of ı, all neighbours have at least c correct levels).

Finally, we show that any node i with a perfect in-neighbour becomes perfect at the next computation step. This happens because the new tree U_i will certainly be truncated at least at level t + c + 1 (because the perfect neighbour has height t + c), and this number is also a lower bound on the correct height of T_i at step t + 1.

It is now immediate that at the first computation step at least one node becomes perfect, and that perfection then propagates through the whole network within Δ further steps. For the sake of completeness, we give a high-level description of the algorithm:

Algorithm: synchronous stabilization to the predicate P for the class C

subroutine EnumerateGraphs(T : tree; u : integer) : graph;
  /* Returns the u-th element of an enumeration of all trivial bundles with at most
     |A_T| arcs and with nodes labelled by the identifiers appearing in T; the graphs
     are enumerated in such a way that the number of nodes is nondecreasing. */

subroutine NewTree(r : integer) : tree;
  /* Returns a new tree built from the multiset of states read from the in-neighbours
     and the local identifier. The tree has a root coloured by r and, for each element
     ⟨c, s⟩ of the multiset, a c-coloured incoming arc to which the corresponding value
     of T (extracted from s) is appended. The tree is truncated at the length of its
     shortest maximal path (so that it is complete). */

subroutine Oracle(B : graph; m : integer) : vector of X;
  /* Implements the oracle of Theorem 9 and returns x^{C_m}_B. */

const id : integer;
var   x : an element of X;
      T, U : tree;
      B : graph;
      k, p, j : integer;
      u : vector of X;

begin
  forever
    U ← NewTree(id);
    k ← max{l : U↾l = T↾l};
    T ← U↾(k + 1);
    p ← 0;
    repeat
      B ← EnumerateGraphs(T, p);
      p ← p + 1
    until B̃_j↾h(T) = T for some j ∈ V_B;
    u ← Oracle(B, h(T)·|A_B|);
    x ← u[j]
  loop
end
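The update at the heart of the loop above, k ← max{l : U↾l = T↾l} followed by T ← U↾(k + 1), can be rendered concretely as follows (Python; trees are represented as sets of paths, as in the earlier sketches, and the names are ours). The point of keeping exactly one level beyond the agreement level is that whatever the adversary planted in T above the first disagreement is discarded, while one freshly recomputed level of U is retained.

def truncate(paths, k):
    """The truncation T|k of a tree given as a set of paths (tuples of arcs)."""
    return {p for p in paths if len(p) <= k}

def update_candidate(T, U):
    """Tree update of Theorem 10: find the maximum level k on which the old
    candidate T and the freshly assembled tree U agree, and keep U truncated
    at k + 1."""
    height = max((len(p) for p in U), default=0)
    k = 0
    while k < height and truncate(T, k + 1) == truncate(U, k + 1):
        k += 1
    return truncate(U, k + 1)

# If T was corrupted above level 1, only the agreeing prefix of U survives:
T = {(), ("x",), ("x", "bogus")}
U = {(), ("x",), ("x", "y"), ("x", "y", "z")}
print(sorted(update_candidate(T, U), key=len))    # [(), ('x',), ('x', 'y')]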

We can finally state our characterization:

Theorem 11. Let C be a class of synchronous networks, and P a predicate on X. Then C can self-stabilize to P iff P is anonymously computable on all finite subclasses of C.

Proof. If there is a program δ self-stabilizing to P in C, and D is a finite subclass of C, let T be the maximum stabilization time of δ on all the networks of D when all processors are started from a fixed, arbitrary state x̄. Then we can define an algorithm computing P anonymously as follows: all processors start from state x̄ and apply the program δ for T steps, then they stop. Clearly, the resulting global state will belong to P. The other side of the implication follows from Theorem 10.

Note that the asymmetry requirements of [9] and [1] are such that all networks are trivial bundles. This is the reason why they can prove self-stabilization to any desired behaviour. The previous theorem, for instance, allows one to apply the results of [6] in order to establish which functions can be computed in a self-stabilizing way, or the results of [4] in order to establish which classes of networks admit self-stabilizing election protocols.

We now show that our bound on the quiescence time is tight. Let G be the class of networks without distinct output labels whose underlying graphs are the graphs G_n and H_n of Figure 2, for all n and Δ. Let also G_N ⊆ G be the class of networks of G with at most N nodes. We shall use the following important property of G:

Lemma 12. Processor 1 of the network G_n and processor n of the network H_n have the same state for n + Δ - 1 steps of any anonymous computation.

From the theory of anonymous networks we know that this is true because their universal bundles truncated at height n + Δ - 1 are isomorphic. We are now going to define a predicate which essentially forces each processor to discover its own number.

Theorem 13. For all N, the quiescence time of G_N to the predicate P = {⟨1, 2, ..., n⟩ : n > 2} is at least n + Δ for at least N/2 networks.

Proof. If a network with underlying graph G_n stabilizes before n + Δ steps, then by Lemma 12 the corresponding network with underlying graph H_n cannot stabilize before n + Δ steps.

Corollary 14. The quiescence time of G to the predicate P = {⟨1, 2, ..., n⟩ : n > 2} is at least n + Δ for an infinite number of networks.

9 A finite state synchronous algorithm

The algorithm we discussed uses an unbounded number of states. This cannot be avoided in general, since it is possible to build predicates to which no finite-state algorithm can self-stabilize. Nonetheless, we can still characterize such predicates, and give universal algorithms also for the finite-state case; this can be done with a multiplicative quiescence-time loss of α(m), where α grows very slowly (in fact, more slowly than log*). More formally, if f(x) is the inverse of x ↦ x^x, then

α(x) = 0 if x ≤ 2, and α(x) = α(f(x)) + 1 otherwise.

A predicate P is uniform (with respect to C) iff ⋂_{G ∈ C, ϕ : G → B} P^ϕ is nonempty for every graph B. Clearly, a uniform predicate is anonymously computable on every finite subclass of C. Note that in this case the oracle value x^{C_m}_B can be made to depend only on B.

Theorem 15. Let C be a class of synchronous networks, and P a predicate on X computable in every finite subclass of C. Then a finite-state program which stabilizes C to P exists iff P is uniform. In this case, there is a finite-state program which self-stabilizes to P in (n + Δ)(α(m) + 1) steps on every network of C with n nodes, m arcs and diameter Δ.

Proof (sketch of the second part). Each processor keeps a guess m_i > 1 such that 2m_i - 1 levels of the universal bundle are sufficient in order to build the minimum base, and it never builds trees taller than 2m_i - 1. Moreover, at each step we update the guess by setting m_i ← max_{j → i, j ≠ i} m_j. In the worst case, after Δ steps every processor will have a guess not smaller than the maximum guess M in the initial state, and after M steps all processors will possess M correct levels. Now at least one processor can detect locally that M is not a correct guess, and thus will update its guess by m_i ← m_i^{m_i}. Again, after Δ steps every processor will possess the new guess. The number of required rounds is α(m), after which m_i is no longer increased. Clearly the algorithm is finite state on any given network (due to the conditions on P, the call to the oracle depends only on B).

Note that, unless a precise space bound is required, the loss can be reduced arbitrarily; for instance, by updating the guess using Ackermann's function we would have a much smaller loss; thus, the gap with our lower bound can be made arbitrarily small. We however conjecture that there is no universal finite-state self-stabilizing algorithm with O(n + Δ) quiescence time.

Remark. In the infinite-state case all processors end up with a clock (the height of the tree T) which is synchronized. This is the feature that allows us to generalize our results to predicates in temporal logic (or, more generally, to any behaviour specified as a sequence of tuples of states). In the case of Theorem 15 this is not true. However, we can still give a finite-state algorithm which provides all processors with a synchronized clock with at least K values, where K is a given constant. This is sufficient in order to self-stabilize to any finite-state behaviour to which the network can self-stabilize, since such a behaviour must be ultimately cyclic. A description of the algorithm will be given in the full paper; the main idea is to exploit the differences in the values of the clocks in order to estimate the size of the network, until stabilization (the clocks are updated with a standard catch-down technique). We execute the algorithm described in Theorem 15, but if we obtain a candidate minimum base in which the local identifiers induced by the clocks are not all equal, we increase the guess m and consequently the number of clock values to K·m². If we stabilize to a minimum base in which all local identifiers induced by the clocks are the same, it is certainly the minimum base of the network (since the clocks play no rôle); moreover, the clocks must be synchronized.
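A small numerical sketch (Python; names ours) of the slow-growing function α and of the guess schedule m ← m^m used in the proof of Theorem 15: very few guess updates are ever needed.

def f(x):
    """Inverse of x -> x**x, rounded down: the largest y with y**y <= x."""
    y = 1
    while (y + 1) ** (y + 1) <= x:
        y += 1
    return y

def alpha(x):
    """The function of Section 9: how many times the guess must be raised by
    m -> m**m (equivalently, how many times f must be applied to bring x down
    to at most 2)."""
    return 0 if x <= 2 else alpha(f(x)) + 1

def guesses(m):
    """The sequence of guesses a processor goes through, starting from 2,
    until the guess reaches m."""
    g, out = 2, [2]
    while g < m:
        g = g ** g
        out.append(g)
    return out

print(alpha(200), guesses(200))    # 2 [2, 4, 256]: two updates already suffice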

10 Self-stabilization: the asynchronous case

In this section we briefly sketch the ideas behind our results in the asynchronous case. At every step, any set of enabled processors can now be activated at the same time. Thus, a computation is given by an initial state and by a sequence A_0, A_1, ... of sets of activated processors (which of course must be enabled). By convention, the state of the network at time t is the state just before the processors in A_t are activated; thus, the state at time 0 is the initial state of the network. We denote by #_i(t) the number of times processor i has been activated before time t, i.e., #_i(t) = |{t′ : i ∈ A_{t′}, t′ < t}|.

Theorem 16. Let C be a class of asynchronous networks, and P a uniform predicate on X. Then there is a program which self-stabilizes to P in O(Δn²) steps on every network of C with n nodes and diameter Δ.

We associate a synchronizing catch-up clock C_i to each processor, and we stipulate that a processor is not enabled unless C_j ≥ C_i for all its in-neighbours j. After an activation, we set C_i ← max_{j → i} C_j + 1. The only property of the clock we shall use is the following one:

Lemma 17. At every time t, #_j(t) ≥ #_i(t) - 1 for all in-neighbours j of i. Moreover, if i is enabled at time t then #_j(t) ≥ #_i(t).

Proof. Note that between two consecutive activations of the same processor, all its in-neighbours must be activated at least once. This implies the first part of the statement. For the second part, consider an activation with A_t = {i}; we have #_i(t) = #_i(t + 1) - 1 ≤ #_j(t + 1) = #_j(t).

As in the synchronous case, the correctness level of each processor increases with time; however, the exact number of correct levels now depends on the number of activations: letting c_i(t) = max{k : T_i(t)↾k = G̃_i↾k} and c(t) = min_i c_i(t), we have

Lemma 18. c_i(t) ≥ c(0) + #_i(t).

Proof. By induction on t. The base case is trivial. If i ∉ A_t, the claim is true by induction; otherwise, using Lemma 17 we have

c_i(t + 1) ≥ min_{j → i, j ≠ i} c_j(t) + 1 ≥ min_{j → i, j ≠ i} (c(0) + #_j(t)) + 1 ≥ c(0) + #_i(t) + 1 = c(0) + #_i(t + 1).

This proves that the number of correct levels ultimately increases. In order to prove the convergence of our algorithm, we introduce the net correctness level c̄_i(t) = c_i(t) - #_i(t); correspondingly, we have the minimized version c̄(t) = min_i c̄_i(t). Note that c̄(0) = c(0). Finally, we say that a processor i is perfect at time t iff h(T_i(t)) = c(0) + #_i(t).
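The catch-up clock can be exercised on its own; the sketch below (Python; names ours) runs a simple daemon, always picking the enabled processor with the smallest identifier, on a directed 3-cycle with corrupted initial clocks, and shows that the enabling rule quickly forces a round-robin schedule, which is exactly what Lemma 17 exploits.

def enabled(i, clocks, in_neighbours):
    """A processor is enabled only when no in-neighbour's clock is behind its own."""
    return all(clocks[j] >= clocks[i] for j in in_neighbours[i])

def activate(i, clocks, in_neighbours):
    """Effect of an activation on the clock: jump past every in-neighbour."""
    clocks[i] = max(clocks[j] for j in in_neighbours[i]) + 1

in_neighbours = {0: [2], 1: [0], 2: [1]}          # the directed cycle 0 -> 1 -> 2 -> 0
clocks = {0: 7, 1: 0, 2: 3}                       # arbitrary (corrupted) initial values
schedule = []
for _ in range(9):
    i = min(v for v in clocks if enabled(v, clocks, in_neighbours))
    activate(i, clocks, in_neighbours)
    schedule.append(i)
print(schedule)
# [1, 2, 0, 1, 2, 0, 1, 2, 0]: between two activations of a node, each of its
# in-neighbours runs at least once, as Lemma 17 requires.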

Lemma 19. The following properties hold:
1. c̄(t) is a nondecreasing function.
2. If c̄_i(t) ≤ c̄_j(t) for all in-neighbours j of i, and i is enabled at time t, then also c_i(t) ≤ c_j(t).
3. If c̄_i(t) = c̄(t) and i ∉ A_t then c̄_i(t + 1) = c̄(t + 1).
4. c̄(t) is a constant function.
5. If c̄_i(t) = c̄(t) and i ∈ A_t then i is perfect at time t + 1.

Proof. (1) If i ∉ A_t then c̄_i(t + 1) = c̄_i(t) ≥ c̄(t). If instead i ∈ A_t, using Lemma 17 we have

c̄_i(t + 1) = c_i(t + 1) - #_i(t + 1)
           = c_i(t + 1) - #_i(t) - 1
           ≥ min_{j → i, j ≠ i} (c_j(t) + 1) - #_i(t) - 1
           = min_{j → i, j ≠ i} (c̄_j(t) + #_j(t)) - #_i(t)
           ≥ min_{j → i, j ≠ i} (c̄_j(t) + #_i(t)) - #_i(t)
           = min_{j → i, j ≠ i} c̄_j(t) ≥ c̄(t).

Thus, for all i we obtain c̄_i(t + 1) ≥ c̄(t).

(2) In this case, Lemma 17 implies c_i(t) = c̄_i(t) + #_i(t) ≤ c̄_j(t) + #_j(t) = c_j(t).

(3) If i ∉ A_t, then c̄(t + 1) ≤ c̄_i(t + 1) = c̄_i(t) = c̄(t) ≤ c̄(t + 1).

(4) By (3), c̄(t) could increase only if a processor minimizing c̄_i(t) is activated. However, in this case by (2) we have c̄_i(t + 1) = c_i(t + 1) - #_i(t + 1) = c_i(t) + 1 - #_i(t) - 1 = c̄(t).

(5) Just note that by (2) h(T_i(t + 1)) = c_i(t) + 1 = c̄_i(t) + #_i(t + 1) = c(0) + #_i(t + 1).
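The bookkeeping of Lemmas 18 and 19 can be checked mechanically on small instances. The sketch below (Python; names ours) abstracts the algorithm to the quantities that matter, under the assumption that an activation raises the correctness level to one more than the least correct in-neighbour, and asserts that c_i(t) ≥ c(0) + #_i(t) and that the net level c̄(t) never decreases along an arbitrary schedule respecting the clocks.

import random

def check_invariants(in_neighbours, c0, steps=200, seed=0):
    """Abstract simulation of the asynchronous algorithm's accounting:
    c[i]   correctness level (activation: min over in-neighbours + 1),
    k[i]   number of activations #_i,
    clk[i] catch-up clock (a processor is enabled only if no in-neighbour lags)."""
    rnd = random.Random(seed)
    nodes = list(in_neighbours)
    c, k, clk = dict(c0), {i: 0 for i in nodes}, {i: 0 for i in nodes}
    c_min0 = min(c0.values())
    net = lambda: min(c[i] - k[i] for i in nodes)
    last_net = net()
    for _ in range(steps):
        ready = [i for i in nodes
                 if all(clk[j] >= clk[i] for j in in_neighbours[i])]
        i = rnd.choice(ready)                     # an arbitrary enabled processor
        c[i] = min(c[j] for j in in_neighbours[i]) + 1
        clk[i] = max(clk[j] for j in in_neighbours[i]) + 1
        k[i] += 1
        assert c[i] >= c_min0 + k[i]              # Lemma 18
        assert net() >= last_net                  # Lemma 19, property (1)
        last_net = net()
    return c, k

check_invariants({0: [2], 1: [0], 2: [1]}, {0: 5, 1: 0, 2: 2})
print("invariants hold")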

Lemma 20. A perfect processor has a correct tree; moreover, perfection is stable.

Proof. By Lemma 18, c(0) + #_i(t) = h(T_i(t)) ≥ c_i(t) ≥ c(0) + #_i(t), so that h(T_i(t)) = c_i(t), i.e., the tree of a perfect processor is correct up to its full height. Finally, note that in the equation h(T_i(t)) = c(0) + #_i(t), whenever i is activated both the left-hand side (by Lemma 18) and the right-hand side grow by one; if i is not activated, both sides keep their value.

The correctness and convergence proof now follows by noting that a processor minimizing c̄_i(t) retains this property until activated, and then becomes perfect. Moreover, under any scheduling the following statement must hold:

Lemma 21. After (k + Δ)n steps, all processors have been activated at least k times.

Thus, in (Δ + 1)n steps all processors have been activated, and so in (Δ + 1)n² steps all processors have been activated in every possible order. This implies that they are all perfect and, since P is uniform, the knowledge of the minimum base is sufficient for stabilization to P (as in Section 9). We conjecture that the algorithm actually has quiescence time O(n²); in the full paper an Ω(n²) lower bound will be proved by considering predicates in the future fragment of temporal logic.

11 Conclusions

We have exhibited a series of (preliminary) results about a theory of universal self-stabilizing algorithms. Our aim is to factor out of a self-stabilization problem its coordination part, showing that it can always be reduced to a single algorithm (much like a universal Turing machine factors out all the computational power of recursive functions). The results are not complete, and we would like to highlight the most important open problems, and to sketch some additional results which we have omitted for lack of space.

The asynchronous algorithm we described in Section 10 can self-stabilize to uniform predicates, but it is easy to see that there are more predicates to which an asynchronous network can self-stabilize. We conjecture that our algorithm is universal, but we should first characterize the asynchronously computable predicates. The algorithm can of course be made finite state with the same techniques of Section 9, and it becomes universal for predicates (but the characterization problem arises again when behaviours depending on time are considered).

We remark that the whole theory can be applied to interleaved networks, in which a central daemon chooses a single processor to be activated among the enabled ones. Essentially, one just considers only discrete fibrations in the characterization theorems (a fibration is discrete iff its fibres have singletons as strongly connected components). The self-stabilizing algorithms are then extended following the ideas of [4].

References

1. Yehuda Afek, Shay Kutten, and Moti Yung. Memory-efficient self-stabilizing protocols for general networks. In Proc. of the International Workshop on Distributed Algorithms, number 486 in LNCS. Springer-Verlag.
2. Dana Angluin. Global and local properties in networks of processors. In Proc. 12th Symposium on the Theory of Computing, pages 82-93.
3. Baruch Awerbuch, Boaz Patt-Shamir, and George Varghese. Self-stabilization by local checking and correction. In Proc. 32nd Symposium on Foundations of Computer Science.
4. Paolo Boldi, Bruno Codenotti, Peter Gemmell, Shella Shammah, Janos Simon, and Sebastiano Vigna. Symmetry breaking in anonymous networks: Characterizations. In Proc. 4th Israeli Symposium on Theory of Computing and Systems. IEEE Press.
5. Paolo Boldi and Sebastiano Vigna. Graph fibrations. Preprint.
6. Paolo Boldi and Sebastiano Vigna. Computing vector functions on anonymous networks. In Structure, Information and Communication Complexity, Proc. 4th Colloquium SIROCCO '97, International Informatics Series. Carleton University Press. To appear. An abstract appeared also as a Brief Announcement in Proc. PODC.
7. James E. Burns, Mohamed G. Gouda, and Raymond E. Miller. Stabilization and pseudo-stabilization. Distributed Computing, 7:35-42.
8. E. W. Dijkstra. Self-stabilizing systems in spite of distributed control. CACM, 17(11).
9. Shmuel Katz and Kenneth J. Perry. Self-stabilizing extensions for message-passing systems. Distributed Computing, 7:17-26.
10. Nancy Norris. Universal covers of graphs: Isomorphism to depth n-1 implies isomorphism to all depths. Discrete Applied Mathematics, 56:61-74.
11. Masafumi Yamashita and Tiko Kameda. Computing on anonymous networks. In Proc. of the 4th PODC, pages 13-22.
12. Masafumi Yamashita and Tiko Kameda. Computing functions on asynchronous anonymous networks. Math. Systems Theory, 29.


Expressing Security Properties Using Selective Interleaving Functions Expressing Security Properties Using Selective Interleaving Functions Joseph Halpern and Sabina Petride August 8, 2008 Abstract McLean s notion of Selective Interleaving Functions (SIFs) is perhaps the

More information

Chordal Coxeter Groups

Chordal Coxeter Groups arxiv:math/0607301v1 [math.gr] 12 Jul 2006 Chordal Coxeter Groups John Ratcliffe and Steven Tschantz Mathematics Department, Vanderbilt University, Nashville TN 37240, USA Abstract: A solution of the isomorphism

More information

Distributed Optimization. Song Chong EE, KAIST

Distributed Optimization. Song Chong EE, KAIST Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links

More information

An Optimal Lower Bound for Nonregular Languages

An Optimal Lower Bound for Nonregular Languages An Optimal Lower Bound for Nonregular Languages Alberto Bertoni Carlo Mereghetti Giovanni Pighizzini Dipartimento di Scienze dell Informazione Università degli Studi di Milano via Comelico, 39 2035 Milano

More information

Online Learning, Mistake Bounds, Perceptron Algorithm

Online Learning, Mistake Bounds, Perceptron Algorithm Online Learning, Mistake Bounds, Perceptron Algorithm 1 Online Learning So far the focus of the course has been on batch learning, where algorithms are presented with a sample of training data, from which

More information

Utilising public information in Network Coding

Utilising public information in Network Coding 1 Utilising public information in Network Coding Søren Riis Queen Mary, University of London (Technical report June 2005) Abstract We show that an information network flow problem N in which n messages

More information

Section 6 Fault-Tolerant Consensus

Section 6 Fault-Tolerant Consensus Section 6 Fault-Tolerant Consensus CS586 - Panagiota Fatourou 1 Description of the Problem Consensus Each process starts with an individual input from a particular value set V. Processes may fail by crashing.

More information

3.1 Asymptotic notation

3.1 Asymptotic notation 3.1 Asymptotic notation The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2,... Such

More information

Tuples of Disjoint NP-Sets

Tuples of Disjoint NP-Sets Tuples of Disjoint NP-Sets (Extended Abstract) Olaf Beyersdorff Institut für Informatik, Humboldt-Universität zu Berlin, 10099 Berlin, Germany beyersdo@informatik.hu-berlin.de Abstract. Disjoint NP-pairs

More information

Lecture Notes on Inductive Definitions

Lecture Notes on Inductive Definitions Lecture Notes on Inductive Definitions 15-312: Foundations of Programming Languages Frank Pfenning Lecture 2 September 2, 2004 These supplementary notes review the notion of an inductive definition and

More information

Course : Algebraic Combinatorics

Course : Algebraic Combinatorics Course 18.312: Algebraic Combinatorics Lecture Notes #29-31 Addendum by Gregg Musiker April 24th - 29th, 2009 The following material can be found in several sources including Sections 14.9 14.13 of Algebraic

More information

The priority promotion approach to parity games

The priority promotion approach to parity games The priority promotion approach to parity games Massimo Benerecetti 1, Daniele Dell Erba 1, and Fabio Mogavero 2 1 Università degli Studi di Napoli Federico II 2 Università degli Studi di Verona Abstract.

More information

6.852: Distributed Algorithms Fall, Class 10

6.852: Distributed Algorithms Fall, Class 10 6.852: Distributed Algorithms Fall, 2009 Class 10 Today s plan Simulating synchronous algorithms in asynchronous networks Synchronizers Lower bound for global synchronization Reading: Chapter 16 Next:

More information

Vertex opposition in spherical buildings

Vertex opposition in spherical buildings Vertex opposition in spherical buildings Anna Kasikova and Hendrik Van Maldeghem Abstract We study to which extent all pairs of opposite vertices of self-opposite type determine a given building. We provide

More information

Ma/CS 117c Handout # 5 P vs. NP

Ma/CS 117c Handout # 5 P vs. NP Ma/CS 117c Handout # 5 P vs. NP We consider the possible relationships among the classes P, NP, and co-np. First we consider properties of the class of NP-complete problems, as opposed to those which are

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

NP Completeness and Approximation Algorithms

NP Completeness and Approximation Algorithms Chapter 10 NP Completeness and Approximation Algorithms Let C() be a class of problems defined by some property. We are interested in characterizing the hardest problems in the class, so that if we can

More information

8 (0,1,3,0,1) (3,0,1,3,0) 5 9 (1,3,0,1,3) (3,0,1,3,0) 6 (1,3,0,1,3) (3,0,1,3,0) 13 (3,0,1,3,0) leader

8 (0,1,3,0,1) (3,0,1,3,0) 5 9 (1,3,0,1,3) (3,0,1,3,0) 6 (1,3,0,1,3) (3,0,1,3,0) 13 (3,0,1,3,0) leader Deterministic, silence and self-stabilizing leader election algorithm on id-based ring Colette Johnen L.R.I./C.N.R.S., Universite de Paris-Sud Bat. 490, Campus d'orsay F-91405 Orsay Cedex, France. phone

More information

Finding k disjoint paths in a directed planar graph

Finding k disjoint paths in a directed planar graph Finding k disjoint paths in a directed planar graph Alexander Schrijver CWI Kruislaan 413 1098 SJ Amsterdam The Netherlands and Department of Mathematics University of Amsterdam Plantage Muidergracht 24

More information

Time Optimal Asynchronous Self-stabilizing Spanning Tree

Time Optimal Asynchronous Self-stabilizing Spanning Tree Time Optimal Asynchronous Self-stabilizing Spanning Tree Janna Burman and Shay Kutten Dept. of Industrial Engineering & Management Technion, Haifa 32000, Israel. bjanna@tx.technion.ac.il, kutten@ie.technion.ac.il

More information

T -choosability in graphs

T -choosability in graphs T -choosability in graphs Noga Alon 1 Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, Israel. and Ayal Zaks 2 Department of Statistics and

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Perfect matchings in highly cyclically connected regular graphs

Perfect matchings in highly cyclically connected regular graphs Perfect matchings in highly cyclically connected regular graphs arxiv:1709.08891v1 [math.co] 6 Sep 017 Robert Lukot ka Comenius University, Bratislava lukotka@dcs.fmph.uniba.sk Edita Rollová University

More information

Graph coloring, perfect graphs

Graph coloring, perfect graphs Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive

More information

THE STRUCTURE OF 3-CONNECTED MATROIDS OF PATH WIDTH THREE

THE STRUCTURE OF 3-CONNECTED MATROIDS OF PATH WIDTH THREE THE STRUCTURE OF 3-CONNECTED MATROIDS OF PATH WIDTH THREE RHIANNON HALL, JAMES OXLEY, AND CHARLES SEMPLE Abstract. A 3-connected matroid M is sequential or has path width 3 if its ground set E(M) has a

More information

Unmixed Graphs that are Domains

Unmixed Graphs that are Domains Unmixed Graphs that are Domains Bruno Benedetti Institut für Mathematik, MA 6-2 TU Berlin, Germany benedetti@math.tu-berlin.de Matteo Varbaro Dipartimento di Matematica Univ. degli Studi di Genova, Italy

More information

1 Introduction We adopt the terminology of [1]. Let D be a digraph, consisting of a set V (D) of vertices and a set E(D) V (D) V (D) of edges. For a n

1 Introduction We adopt the terminology of [1]. Let D be a digraph, consisting of a set V (D) of vertices and a set E(D) V (D) V (D) of edges. For a n HIGHLY ARC-TRANSITIVE DIGRAPHS WITH NO HOMOMORPHISM ONTO Z Aleksander Malnic 1 Dragan Marusic 1 IMFM, Oddelek za matematiko IMFM, Oddelek za matematiko Univerza v Ljubljani Univerza v Ljubljani Jadranska

More information

The complexity of recursive constraint satisfaction problems.

The complexity of recursive constraint satisfaction problems. The complexity of recursive constraint satisfaction problems. Victor W. Marek Department of Computer Science University of Kentucky Lexington, KY 40506, USA marek@cs.uky.edu Jeffrey B. Remmel Department

More information

On improving matchings in trees, via bounded-length augmentations 1

On improving matchings in trees, via bounded-length augmentations 1 On improving matchings in trees, via bounded-length augmentations 1 Julien Bensmail a, Valentin Garnero a, Nicolas Nisse a a Université Côte d Azur, CNRS, Inria, I3S, France Abstract Due to a classical

More information

Mediated Population Protocols

Mediated Population Protocols Othon Michail Paul Spirakis Ioannis Chatzigiannakis Research Academic Computer Technology Institute (RACTI) July 2009 Michail, Spirakis, Chatzigiannakis 1 / 25 Outline I Population Protocols 1 Population

More information

The Multi-Agent Rendezvous Problem - The Asynchronous Case

The Multi-Agent Rendezvous Problem - The Asynchronous Case 43rd IEEE Conference on Decision and Control December 14-17, 2004 Atlantis, Paradise Island, Bahamas WeB03.3 The Multi-Agent Rendezvous Problem - The Asynchronous Case J. Lin and A.S. Morse Yale University

More information

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations Page 1 Definitions Tuesday, May 8, 2018 12:23 AM Notations " " means "equals, by definition" the set of all real numbers the set of integers Denote a function from a set to a set by Denote the image of

More information

THE DYNAMICS OF SUCCESSIVE DIFFERENCES OVER Z AND R

THE DYNAMICS OF SUCCESSIVE DIFFERENCES OVER Z AND R THE DYNAMICS OF SUCCESSIVE DIFFERENCES OVER Z AND R YIDA GAO, MATT REDMOND, ZACH STEWARD Abstract. The n-value game is a dynamical system defined by a method of iterated differences. In this paper, we

More information

Löwenheim-Skolem Theorems, Countable Approximations, and L ω. David W. Kueker (Lecture Notes, Fall 2007)

Löwenheim-Skolem Theorems, Countable Approximations, and L ω. David W. Kueker (Lecture Notes, Fall 2007) Löwenheim-Skolem Theorems, Countable Approximations, and L ω 0. Introduction David W. Kueker (Lecture Notes, Fall 2007) In its simplest form the Löwenheim-Skolem Theorem for L ω1 ω states that if σ L ω1

More information

Cone Avoidance of Some Turing Degrees

Cone Avoidance of Some Turing Degrees Journal of Mathematics Research; Vol. 9, No. 4; August 2017 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Cone Avoidance of Some Turing Degrees Patrizio Cintioli

More information

On non-hamiltonian circulant digraphs of outdegree three

On non-hamiltonian circulant digraphs of outdegree three On non-hamiltonian circulant digraphs of outdegree three Stephen C. Locke DEPARTMENT OF MATHEMATICAL SCIENCES, FLORIDA ATLANTIC UNIVERSITY, BOCA RATON, FL 33431 Dave Witte DEPARTMENT OF MATHEMATICS, OKLAHOMA

More information

The Weakest Failure Detector to Solve Mutual Exclusion

The Weakest Failure Detector to Solve Mutual Exclusion The Weakest Failure Detector to Solve Mutual Exclusion Vibhor Bhatt Nicholas Christman Prasad Jayanti Dartmouth College, Hanover, NH Dartmouth Computer Science Technical Report TR2008-618 April 17, 2008

More information

AN ALGORITHM FOR CONSTRUCTING A k-tree FOR A k-connected MATROID

AN ALGORITHM FOR CONSTRUCTING A k-tree FOR A k-connected MATROID AN ALGORITHM FOR CONSTRUCTING A k-tree FOR A k-connected MATROID NICK BRETTELL AND CHARLES SEMPLE Dedicated to James Oxley on the occasion of his 60th birthday Abstract. For a k-connected matroid M, Clark

More information

The Lefthanded Local Lemma characterizes chordal dependency graphs

The Lefthanded Local Lemma characterizes chordal dependency graphs The Lefthanded Local Lemma characterizes chordal dependency graphs Wesley Pegden March 30, 2012 Abstract Shearer gave a general theorem characterizing the family L of dependency graphs labeled with probabilities

More information

Lecture 4 Chiu Yuen Koo Nikolai Yakovenko. 1 Summary. 2 Hybrid Encryption. CMSC 858K Advanced Topics in Cryptography February 5, 2004

Lecture 4 Chiu Yuen Koo Nikolai Yakovenko. 1 Summary. 2 Hybrid Encryption. CMSC 858K Advanced Topics in Cryptography February 5, 2004 CMSC 858K Advanced Topics in Cryptography February 5, 2004 Lecturer: Jonathan Katz Lecture 4 Scribe(s): Chiu Yuen Koo Nikolai Yakovenko Jeffrey Blank 1 Summary The focus of this lecture is efficient public-key

More information

1 Recap: Interactive Proofs

1 Recap: Interactive Proofs Theoretical Foundations of Cryptography Lecture 16 Georgia Tech, Spring 2010 Zero-Knowledge Proofs 1 Recap: Interactive Proofs Instructor: Chris Peikert Scribe: Alessio Guerrieri Definition 1.1. An interactive

More information

CS505: Distributed Systems

CS505: Distributed Systems Cristina Nita-Rotaru CS505: Distributed Systems. Required reading for this topic } Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson for "Impossibility of Distributed with One Faulty Process,

More information

Appendix A. Formal Proofs

Appendix A. Formal Proofs Distributed Reasoning for Multiagent Simple Temporal Problems Appendix A Formal Proofs Throughout this paper, we provided proof sketches to convey the gist of the proof when presenting the full proof would

More information

5 Set Operations, Functions, and Counting

5 Set Operations, Functions, and Counting 5 Set Operations, Functions, and Counting Let N denote the positive integers, N 0 := N {0} be the non-negative integers and Z = N 0 ( N) the positive and negative integers including 0, Q the rational numbers,

More information