A Superstabilizing log(n)-approximation Algorithm for Dynamic Steiner Trees


A Superstabilizing log(n)-approximation Algorithm for Dynamic Steiner Trees

Lélia Blin c,b, Maria Potop-Butucaru a,b, Stéphane Rovedakis d
a Univ. Pierre & Marie Curie - Paris 6, b LIP6-CNRS UMR 7606, France, c Univ. Évry Val d'Essonne, Bd François Mitterrand, Évry, France, d Laboratoire CEDRIC, CNAM, 292 Rue St Martin, Paris, France

A preliminary version of this work was published in the proceedings of SSS 2009 [1]. Corresponding author; principal corresponding author. Email addresses: lelia.blin@lip6.fr (Lélia Blin), maria.gradinariu@lip6.fr (Maria Potop-Butucaru), stephane.rovedakis@cnam.fr (Stéphane Rovedakis). Preprint submitted to Elsevier, June 7, 2013.

Abstract

This paper proposes a fully dynamic self-stabilizing algorithm for the dynamic Steiner tree problem. The Steiner tree problem aims at constructing a minimum-weight tree spanning a given subset of nodes, called Steiner members or Steiner group, usually denoted S. Steiner trees are good candidates to efficiently implement communication primitives such as publish/subscribe or multicast, essential building blocks in the design of middleware architectures for the new emergent networks (e.g., P2P, sensor or ad hoc networks). Our algorithm returns a log(|S|)-approximation of the optimal Steiner tree. It improves over existing solutions in several ways. First, it is fully dynamic; in other words, it withstands dynamism where both the group members and ordinary nodes can join or leave the network. Next, our algorithm is self-stabilizing, that is, it copes with nodes' memory corruption. Last but not least, our algorithm is superstabilizing: while converging to a correct configuration (i.e., a Steiner tree) after a modification of the network, it keeps offering the Steiner tree service during the stabilization time to all members that have not been affected by this modification.

Keywords: Self-stabilization, dynamic Steiner tree, multicast, message passing networks

1. Introduction

The design of efficient distributed applications in the new emergent distributed networks such as MANETs, P2P or sensor networks raises various challenges ranging from models to fundamental services. These networks face frequent churn (nodes and links creation or destruction) and various privacy and

2 security attacks that cannot be easily encapsulated in the existing distributed models. Therefore, new models and new algorithms have to be designed. Communication services are the building blocks for any distributed system and they have received a particular attention in the lately years. Their efficiency greatly depends on the performance of the underlying routing overlay. These overlays should be optimized to reduce the network overload. Moreover, in order to avoid security and privacy attacks the number of network nodes that are used only for the overlay connectivity have to be minimized. Additionally, the overlays have to offer some quality of services while nodes or links fail. The work in designing optimized communication overlays for the new emergent networks has been conducted in both structured (DHT-based) and unstructured networks. Communication primitives using DHT-based schemes such as Pastry, CAN or Chord [2] build upon a global naming scheme based on hashing nodes identifiers. These schemes are optimized to efficiently route in the virtual name space. However, they have weak energy performances in MANETs or sensor networks where the maintenance of long links reduces the network perennial. Therefore, alternative strategies [3], mostly based on gossip techniques, have been recently considered. These schemes, highly efficient when nodes have no information on the content and the topology of the system, offer only probabilistic guarantees on the message delivery. In this paper we are interested in the study of overlays targeted to efficiently connect a group of nodes that are not necessarily located in the same geographical area (e.g., sensors that should communicate their sensed data to servers located outside the deployment area, P2P nodes that share the same interest and are located in different countries, robots that should participate to the same task but need to remotely coordinate). Steiner trees are good candidates to implement the above mentioned requirements since the problem have been designed for efficiently connect a subset of the network nodes, referred as Steiner members. The Steiner tree problem. The Steiner tree problem can be informally expressed as follows: given a weighted graph in which a subset S of nodes is identified, find a minimum-weight tree spanning S. The Steiner tree problem is one of the most important combinatorial optimization problems and finding a Steiner tree is NP-hard. A survey on different heuristics for constructing Steiner trees with different approximation ratios can be found in [4]. In our work we are interested in dynamic variants of Steiner trees first addressed in [5] in a centralized online setting. They propose a log S -approximation algorithm for this problem that copes only with Steiner member arrivals. At first step, a member becomes the root of the tree then at each step a new member is connected to the existing Steiner tree by a shortest path. This algorithm can be implemented in a decentralized environment (see [6]). Our work considers the fully dynamic version of the problem where both Steiner members and ordinary nodes or communication links can join or leave the system. Additionally, our work aims at providing a superstabilizing ap- 2

proximation of a Steiner tree. The property of self-stabilization [7, 8] enables a distributed algorithm to recover from a transient fault regardless of its initial state. Superstabilization [9] is an extension of the self-stabilization property to dynamic settings; the idea is to provide some minimal guarantees while the system repairs after a topology change. To our knowledge there are only two self-stabilizing approximations of Steiner trees [10, 11]. Both works assume the shared memory model and an unfair centralized scheduler. In [10] the authors propose a self-stabilizing algorithm based on a pruned minimum spanning tree. The computed solution has an approximation ratio of |V| − |S| + 1, where V is the set of nodes in the network. In [11], the authors propose a four-layered algorithm built upon the techniques proposed in [12] to obtain a 2-approximation. The above cited algorithms are designed to work only for static networks. In [11] the members can become ordinary nodes and ordinary nodes can become members, but the network does not change. These algorithms could be used in dynamic networks; however, to guarantee a 2-approximation after a topology change the tree must be totally reconstructed in most cases, because of the computation of a minimum spanning tree of the network. Therefore, each topology change induces an extra computation cost in the network to maintain a 2-approximated Steiner tree.

Our results. We describe the first distributed superstabilizing algorithm for the Steiner tree problem. This algorithm has the following novel properties with respect to the previous constructions. First, it is specially designed to cope with the system dynamism: our solution tolerates nodes (or links) joining and leaving the system, while using O(δ|S| log n) memory bits, with δ the maximal degree of the network and n = |V| the network size (or O(|S| log n) in the classical message passing model, i.e., by considering only the memory size for local variables) (see footnote 1). Second, its design includes self-stabilizing policies: starting from an arbitrary state (nodes' local memory corruption, program counter corruption, or erroneous messages in the network buffers), our algorithm is guaranteed to converge to a tree spanning the Steiner members whose weight is at most log(|S|) times the weight of an optimal solution. Additionally, it is superstabilizing: when a topology change occurs, i.e., during the restabilization period, the algorithm offers the guarantee that only the subtree connected through the crashed node/edge is reconstructed. Moreover, a log(|S|)-approximated Steiner tree is preserved when a topology change occurs in a legitimate configuration.

1 To solve the problem, one could use a self-stabilizing reset algorithm together with a centralized algorithm computing a Steiner tree on each node, but this requires at least O(n log n) memory bits because the map of the network has to be stored on each node.

Algorithm | Dynamicity | Superstabilizing | Self-Stabilizing | Approximation
[13] | No | No | No | 2
[10] | No | No | Yes | |V| − |S| + 1
[11] | No | No | Yes | 2
This paper | Yes | Yes | Yes | log(|S|)

Table 1: Distributed (deterministic) algorithms for the Steiner tree problem.

Table 1 summarizes our contribution compared to previous works. The approximation ratio of our algorithm is logarithmic, which is not as good as the 2-approximation of the algorithm proposed by Kamei and Kakugawa in [11]. However, this latter algorithm is not superstabilizing. Designing a superstabilizing 2-approximation algorithm for the Steiner tree problem is a challenge. Indeed, all known 2-approximation distributed algorithms (self-stabilizing or not) for the Steiner tree problem use a minimum spanning tree (MST) computation, and the design of a superstabilizing algorithm for MST is a challenge in itself.

The paper is organized as follows. The next two sections introduce the model and the description of the approach proposed by Imase and Waxman for the dynamic Steiner tree problem. Section 4 presents the detailed description of our algorithm, and Section 5 proves the correctness of our algorithm and its superstabilizing ability. The last section of the paper summarizes the main results and outlines some open problems.

2. Model and notations

We consider an undirected weighted connected network G = (V, E, w) where V is the set of nodes, E is the set of edges and w : E → R+ is a positive cost function. Nodes represent processors and edges represent bidirectional communication links. Each node v in the network has a unique identifier, noted ID_v. S ⊆ V defines the set of members we have to connect. For any pair of nodes u, v ∈ V, we note d(u, v) the distance of the shortest path P(u, v) between u and v in G (i.e., d(u, v) = Σ_{e ∈ P(u,v)} w(e)). For a node v ∈ V, we denote the set of its neighbors by N(v) = {u : (u, v) ∈ E}. A Steiner tree T in G is a connected acyclic sub-graph of G such that T = (V_T, E_T), S ⊆ V_T ⊆ V and E_T ⊆ E. We denote by W(T) the cost of a tree T, i.e., W(T) = Σ_{e ∈ E_T} w(e). We consider an asynchronous message passing communication model with FIFO channels (on each link messages are delivered in the same order as they have been sent). A local state of a node is the value of the local variables of the node and the state of its program counter. We consider a fine-grained communication atomicity model [14, 8]. That is, each node maintains a local copy of the variables of its neighbors. These variables are refreshed via special messages (denoted in the sequel InfoMsg) exchanged periodically by neighboring nodes. A configuration of the system is the cross product of the local states of all nodes in the system

plus the content of the communication links. The transition from a configuration to the next one is produced by the execution of an atomic step at a node. An atomic step at node p is an internal computation based on the current value of p's local variables and a single communication operation (send/receive) at p. We assume a distributed weakly fair daemon. A computation of the system is defined as a weakly fair, maximal sequence of configurations, e = (c_0, c_1, ..., c_i, ...), where each configuration c_{i+1} follows from c_i by the execution of a single action of at least one node. During an execution step, one or more processors execute an action and a processor may take at most one action. Weak fairness of the sequence means that if an action is continuously enabled along the sequence, it is eventually chosen for execution. Maximality means that the sequence is either infinite, or it is finite and no action is enabled in the final global state. To compute the time complexity, we use the definition of round. This definition captures the execution rate of the slowest processor in any computation. Given a computation e, the first round of e (let us call it e') is the minimal prefix of e containing the execution of one action (an action of the protocol or a disabling action) of every enabled processor from the initial configuration. Let e'' be the suffix of e such that e = e'e''. The second round of e is the first round of e'', and so on.

Definition 1 (self-stabilization). Given a non-empty legitimacy predicate L_A (see footnote 2), an algorithm A is self-stabilizing iff the following two conditions hold: (i) every computation of A starting from a configuration satisfying L_A preserves L_A (closure); (ii) every computation of A starting from an arbitrary configuration contains a configuration that satisfies L_A (convergence).

A legitimate configuration for the Steiner tree is a configuration that provides an instance of a tree T spanning S. Additionally, we expect a competitiveness of log(z), i.e., W(T) ≤ log(z) · W(T*), with |S| = z and T* an optimal Steiner tree. In the following we propose a self-stabilizing Steiner tree algorithm. We expect our algorithm to be also superstabilizing [9]. That is, given a class of topology changes Λ and a passage predicate, an algorithm is superstabilizing with respect to Λ iff it is self-stabilizing, and for every computation e (see footnote 3) beginning at a legitimate configuration and containing a single topology change event of type Λ, the passage predicate holds for every configuration in e. In the following we propose a self-stabilizing Steiner tree algorithm and extend it to a superstabilizing Steiner tree algorithm that copes with the addition/removal of a steiner member and the removal of a tree node/edge. During the tree restabilization the algorithm verifies a passage predicate detailed below.

2 A legitimacy predicate is defined over the configurations of a system and is an indicator of its correct behavior.
3 [9] uses the notion of trajectory, which is the computation of a system enriched with dynamic actions.
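The notation of Section 2 (the cost function w, the shortest-path distance d(u, v), and the tree cost W(T) used in the legitimacy condition above) can be illustrated by a small helper module. The following Python sketch is ours and not part of the paper; the names make_graph, dist and tree_weight are our own.

import heapq

def make_graph(edges):
    # edges: iterable of (u, v, w) triples with w > 0; returns an adjacency dict
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    return adj

def dist(adj, src):
    # single-source shortest-path distances d(src, .) computed by Dijkstra
    d = {src: 0}
    heap = [(0, src)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > d.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if du + w < d.get(v, float("inf")):
                d[v] = du + w
                heapq.heappush(heap, (du + w, v))
    return d

def tree_weight(adj, tree_edges):
    # W(T): sum of the weights of the edges of the tree T
    return sum(adj[u][v] for (u, v) in tree_edges)

G = make_graph([("r", "a", 2), ("a", "b", 1), ("r", "b", 4)])
print(dist(G, "r"))                                  # {'r': 0, 'a': 2, 'b': 3}
print(tree_weight(G, [("r", "a"), ("a", "b")]))      # W(T) = 3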

3. A centralized on-line Steiner tree algorithm

As stated in the Introduction, several centralized approximation algorithms have been proposed for the Steiner tree problem. The best approximation ratio reached is a constant multiplicative approximation ratio of two, achieved in [15, 16]. These algorithms make the hypothesis that the set of steiner members is fixed and given as an input. However, this hypothesis cannot always be assumed, especially when the steiner members constitute a multicast group. Indeed, steiner members can dynamically join and leave the multicast group. A new Steiner tree must be computed after each modification of the multicast group to guarantee a good approximation; but this method is wasteful in time and maintenance, and the Steiner tree cannot be used until the computation and the setting up of the new one.

In [5], Imase and Waxman introduced the Dynamic Steiner Tree (DST) problem, which can be stated as follows. Let R = {r_0, r_1, ..., r_K} be a sequence of requests, where each r_i is a pair (v_i, ρ_i) such that v_i ∈ V and ρ_i ∈ {add, remove}. Let S_i be the set of nodes in the multicast group after request r_i, consisting of every node v for which there exists j ≤ i such that r_j = (v, add) and r_l ≠ (v, remove) for all j < l ≤ i. Given a graph G = (V, E), a nonnegative weight for each edge e ∈ E and a sequence of requests R, compute a sequence of multicast trees {T_1, T_2, ..., T_K} where each T_i spans S_i and is of minimum cost. The nonrearrangeable version (DST-N) of the DST problem requires that any path added for request r_j cannot be modified by any request r_i, i > j. That is, if r_i is an add request then we have T_{i-1} ⊆ T_i, otherwise for a remove request we have T_i ⊆ T_{i-1}. Imase and Waxman [5] have shown that for any algorithm A solving the DST-N problem with a request sequence containing only add requests, there exists an instance such that for every i, 0 < i ≤ K: A(S_i) / OPT(S_i) ≥ log(|S_i| − 1), where K is the length of the request sequence, OPT(S_i) is the cost of an optimum tree spanning S_i, and A(S_i) is the cost of the tree spanning S_i generated by A. If both add and remove requests are allowed and if the tree cannot be rearranged (i.e., the DST-N problem), then there is no upper bound on the worst case performance ratio [5]. Moreover, the authors proposed a Dynamic Greedy Algorithm (DGA) to solve the DST-N problem [5] with a worst case performance ratio logarithmic in the number of members (i.e., a performance ratio of log(|S_i|)), given a sequence with only add requests. Algorithm DGA constructs the solution as follows:

1. T_0 := ({v_0}, ∅); S_0 := {v_0};
2. For each request r_i, 1 ≤ i ≤ K:
   (a) If r_i is an add request then
      i. let P_i be the shortest path from v_i to T_{i-1};
      ii. T_i := T_{i-1} ∪ P_i; S_i := S_{i-1} ∪ {v_i};
   (b) If r_i is a remove request then
      i. S_i := S_{i-1} \ {v_i};
      ii. while there is a node v ∉ S_i with degree 1 in T_i do T_i := T_i \ {v};
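The listing above can be turned into a short centralized program. The sketch below is ours (it is not the paper's code); the names shortest_path_to_tree and dga are our own, the adjacency dict can be built with the make_graph helper from the sketch at the end of Section 2, and the first request is assumed to be an add.

import heapq

def shortest_path_to_tree(adj, v, tree_nodes):
    # returns the shortest path (list of nodes) from v to the closest node of the tree
    d, parent = {v: 0}, {v: None}
    heap = [(0, v)]
    while heap:
        du, u = heapq.heappop(heap)
        if u in tree_nodes:                        # first tree node popped is the closest one
            path = [u]
            while parent[u] is not None:
                u = parent[u]
                path.append(u)
            return path
        if du > d[u]:
            continue
        for x, w in adj[u].items():
            if du + w < d.get(x, float("inf")):
                d[x], parent[x] = du + w, u
                heapq.heappush(heap, (du + w, x))
    return None

def dga(adj, requests):
    # requests: list of (node, "add" | "remove"); the first request must be an add
    members, nodes, edges = set(), set(), set()
    for v, op in requests:
        if op == "add":
            if not nodes:                          # r_0: v_0 becomes the root of T_0
                nodes.add(v)
            else:                                  # connect v to T_{i-1} via a shortest path
                path = shortest_path_to_tree(adj, v, nodes)
                edges.update(frozenset(e) for e in zip(path, path[1:]))
                nodes.update(path)
            members.add(v)
        else:                                      # remove v, then prune non-member leaves only
            members.discard(v)
            pruned = True
            while pruned:
                pruned = False
                for u in list(nodes):
                    deg = sum(1 for e in edges if u in e)
                    if u not in members and deg == 1:
                        edges = {e for e in edges if u not in e}
                        nodes.discard(u)
                        pruned = True
    return nodes, edges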

In Figure 1 an example is given to present the way Algorithm DGA constructs the Steiner tree. Figure 1(a) gives the considered network topology. In this example, a first request adds the node a as a steiner member (see Figure 1(b)). A second request adds the node b as a steiner member; in this case b is connected to a via a shortest path (bold line in Figure 1(c)). There are two new requests to add first node c and then node d as steiner members, connected via a shortest path (see Figures 1(d) and (e) respectively). Finally, there is a last request to remove the node b from the set of steiner members (see Figure 1(f)). Although a better Steiner tree could be constructed in the last configuration (i.e., Figure 1(f)), the tree is not modified since b has descendants belonging to the set of steiner members. This is the idea behind the proof showing that the worst case performance ratio of Algorithm DGA cannot be bounded for the DST-N problem when remove requests are allowed.

Figure 1: Steiner tree construction done by the on-line algorithm DGA, where the steiner members are black nodes and Steiner tree edges are bold lines. (a) Network topology, (b) configuration after the first request r_0 = (a, add), (c) request r_1 = (b, add): b is connected to T_0 via a shortest path, (d) request r_2 = (c, add): c is connected to T_1 via a shortest path, (e) request r_3 = (d, add): d is connected to T_2 via a shortest path, (f) request r_4 = (b, remove): b is no longer a steiner member.

Aharoni and Cohen [17] use the approach of Imase and Waxman described above to construct a multicast tree in datagram networks. Add and remove requests are considered, and they propose a scalable algorithm which guarantees a worst case performance ratio of log(|S_i|). The multicast tree is constructed as follows. To add a steiner member x, a request is sent by x to the root in order to obtain the list Z_{i-1} of steiner members already in the tree. The root sends the list to x and generates a new list Z_i by adding x. Then, x selects the closest steiner member y ∈ Z_{i-1} from x, and asks y to add the shortest path between x and y to the multicast tree. This is done in the network by using a unicast protocol which uses the path corresponding to the edge {x, y}

8 in the complete distance graph of the network. This can introduce an extra cost due to the creation of local loops by the unicast protocol. The removal of a steiner member x requires the rearrangement of the tree, otherwise the inapproximability result of Imase and Waxman holds [5]. Therefore, x informs the root that it wants to leave the tree. The root removes x from the list Z i and the list Z i (u) of steiner members (which could be different) is sent to every child u of x. x can leave the tree and every child u is connected to the tree by following the procedure described to add a Steiner tree with the list Z i (u). A worst case performance ratio of log( S i ) is guaranteed with add/remove requests, because each child u is connected via a shortest path to a steiner member which belongs to Z i 1. That is as they have been connected without the add and remove of the steiner member x in the tree. In the next section, we present the stabilizing algorithm we propose to construct a dynamic Steiner tree. We do not use the approach proposed by Aharoni and Cohen, since the reconnection procedure could destabilize the tree. We use the approach given by Imase and Waxman and we allow the rearrangement of the Steiner tree to cope with the removal of steiner members. 4. Stabilizing distributed Steiner Tree Algorithm s3t This section describes a stabilizing algorithm for the Steiner tree problem, called s3t. It implements the technique proposed by Imase and Waxman presented in Section 3, in a stabilizing manner. In our implementation we assume a rooted network where the root is a special node chosen in the Steiner group. This node will also be the root of the constructed Steiner tree. The choice of the root is beyond the scope of the current paper. In the following we assume the system augmented with a leader oracle that returns to every node in the system its status: leader or follower. The single node that receives leader while invoking the leader oracle is the root and is allowed to execute the root code detailed in Section In the following, the root node is noted r. Several implementations for leader oracles fault-tolerant, stabilizing or dynamic can be found for example in [18, 19, 20]. Connection priority. In the sequential on-line algorithm [5], steiner members are connected via a shortest path following an order (defined by the sequence of requests). However, in this paper we consider that the system can start in an arbitrary state and steiner members may be connected in an erroneous manner. Therefore, we need some information to take into account the connection order of steiner members to be able to design a fault-tolerant distributed algorithm which follows the approach proposed by Imase and Waxman. When a node v becomes a steiner member in the network (i.e., Memb v equal to true), the node v obtains also a connection priority to join the tree, given by the system and denoted by Priority v. These priorities are unique and they are used by all the nodes to establish a connection order to the Steiner tree, as performed by the sequential algorithm using the sequence of requests. Hence, the connection request of a steiner member a has a highest priority than a steiner member b 8

9 if Priority a < Priority b. However, when a steiner member is disconnected from the Steiner tree (because of faults or topology changes) then we consider that the priority obtained previously is no more valid. Therefore, if a node v is disconnected while v is still a steiner member then a new connection priority is obtained using Function getpriority(v). This function can be seen locally by each node as to call to an oracle which delivers an initial connection priority by Priority v or a new one by getpriority(v). Note that we consider the root r has always the highest connection priority and we assume that the oracle delivers connection priorities in a monotonous way, that is priorities are given following an increasing order. This allows to model the time when the system registers steiner members which have to be connected. To construct the Steiner tree, our algorithm is composed of two parts: (i) all shortest distances to steiner members computation, and (ii) steiner members connection. In the former part, each node in the network (steiner member or not) computes its shortest distance to every steiner member in the network by exchanging in its neighborhood the distances it has computed. Only a steiner member can send a distance equal to zero, while nodes not in the steiner members set can only propagates distance values. Basically, distance values are propagated in the network from each steiner member which is considered as the root of a distinct shortest path tree. Note that we need to maintain for each node the shortest distances to every steiner member, since maintaining the distance between each node to r may not achieve a performance ratio of log(z). Indeed, the resulting tree would be a Shortest Path tree rooted at r with a cost O(z) times the cost of an optimum solution (see Figure 2 for an example). In the latter part, only the nodes of the network with distance values locally correct can participate to the connection of steiner members. Each node has two status: connected (i.e., belongs to the Steiner tree) or not. Every non steiner member node v which does not belong to the Steiner tree remains not connected. It waits the reception of a connection request initiated by a non connected steiner member. Every steiner member x which can participate to the second phase (if not already in the Steiner tree) can send asynchronously a connection request to join the Steiner tree. It tries to be connected via the shortest path (computed by the first part) to another steiner member y with a priority value lower than x s priority value. Each node (steiner member or not) maintains the lowest priority value among its descendants. If the connection is accepted, then an acknowledgment is sent back on the shortest path, enabling the nodes to change their status to connected. A faulty steiner member connection is detected if the following conditions are not satisfied: 1. the edges of the shortest path in the network between a steiner member x and the closest steiner member y of x with a lower priority than x s priority are part of T, 2. the ancestors of x in the Steiner tree maintain a connection request with a priority equal or smaller than the one maintained by x. 9

If the steiner member x or a node on the path used to connect x does not satisfy one of the above conditions, then it leaves the Steiner tree (i.e., takes the status not connected). This initiates the removal of the path, in order to force the connection of x via a correct path (i.e., a path satisfying the above conditions). Obviously, this procedure is also executed if x is no longer a steiner member.

Figure 2: A (z/2)-approximation ratio between a Shortest Path tree and an optimal Steiner tree. (a) Network topology with r the root and z the number of steiner members (black nodes), (b) a Shortest Path tree connecting every steiner member to r, of cost (z − 1)z, (c) an optimal Steiner tree for the network given in (a), of cost (z − 2) + z = 2(z − 1).

Our self-stabilizing algorithm, called s3t, is a composition of two algorithms (Algorithm 1 and Algorithm 3) used to construct the approximated Steiner tree. In Subsection 4.1, we present the first part of the algorithm (associated to Algorithm 1), dedicated to the computation of all shortest paths to steiner members; then we present in Subsection 4.2 the part dedicated to the connection of steiner members (associated to Algorithm 3).

4.1. All shortest paths to steiner members

In this subsection, we are interested in the problem of constructing all shortest paths to every steiner member from each processor of the network. We can define the sub-graph containing all these shortest paths as in Definition 2.

Definition 2 (All members shortest paths graph). Let G = (V, E, w) be a weighted and undirected connected network and S ⊆ V the set of steiner members. A sub-graph G' = (V_G', E_G') of G contains all shortest paths to every steiner member from each node p ∈ V if the following conditions are satisfied: 1. V_G' = V and E_G' ⊆ E, and 2. ∀x ∈ S, G'[x] is a connected graph (i.e., there exists a path in G'[x] between any pair of its nodes), with G'[x] the union of the paths between each y ∈ S and x, and 3. for each node p ∈ V_G', the path between p and every node x ∈ S in G' is a shortest path between p and x in G.

We give a formal specification to the problem of computing all shortest paths to steiner members, stated in Specification 1.

Specification 1. Let C be the set of all possible configurations of the system. Let G = (V, E, w) be a weighted graph and S ⊆ V the set of steiner members. A self-stabilizing algorithm A_ASP solving the problem of computing all shortest paths to every steiner member satisfies the following conditions:

11 Algorithm A ASP reaches a set of terminal configurations T C in finite time, and In every configuration γ T, there is a constructed sub-graph which satisfies Definition 2. In every configuration γ T, each node p V is aware of the priority P riority x of every steiner member x S All members shortest paths algorithm For any node v V, N(v) is the neighbors set of v in the network G (our algorithm is built upon a underlying self-stabilizing protocol that regularly updates the neighbor set of every node). For each node v, variable list dist v is a set which stores 5-uplets encoding information related to the distances between v V and every steiner member x S V. Every uplet is composed of five fields: the identifier of a steiner member x S (whose the uplet is related), connection priority given to the steiner member x (defined by Priority x ), the shortest distance between v and the steiner member x, the identifier of the neighbor which has propagated the distance to x, the status of the uplet to notice if the distance in the uplet can be considered or not, i.e., in status OK or Bad respectively. In the following, we use the notation t[i], 0 i 4, to denote the i-th field of a 5-uplet t in variable list dist v. Correct uplets. a uplet t list dist v is correct at v if: (i) there is no other uplet related to the same steiner member x (given by t[0]) in list dist v, and (ii) one of the following cases is verified: if t is related to v (i.e., t[0] = ID v ) then v must be a steiner member and t = [ID v, Priority v, 0, ID v, OK] (verified using Predicate CMemberInfo(v)) if t is related to steiner member x v then in this case the following conditions must be satisfied: 1. there is a neighbor u of v which maintains a uplet with a smaller distance and equal value on other fields (verified using Predicates BADInfo(v, t) and CInfo(v, t)), and 2. there is no shorter path towards x than the one stored in list dist v (i.e., Predicate BetterPath(v) is not satisfied at v) propagated by another neighbor than t[3]. 11
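In a legitimate configuration, the entries of list_dist described above form, at every node, one tuple per steiner member carrying that member's identifier and priority, the shortest distance to it, and the neighbour that propagated this distance. The following Python sketch of this fixed point is ours (the names all_members_distances and priorities are assumptions); the status field OK/Bad is omitted since it only matters during stabilization, and the style matches the Dijkstra helper of the sketch at the end of Section 2.

import heapq

def all_members_distances(adj, priorities):
    # priorities: dict mapping each steiner member to its connection priority.
    # Returns table[v][x] = (priority of x, d(v, x), neighbour of v propagating it),
    # i.e., the content of a correct list_dist_v restricted to its first four fields.
    table = {v: {} for v in adj}
    for x, prio in priorities.items():
        d, nhop = {x: 0}, {x: x}                   # a member is its own source, at distance 0
        heap = [(0, x)]
        while heap:
            du, u = heapq.heappop(heap)
            if du > d.get(u, float("inf")):
                continue
            for v, w in adj[u].items():
                if du + w < d.get(v, float("inf")):
                    d[v], nhop[v] = du + w, u      # u is the neighbour propagating x's distance to v
                    heapq.heappush(heap, (du + w, v))
        for v in adj:
            table[v][x] = (prio, d[v], nhop[v])
    return table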

12 When all the uplets maintained in the network are correct, then for each uplet t list dist v at node v the field t[2] gives the distance of the shortest path to the related steiner member (given by t[0]). Moreover, the field t[3] stores the identifier of a neighbor of v on the shortest path towards this steiner member (which allows to determine this path). Additionally, the field t[1] indicates the connection priority associated to the steiner member t[0]. These fields are used by the second part of the algorithm. To maintain up to date its shortest path to every steiner member, each node v executes Rule DR (given in Algorithm 1) to modify its variable list dist v if needed. Indeed, if there is a uplet t list dist v at v which is not correct, then Predicate CLocalDist(v) is not satisfied and Rule DR is enabled. However, the set of all steiner members is not known in advance and a mechanism to delete incorrect uplets is necessary. It is not sufficient to only delete an incorrect uplet detected at a node v, this uplet must be removed from the network otherwise this uplet can be propagated forever (e.g., consider an incorrect uplet which has been propagated following a cycle before v has deleted it). To this end, the fifth field is used to notify that a uplet is incorrect (with Status Bad) before its deletion, otherwise a uplet has Status OK. The action of Rule DR is to execute Function M emberdistcompute(v) whose goal is to update variable list dist v. This function is composed of four parts: the first one to remove 5-uplets, the second one to change the status of 5-uplets from OK to Bad, the third one to manage 5-uplets related to v in case v is a steiner member, and the last one to add new 5-uplets related to better paths to reach steiner members. First of all, the execution of Function M emberdistcompute(v) removes a 5- uplet t if t relates no information on a shortest path for any steiner member, or t has Status Bad and v is not referred as the source of a 5-uplet related to t by a neighbor u (i.e., 5-uplets t satisfying Predicate DelInfo(v, t)). Note that, if there are several 5-uplets related to a steiner member in list dist v, then they are also removed except the only one given by Macro MinElt. This macro selects among all uplets associated to the same steiner member the uplet x with Status Bad to force its deletion (if there is one, OK otherwise), which has the shortest distance (and smallest connection priority, if several). The status OK of any 5-uplet t is changed to Bad if neither v is the source nor a neighbor of v has propagated the 5-uplet associated to the steiner member of t, or the distance or the status are not correct (i.e., 5-uplets t satisfying Predicate BadInfo(v, t)). Indeed before the removal of a uplet t by a node v, it has to verify that no other node has propagated t. To this end, the status of uplets is used to make this verification and to propagate the deletion. When the status of a uplet is changed from OK to Bad by a node, this modification is propagated down on the propagation path. Thus, each node v can remove a uplet t list dist v with Status Bad when v has no neighbor u with a uplet related to t (see Predicate Del(v, t)). Note that a cycle in a propagation path of a uplet t is detected using the uplets distance. Finally, variable list dist v is modified to maintain a single 5-uplet associated to v (in case v is a steiner member) and to also maintain 12

13 Inputs: N(v): set of (locally) ordered neighbors of v; ID v : unique identifier of node v; Memb v : boolean to tree if v is a steiner member, false otherwise; Priority v : integer indicating the connection priority given to v; Variable: list dist v : set of shortest distances leading to steiner members; Macros: Card(list, ID) = {t list : t[0] = ID} MinEltS(list, ID, S)= min{t list : t[0] = ID t[4] = S ( t list :: t [0] = t[0] t [4] = t[4] (t [2] t[2] { (t [2] = t[2] t [1] > t[1])))} MinEltS(list, ID, Bad) If MinEltS(list, ID, Bad) MinElt(list, ID) = MinEltS(list, ID, OK) Otherwise MinDist(v, ID) = min u N(v) {t[2] + w(u, v) : t list dist u t[0] = ID CNgInfo(v, list dist u, t)} Update(v) = {t list dist u : u N(v) u = NMinDist(v, t[0]) BestPath(v, u, t)} NMinDist(v, ID) = min{u N(v) : ( t list dist u :: t[0] = ID CNgInfo(v, list dist u, t) t[2] + w(u, v) = MinDist(v, ID))} Predicates: DelInfo(v,t) (Card(list dist v, t[0]) > 1 t MinElt(list dist v, t[0])) Del(v, t) Del(v, t) t[4] = Bad ( u N(v) :: ( t list dist u :: t [0] t[0] t [3] ID v t [2] t[2])) BadInfo(v, t) t[4] = OK [(t[0] = ID v Memb v ) CInfo(v, t)] CInfo(v, t) t[0] ID v (t[3] N(v) Card(list dist t[3], t[0]) = 1 ( t list dist t[3] :: t [0] = t[0] t [1] = t[1] MemberInfo(v, t) t [2] + w(t[3], v) t[2] t [4] = t[4])) t[0] = ID v t[1] = Priority v t[2] = 0 t[3] = ID v t[4] = OK CMemberInfo(v) Card(list dist v, ID v ) = 1 ( t list dist v :: MemberInfo(v, t)) BetterPath(v) ( u N(v) :: ( t list dist u :: CNgInfo(v, list dist u, t ) ( t list dist v :: t [0] = t[0] (t[4] = OK t [2] + w(u, v) < t[2])))) CLocalDist(v) (Memb v CMemberInfo(v)) BetterPath(v) ( t list dist v :: BadInfo(v, t) DelInfo(v, t)) CNgInfo(v, list, t) t[3] ID v t[4] = OK Card(list, t[0]) = 1 BestPath(v, u, t) CNgInfo(v, list dist u, t) ( t list dist v :: t [0] t[0] t[2] + w(u, v) = MinDist(v, t[0])) Table 2: Data structures for all member shortest distances computation for any v V 13

14 information related to a shortest path for each steiner member (i.e., Predicates CMemberInfo(v) and BetterPath(v) are satisfied respectively). Remarks. Variable list dist v is used to store a dynamic set of uplets whose size can be arbitrary high because of the initial configuration of the system. Therefore, we consider that each node has a memory with a bounded size. More precisely, we assume a bound of O(Z log(n)) bits at each node, with Z a upper bound on the number of steiner members and N a upper bound on the size of the network. Since each uplet uses O(log N) bits, each node has to maintain at most one uplet related to the shortest path towards every steiner members. Therefore, a node can add a better path towards a steiner member x either if it has no uplet related to x or a uplet in Status OK with a higher distance. Algorithm 1 All member shortest distances computation for any v V Function MembersDistCompute(v) { If ( t list dist v :: DelInfo(v, t)) then list dist v := list dist v \{t list dist v : DelInfo(v, t)}; If ( t list dist v :: BadInfo(v, t)) then list dist v := list dist v {[t[0], t[1], t[2], t[3], Bad] : t list dist v BadInfo(v, t)}; list dist v := list dist v \{t list dist v : BadInfo(v, t)}; If Memb v CMemberInfo(v) then list dist v := list dist v \{t list dist v : t[0] = ID v }; list dist v := list dist v {[ID v, Priority v, 0, ID v, OK]}; If BetterPath(v) then list dist v := list dist v \{t list dist v : t[3] N(v) t[4] = OK t[2] > MinDist(v, t[0])}; list dist v := list dist v {[t[0], t[1], t[2] + w(nmindist(v, t[0]), v), NMinDist(v, t[0]), OK] : t Update(v)}; } Algorithm: DR: (All shortest member distances) If CLocalDist(v) then MembersDistCompute(v); 4.2. Steiner tree construction In this subsection, we are interested in the problem of constructing an approximate Steiner tree. We assume there are a designated processor of the network, noted r, and a dynamic set of steiner members S V in a weighted 14

15 and undirected connected network G = (V, E, w). We consider the construction of a tree T rooted at r spanning all nodes in S. Moreover, we expect the weight of T is at most log( S ) times the weight of an optimal solution. We can define this sub-graph as in Definition 3. Definition 3 (Approximate Steiner tree). Let G = (V, E, w) a weighted and undirected connected network with a designated node r and a set S V of steiner members. A sub-graph T = (V T, E T ) of G is called a Steiner tree if the following conditions are satisfied: 1. S V T V and E T E, and 2. T is a connected and acyclic graph (i.e., there exists a single path in T between any pair of nodes x, y V T ), and 3. for every steiner member x S (except the root r), the shortest path in G between x and the closest steiner member y S x to x is contained in T, with S x S such that z S, z S x if we have Priority z < Priority x. We give a formal specification to the problem of constructing an approximate Steiner tree, stated in Specification 2. Specification 2. Let C the set of all possible configurations of the system. Let G = (V, E, w) a weighted graph with a designated node r and a set S V of steiner members. A self-stabilizing algorithm A ST solving the problem of constructing an approximate Steiner tree satisfies the following conditions: Algorithm A ST time, and reaches a set of terminal configurations T C in finite In every configuration γ T, there is a constructed sub-graph T which satisfies Definition 3 and whose weight W (T ) is at most equal to log( S ) W (T ), with W (H) = e H w(e) and W (T ) the weight of an optimal Steiner tree T in G Steiner members connection algorithm In this subsection, we explain the way all steiner members are connected in order to construct the Steiner tree. A formal description of the second part of our algorithm is given in Algorithm 3. Each node v V has several inputs: (i) IsRoot(v) indicates to v if it is the root r or not (input from the leader oracle), (ii) Memb v is equal to true if v is a steiner member and in this case (iii) Priority v gives the connection priority of v, finally (iv) list dist v gives to v a set of shortest distance to each steiner member (computed by Algorithm 1). In the following, we consider that for each node v V the information maintained in variable list dist v are locally correct regarding neighbors states of v (i.e., Predicate CLocalDist(v) is satisfied), otherwise the rules given in Algorithm 3 are not enabled at v. 15
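Definition 3 can be checked offline on a candidate tree. The sketch below is ours (the function names is_tree and check_definition3 are assumptions, not the paper's); it reuses make_graph and dist from the sketch at the end of Section 2, encodes T as a collection of undirected edges given as frozensets, and verifies condition 3 by comparing, for each member, the length of its tree path to the closest lower-priority member with the shortest-path distance in G.

def is_tree(nodes, edges):
    # connected and acyclic: |E_T| = |V_T| - 1 and every node reachable from any node
    if len(edges) != len(nodes) - 1:
        return False
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(w for e in edges if u in e for w in e if w != u)
    return seen == set(nodes)

def check_definition3(adj, members, priorities, root, tree_edges):
    tree_nodes = {v for e in tree_edges for v in e} | {root}
    if not (set(members) <= tree_nodes and is_tree(tree_nodes, tree_edges)):
        return False                               # conditions 1 and 2
    tree_adj = make_graph((u, v, adj[u][v]) for u, v in map(tuple, tree_edges))
    for x in set(members) - {root}:
        d_G, d_T = dist(adj, x), dist(tree_adj, x)
        lower = [y for y in members if priorities[y] < priorities[x]]
        if not lower:
            return False                           # the root must have the smallest priority
        y = min(lower, key=lambda m: d_G[m])       # closest member with a lower priority
        if d_T.get(y) != d_G[y]:                   # the tree path to y must be a shortest path of G
            return False                           # condition 3
    return True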

16 Variables. Every node v maintains six variables for constructing and maintaining the Steiner tree. p v : identifier of the parent of node v in the current tree; L v : (Level) number of nodes on the path between the root r and v in the Steiner tree, 0 if v is not connected; need v : true if v is a steiner member (i.e., v S V ) or v has a connection request from a descendant, false otherwise; connect v : true if v is connected to the Steiner tree, false otherwise; memb connect v : identifier of a steiner member to which v must be connected, otherwise; color v : the lowest priority among steiner members descendant of v in the Steiner tree (including v s priority if v is a steiner member), otherwise; Algorithm 2 Algorithm: Steiner members connection for any v V RR: (Root reinitialization) If IsRoot v CLocalDist(v) CRoot(v) then p v := ID v ; L v := 0; need v := true; connect v := true; memb connect v := ID v ; color v := Priority v ; AR: (Nodes which need to be connected) If IsRoot v CLocalDist(v) NCNode(v) ( Memb v AskC(v) > 0) then need v := true; color v := MinColor(v); memb connect v := MembConnect(v); p v := PConnect(v); CR: (Nodes which must be connected) If IsRoot v CLocalDist(v) connect v needp v connectp v CNode(v) then connect v := true; L v := Lp v + 1; IR: (Nodes which are not correctly or must not be connected) If IsRoot v CLocalDist(v) [( need v NCNode(v)) (need v CNode(v))] then p v := ID v ; L v := 0; connect v := false; need v := false; memb connect v := ; color v := ; If Memb v then getpriority(v); Description of the algorithm. Every node v V sends periodically its local variables to each of its neighbors using InfoMsg messages. Upon the reception of this message each neighbor updates the local copy of its neighbor variables. 16
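The rule-based structure of Algorithm 2 can be mirrored by a small state machine per node. The sketch below is ours and deliberately simplified: it keeps the six variables listed above, spells out only Rule RR (using the correct-root predicate CRoot) and omits the CLocalDist guard, while AR, CR and IR are left as placeholders; the daemon repeatedly activates nodes, and an activated node executes one enabled rule atomically.

class Node:
    def __init__(self, ident, is_root, member, priority):
        self.id, self.is_root = ident, is_root
        self.member, self.priority = member, priority
        self.p, self.L = ident, 0                  # parent pointer and level
        self.need = self.connect = False
        self.memb_connect = self.color = None      # None plays the role of the bottom value
        self.neighbors = {}                        # cached InfoMsg of each neighbour
        self.list_dist = {}                        # output of the first part of s3t

    def c_root(self):
        # Predicate CRoot(v): the root is its own parent, at level 0, connected,
        # requesting, attached to itself and coloured with its own priority.
        return (self.p == self.id and self.L == 0 and self.need and self.connect
                and self.memb_connect == self.id and self.color == self.priority)

    def rule_RR(self):
        # Root reinitialization: enabled at the root when CRoot(v) does not hold.
        if not (self.is_root and not self.c_root()):
            return False
        self.p, self.L = self.id, 0
        self.need = self.connect = True
        self.memb_connect, self.color = self.id, self.priority
        return True

    def step(self):
        # one atomic step: execute the first enabled rule (AR, CR and IR omitted here)
        for rule in (self.rule_RR,):
            if rule():
                return True
        return False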

17 Inputs:. N(v): set of (locally) ordered neighbors of v;. ID v : unique identifier of node v;. IsRoot v : boolean to true if v is the leader (root), false otherwise;. Memb v : boolean to tree if v is a steiner member, false otherwise;. Priority v : integer indicating the connection priority given to v;. list dist v : set of shortest distances leading to steiner members (use of Algorithm 1); Variables:. p v : identifier of a neighbor selected as parent of v in the Steiner tree, otherwise;. L v : level (in hops) between r and v in the Steiner tree, 0 otherwise;. need v : boolean to true if v is a steiner member or has a connection request, false otherwise;. connect v : boolean to true if v is in the Steiner tree, false otherwise;. memb connect v : steiner member identifier to which v is connected, otherwise;. color v : integer equal to the smallest priority value among children of v, otherwise; Macros: AskC(v) = {u N(v) : p u = ID v need u ( connect v connect u )} min(min{color u : u AskC(v)}, Priority v ) If Memb v MinColor(v)= min{color u : u AskC(v)} If min{color u : u AskC(v)} { Otherwise min{t[3] : t list distv t[0] = memb connect PConnect(v)= v } If it exists ID v Otherwise MinConnect(v) If Memb v Priority v = MinColor(v) If color MembConnect(v) = v = memb connect u : u N(v) Otherwise color u = MinColor(v) AllowedConnect(v) = {t list dist v : t[1] < color v t[4] = OK} MinConnect(v) = min{t[0] : t AllowedConnect(v) ( t AllowedConnect(v), (t[2], t[1]) (t [2], t [1]))} Predicates: CRoot(v) p v = ID v L v = 0 connect v memb connect v = ID v color v = Priority v need v CLocalDist(v) (Memb v CMemberInfo(v)) BetterPath(v) ( t list dist v :: BadInfo(v, t) DelInfo(v, t)) NCNode(v) p v = ID v L v = 0 need v = false connect v = false memb connect v = color v = CNode(v) need v ( Memb v AskC(v) > 0) color v = MinColor(v) memb connect v = MembConnect(v) p v = PConnect(v) ID v (connect v (L v = Lp v + 1 needp v connectp v )) Table 3: Data structures and Functions for Steiner members connection for any v V 17

18 The description of a InfoMsg message is as follows: InfoMsg v [u] = InfoMsg, list dist v, p v, L v, need v, connect v, memb connect v, color v. Moreover, upon the reception of a InfoMsg message each node corrects its local state by executing several rules (given in Algorithm 1 and 3) and then broadcasts its new local state in its neighborhood. Root of the Steiner tree We assume that each node v can check if it is the root of the Steiner tree using an oracle (e.g., a leader election algorithm) whose response is given by IsRoot v. Thus, according to the description of the oracle only a unique node can be the leader, i.e., the root r. In a correct state (defined by Predicate CRoot(r)), the root r has no parent and a level equal to zero (i.e., p r = ID r and L r = 0). Moreover, variables need r and connect r are equal to true since we consider r is always connected (it always belongs to the Steiner tree). The root r has no connection request to send, thus variable memb connect r must be equal to. Finally, r has always the smallest connection priority value among steiner members given by Priority r (we assume that r is the member registered by the system before the other ones) and color r = Priority r. Whenever the state of the root is not correct (i.e., Predicate CRoot(r) is not satisfied), then Rule RR can be executed (only) by r. Member connection We call connected node a node v belonging to the Steiner tree, i.e., with variable connect v to true. Each non connected steiner member x tries to connect to the Steiner tree. To this end, x uses information given by list dist x to select the nearest steiner member y with a lower priority value than its priority (see Macro MembConnect(x)). x sends a connection request to a neighbor on the shortest path towards y (designated as its parent by Macro PConnect(x)) by executing Rule AR which sets variable need x to true. Furthermore, a connection request contains the priority of x and the identifier of the steiner member y chosen by x to be connected to the Steiner tree. These information are stored in variables color x and memb connect x respectively. Moreover, the parent designated by x is stored in variable p x. A node u V receiving a connection request (i.e., AskC(u) > 0) have to transmit this request: (i) if u is not connected, or (ii) u is connected but with a highest priority value than the priority of the received request. We call requesting the path used to transmit a connection request between a non connected steiner member and the Steiner tree. In Case (i), u executes Rule AR to select a neighbor z as its parent on the shortest path towards the steiner member indicated in the request (see Macro PConnect(u)). Then, u transmits to z this connection request which is forwarded along the requesting path. In Case (ii), u reinitiates its state to non connected by executing Rule IR (since Predicate CNode(v) is not satisfied) and Case (i) is performed. When a connected node u receives a connection request, an acknowledgment is sent back using Rule CR along the requesting path enabling every node on this path to become connected by setting their variable connect u to true and to obtain a correct level in the Steiner tree (i.e., L u = L parentu + 1). 18
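The choice made by a non-connected steiner member x through Macros MembConnect(x) and PConnect(x) amounts to picking, among the members with a strictly lower priority, the closest one together with the next hop towards it. The short sketch below is ours (the name choose_connection is an assumption); list_dist is represented as a dict mapping a member identifier to a (priority, distance, next hop) triple, as in the earlier sketches.

def choose_connection(my_priority, list_dist):
    # keep only members with a strictly lower priority, then take the closest one
    # (ties broken by the smaller priority); return (member, next hop) or None
    candidates = [(d, prio, member, nhop)
                  for member, (prio, d, nhop) in list_dist.items()
                  if prio < my_priority]
    if not candidates:
        return None                  # only the root has no lower-priority member to join
    d, prio, member, nhop = min(candidates)
    return member, nhop

# Example: a member of priority 5 knowing three members a, b and c.
print(choose_connection(5, {"a": (1, 7, "u"), "b": (3, 4, "w"), "c": (9, 2, "y")}))
# -> ('b', 'w'): c is closer, but it has a higher priority and is not eligible.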

19 Notice that a non connected node u may receive several connection requests. In this case, u transmits only the connection request with the lowest priority value sent by its neighbor. A steiner member transmits its connection request, unless it receives a connection request from a descendant with a lower priority value than its own connection priority. State reinitialization Our algorithm can start from an arbitrary global state, so each node must be able to detect erroneous local states. We do not consider here the root node r (see the paragraph Root of the Steiner tree above concerning the root for a description of r s correct state). Each node v (steiner member or not) can have three correct local states in Algorithm 3: v is (i) not connected with no connection request, (ii) not connected with a connection request and (iii) is connected. In Case (i), v has no request (i.e., variable need v = false) so variables color v and memb connect v equal. Moreover, v does not belong to the Steiner tree (i.e., variable connect v = false) so variables p v and L v equal to ID v and zero respectively. Note that a node v in Case (i) satisfies Predicate NCNode(v). In Cases (ii) and (iii), v has a connection request (i.e., variable need v = true) so either v is a steiner member or it receives its connection request from a neighbor (i.e., AskC(v) > 0). In these cases, variable color v must be equal to the lowest request priority value given by Macro MinColor(v), while variable memb connect v must be equal to the steiner member identifier of the associated request (given by Macro MembConnect(v)). To transmit the connection request, v must select a neighbor and variable p v must be equal to the neighbor s identifier given by PConnect(v). In Case (iii), v is connected (i.e., variable connect v equals true) so v must have a correct level according to its parent and its parent must also be connected to the Steiner tree (i.e., we have L v = Lp v + 1, needp v = true and connectp v = true). A node v in the state of Case (ii) or (iii) satisfies Predicate CNode(v). Note that erroneous initial configurations may create local states which are not described by Cases (i), (ii) or (iii) (e.g., cycles in the parent link).therefore, if a node v is in a local state such that neither Predicate NCNode(v) nor Predicate CNode(v) are satisfied, then v is considered to be in an incorrect state. Hence, Rule IR is executed at v to reset v s state to the correct state defined in Case (i), i.e., v is not connected and it has no connection request Algorithms composition Algorithm s3t is obtained by composition of Algorithm 1 and Algorithm 3. These two algorithms are composed together at each processor p V with a conditional composition (first introduced in [21]): Algorithm 1 Cond(p) Algorithm 3, where each guard g of the actions of Algorithm 3 at each processor p V has the form Cond(p) g with Predicate CLocalDist(p) given in formal description of Algorithm 3. Using this composition, each processor p V can execute Algorithm 1 to compute the distance of the shortest path to every steiner member in the network. When the distances are locally correct at p, Predicate CLocalDist(p) 19


More information

arxiv: v1 [cs.dc] 22 Oct 2018

arxiv: v1 [cs.dc] 22 Oct 2018 FANTOM: A SCALABLE FRAMEWORK FOR ASYNCHRONOUS DISTRIBUTED SYSTEMS A PREPRINT Sang-Min Choi, Jiho Park, Quan Nguyen, and Andre Cronje arxiv:1810.10360v1 [cs.dc] 22 Oct 2018 FANTOM Lab FANTOM Foundation

More information

Information-Theoretic Lower Bounds on the Storage Cost of Shared Memory Emulation

Information-Theoretic Lower Bounds on the Storage Cost of Shared Memory Emulation Information-Theoretic Lower Bounds on the Storage Cost of Shared Memory Emulation Viveck R. Cadambe EE Department, Pennsylvania State University, University Park, PA, USA viveck@engr.psu.edu Nancy Lynch

More information

Clocks in Asynchronous Systems

Clocks in Asynchronous Systems Clocks in Asynchronous Systems The Internet Network Time Protocol (NTP) 8 Goals provide the ability to externally synchronize clients across internet to UTC provide reliable service tolerating lengthy

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should

More information

Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication

Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication Stavros Tripakis Abstract We introduce problems of decentralized control with communication, where we explicitly

More information

On improving matchings in trees, via bounded-length augmentations 1

On improving matchings in trees, via bounded-length augmentations 1 On improving matchings in trees, via bounded-length augmentations 1 Julien Bensmail a, Valentin Garnero a, Nicolas Nisse a a Université Côte d Azur, CNRS, Inria, I3S, France Abstract Due to a classical

More information

The Las-Vegas Processor Identity Problem (How and When to Be Unique)

The Las-Vegas Processor Identity Problem (How and When to Be Unique) The Las-Vegas Processor Identity Problem (How and When to Be Unique) Shay Kutten Department of Industrial Engineering The Technion kutten@ie.technion.ac.il Rafail Ostrovsky Bellcore rafail@bellcore.com

More information

Distributed Algorithms

Distributed Algorithms Distributed Algorithms December 17, 2008 Gerard Tel Introduction to Distributed Algorithms (2 nd edition) Cambridge University Press, 2000 Set-Up of the Course 13 lectures: Wan Fokkink room U342 email:

More information

THE WEAKEST FAILURE DETECTOR FOR SOLVING WAIT-FREE, EVENTUALLY BOUNDED-FAIR DINING PHILOSOPHERS. A Dissertation YANTAO SONG

THE WEAKEST FAILURE DETECTOR FOR SOLVING WAIT-FREE, EVENTUALLY BOUNDED-FAIR DINING PHILOSOPHERS. A Dissertation YANTAO SONG THE WEAKEST FAILURE DETECTOR FOR SOLVING WAIT-FREE, EVENTUALLY BOUNDED-FAIR DINING PHILOSOPHERS A Dissertation by YANTAO SONG Submitted to the Office of Graduate Studies of Texas A&M University in partial

More information

Network Algorithms and Complexity (NTUA-MPLA) Reliable Broadcast. Aris Pagourtzis, Giorgos Panagiotakos, Dimitris Sakavalas

Network Algorithms and Complexity (NTUA-MPLA) Reliable Broadcast. Aris Pagourtzis, Giorgos Panagiotakos, Dimitris Sakavalas Network Algorithms and Complexity (NTUA-MPLA) Reliable Broadcast Aris Pagourtzis, Giorgos Panagiotakos, Dimitris Sakavalas Slides are partially based on the joint work of Christos Litsas, Aris Pagourtzis,

More information

6.852: Distributed Algorithms Fall, Class 10

6.852: Distributed Algorithms Fall, Class 10 6.852: Distributed Algorithms Fall, 2009 Class 10 Today s plan Simulating synchronous algorithms in asynchronous networks Synchronizers Lower bound for global synchronization Reading: Chapter 16 Next:

More information

Data Gathering and Personalized Broadcasting in Radio Grids with Interferences

Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Jean-Claude Bermond a,, Bi Li a,b, Nicolas Nisse a, Hervé Rivano c, Min-Li Yu d a Coati Project, INRIA I3S(CNRS/UNSA), Sophia

More information

Finally the Weakest Failure Detector for Non-Blocking Atomic Commit

Finally the Weakest Failure Detector for Non-Blocking Atomic Commit Finally the Weakest Failure Detector for Non-Blocking Atomic Commit Rachid Guerraoui Petr Kouznetsov Distributed Programming Laboratory EPFL Abstract Recent papers [7, 9] define the weakest failure detector

More information

Routing Algorithms. CS60002: Distributed Systems. Pallab Dasgupta Dept. of Computer Sc. & Engg., Indian Institute of Technology Kharagpur

Routing Algorithms. CS60002: Distributed Systems. Pallab Dasgupta Dept. of Computer Sc. & Engg., Indian Institute of Technology Kharagpur Routing Algorithms CS60002: Distributed Systems Pallab Dasgupta Dept. of Computer Sc. & Engg., Indian Institute of Technology Kharagpur Main Features Table Computation The routing tables must be computed

More information

The Weakest Failure Detector to Solve Mutual Exclusion

The Weakest Failure Detector to Solve Mutual Exclusion The Weakest Failure Detector to Solve Mutual Exclusion Vibhor Bhatt Nicholas Christman Prasad Jayanti Dartmouth College, Hanover, NH Dartmouth Computer Science Technical Report TR2008-618 April 17, 2008

More information

Graph-theoretic Problems

Graph-theoretic Problems Graph-theoretic Problems Parallel algorithms for fundamental graph-theoretic problems: We already used a parallelization of dynamic programming to solve the all-pairs-shortest-path problem. Here we are

More information

A An Overview of Complexity Theory for the Algorithm Designer

A An Overview of Complexity Theory for the Algorithm Designer A An Overview of Complexity Theory for the Algorithm Designer A.1 Certificates and the class NP A decision problem is one whose answer is either yes or no. Two examples are: SAT: Given a Boolean formula

More information

Asynchronous Models For Consensus

Asynchronous Models For Consensus Distributed Systems 600.437 Asynchronous Models for Consensus Department of Computer Science The Johns Hopkins University 1 Asynchronous Models For Consensus Lecture 5 Further reading: Distributed Algorithms

More information

Local Distributed Decision

Local Distributed Decision Local Distributed Decision Pierre Fraigniaud CNRS and University Paris Diderot Paris, France pierre.fraigniaud@liafa.jussieu.fr Amos Korman CNRS and University Paris Diderot Paris, France amos.korman@liafa.jussieu.fr

More information

A Self-Stabilizing Algorithm for Finding a Minimal Distance-2 Dominating Set in Distributed Systems

A Self-Stabilizing Algorithm for Finding a Minimal Distance-2 Dominating Set in Distributed Systems JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 24, 1709-1718 (2008) A Self-Stabilizing Algorithm for Finding a Minimal Distance-2 Dominating Set in Distributed Systems JI-CHERNG LIN, TETZ C. HUANG, CHENG-PIN

More information

cs/ee/ids 143 Communication Networks

cs/ee/ids 143 Communication Networks cs/ee/ids 143 Communication Networks Chapter 5 Routing Text: Walrand & Parakh, 2010 Steven Low CMS, EE, Caltech Warning These notes are not self-contained, probably not understandable, unless you also

More information

On Equilibria of Distributed Message-Passing Games

On Equilibria of Distributed Message-Passing Games On Equilibria of Distributed Message-Passing Games Concetta Pilotto and K. Mani Chandy California Institute of Technology, Computer Science Department 1200 E. California Blvd. MC 256-80 Pasadena, US {pilotto,mani}@cs.caltech.edu

More information

Coordination. Failures and Consensus. Consensus. Consensus. Overview. Properties for Correct Consensus. Variant I: Consensus (C) P 1. v 1.

Coordination. Failures and Consensus. Consensus. Consensus. Overview. Properties for Correct Consensus. Variant I: Consensus (C) P 1. v 1. Coordination Failures and Consensus If the solution to availability and scalability is to decentralize and replicate functions and data, how do we coordinate the nodes? data consistency update propagation

More information

Distributed Algorithms (CAS 769) Dr. Borzoo Bonakdarpour

Distributed Algorithms (CAS 769) Dr. Borzoo Bonakdarpour Distributed Algorithms (CAS 769) Week 1: Introduction, Logical clocks, Snapshots Dr. Borzoo Bonakdarpour Department of Computing and Software McMaster University Dr. Borzoo Bonakdarpour Distributed Algorithms

More information

Introduction to Self-Stabilization. Maria Potop-Butucaru, Franck Petit and Sébastien Tixeuil LiP6/UPMC

Introduction to Self-Stabilization. Maria Potop-Butucaru, Franck Petit and Sébastien Tixeuil LiP6/UPMC Introduction to Self-Stabilization Maria Potop-Butucaru, Franck Petit and Sébastien Tixeuil LiP6/UPMC Self-stabilization 101 Example U 0 = a U n+1 = U n 2 if U n is even U n+1 = 3U n+1 2 if U n is odd

More information

Computer Science Technical Report

Computer Science Technical Report Computer Science Technical Report Synthesizing Self-Stabilization Through Superposition and Backtracking Alex Klinkhamer and Ali Ebnenasir Michigan Technological University Computer Science Technical Report

More information

Approximation of δ-timeliness

Approximation of δ-timeliness Approximation of δ-timeliness Carole Delporte-Gallet 1, Stéphane Devismes 2, and Hugues Fauconnier 1 1 Université Paris Diderot, LIAFA {Carole.Delporte,Hugues.Fauconnier}@liafa.jussieu.fr 2 Université

More information

Timo Latvala. March 7, 2004

Timo Latvala. March 7, 2004 Reactive Systems: Safety, Liveness, and Fairness Timo Latvala March 7, 2004 Reactive Systems: Safety, Liveness, and Fairness 14-1 Safety Safety properties are a very useful subclass of specifications.

More information

Private and Verifiable Interdomain Routing Decisions. Proofs of Correctness

Private and Verifiable Interdomain Routing Decisions. Proofs of Correctness Technical Report MS-CIS-12-10 Private and Verifiable Interdomain Routing Decisions Proofs of Correctness Mingchen Zhao University of Pennsylvania Andreas Haeberlen University of Pennsylvania Wenchao Zhou

More information

SFM-11:CONNECT Summer School, Bertinoro, June 2011

SFM-11:CONNECT Summer School, Bertinoro, June 2011 SFM-:CONNECT Summer School, Bertinoro, June 20 EU-FP7: CONNECT LSCITS/PSS VERIWARE Part 3 Markov decision processes Overview Lectures and 2: Introduction 2 Discrete-time Markov chains 3 Markov decision

More information

FAIRNESS FOR DISTRIBUTED ALGORITHMS

FAIRNESS FOR DISTRIBUTED ALGORITHMS FAIRNESS FOR DISTRIBUTED ALGORITHMS A Dissertation submitted to the Faculty of the Graduate School of Arts and Sciences of Georgetown University in partial fulfillment of the requirements for the degree

More information

Design of Distributed Systems Melinda Tóth, Zoltán Horváth

Design of Distributed Systems Melinda Tóth, Zoltán Horváth Design of Distributed Systems Melinda Tóth, Zoltán Horváth Design of Distributed Systems Melinda Tóth, Zoltán Horváth Publication date 2014 Copyright 2014 Melinda Tóth, Zoltán Horváth Supported by TÁMOP-412A/1-11/1-2011-0052

More information

Undirected Graphs. V = { 1, 2, 3, 4, 5, 6, 7, 8 } E = { 1-2, 1-3, 2-3, 2-4, 2-5, 3-5, 3-7, 3-8, 4-5, 5-6 } n = 8 m = 11

Undirected Graphs. V = { 1, 2, 3, 4, 5, 6, 7, 8 } E = { 1-2, 1-3, 2-3, 2-4, 2-5, 3-5, 3-7, 3-8, 4-5, 5-6 } n = 8 m = 11 Undirected Graphs Undirected graph. G = (V, E) V = nodes. E = edges between pairs of nodes. Captures pairwise relationship between objects. Graph size parameters: n = V, m = E. V = {, 2, 3,,,, 7, 8 } E

More information

Queue Length Stability in Trees under Slowly Convergent Traffic using Sequential Maximal Scheduling

Queue Length Stability in Trees under Slowly Convergent Traffic using Sequential Maximal Scheduling 1 Queue Length Stability in Trees under Slowly Convergent Traffic using Sequential Maximal Scheduling Saswati Sarkar and Koushik Kar Abstract In this paper, we consider queue-length stability in wireless

More information

Discrete Wiskunde II. Lecture 5: Shortest Paths & Spanning Trees

Discrete Wiskunde II. Lecture 5: Shortest Paths & Spanning Trees , 2009 Lecture 5: Shortest Paths & Spanning Trees University of Twente m.uetz@utwente.nl wwwhome.math.utwente.nl/~uetzm/dw/ Shortest Path Problem "#$%&'%()*%"()$#+,&- Given directed "#$%&'()*+,%+('-*.#/'01234564'.*,'7+"-%/8',&'5"4'84%#3

More information

Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms. CS 249 Project Fall 2005 Wing Wong

Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms. CS 249 Project Fall 2005 Wing Wong Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms CS 249 Project Fall 2005 Wing Wong Outline Introduction Asynchronous distributed systems, distributed computations,

More information

Heuristic Search Algorithms

Heuristic Search Algorithms CHAPTER 4 Heuristic Search Algorithms 59 4.1 HEURISTIC SEARCH AND SSP MDPS The methods we explored in the previous chapter have a serious practical drawback the amount of memory they require is proportional

More information

Shared Memory vs Message Passing

Shared Memory vs Message Passing Shared Memory vs Message Passing Carole Delporte-Gallet Hugues Fauconnier Rachid Guerraoui Revised: 15 February 2004 Abstract This paper determines the computational strength of the shared memory abstraction

More information

CORRECTNESS OF A GOSSIP BASED MEMBERSHIP PROTOCOL BY (ANDRÉ ALLAVENA, ALAN DEMERS, JOHN E. HOPCROFT ) PRATIK TIMALSENA UNIVERSITY OF OSLO

CORRECTNESS OF A GOSSIP BASED MEMBERSHIP PROTOCOL BY (ANDRÉ ALLAVENA, ALAN DEMERS, JOHN E. HOPCROFT ) PRATIK TIMALSENA UNIVERSITY OF OSLO CORRECTNESS OF A GOSSIP BASED MEMBERSHIP PROTOCOL BY (ANDRÉ ALLAVENA, ALAN DEMERS, JOHN E. HOPCROFT ) PRATIK TIMALSENA UNIVERSITY OF OSLO OUTLINE q Contribution of the paper q Gossip algorithm q The corrected

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

Consensus and Universal Construction"

Consensus and Universal Construction Consensus and Universal Construction INF346, 2015 So far Shared-memory communication: safe bits => multi-valued atomic registers atomic registers => atomic/immediate snapshot 2 Today Reaching agreement

More information

Termination Problem of the APO Algorithm

Termination Problem of the APO Algorithm Termination Problem of the APO Algorithm Tal Grinshpoun, Moshe Zazon, Maxim Binshtok, and Amnon Meisels Department of Computer Science Ben-Gurion University of the Negev Beer-Sheva, Israel Abstract. Asynchronous

More information

From self- to self-stabilizing with service guarantee 1-hop weight-based clustering

From self- to self-stabilizing with service guarantee 1-hop weight-based clustering From self- to self-stabilizing with service guarantee 1-hop weight-based clustering Colette Johnen and Fouzi Mekhaldi LaBRI, University of Bordeaux, CNRS. F-33405 Talence Cedex, France Abstract. We propose

More information

Distributed Computing in Shared Memory and Networks

Distributed Computing in Shared Memory and Networks Distributed Computing in Shared Memory and Networks Class 2: Consensus WEP 2018 KAUST This class Reaching agreement in shared memory: Consensus ü Impossibility of wait-free consensus 1-resilient consensus

More information

Min/Max-Poly Weighting Schemes and the NL vs UL Problem

Min/Max-Poly Weighting Schemes and the NL vs UL Problem Min/Max-Poly Weighting Schemes and the NL vs UL Problem Anant Dhayal Jayalal Sarma Saurabh Sawlani May 3, 2016 Abstract For a graph G(V, E) ( V = n) and a vertex s V, a weighting scheme (w : E N) is called

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria 12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos

More information

Symmetric Rendezvous in Graphs: Deterministic Approaches

Symmetric Rendezvous in Graphs: Deterministic Approaches Symmetric Rendezvous in Graphs: Deterministic Approaches Shantanu Das Technion, Haifa, Israel http://www.bitvalve.org/~sdas/pres/rendezvous_lorentz.pdf Coauthors: Jérémie Chalopin, Adrian Kosowski, Peter

More information

Fault Masking in Tri-redundant Systems

Fault Masking in Tri-redundant Systems Fault Masking in Tri-redundant Systems Mohamed G. Gouda 1, Jorge A. Cobb 2, and Chin-Tser Huang 3 1 Department of Computer Sciences The University of Texas at Austin gouda@cs.utexas.edu 2 Department of

More information

Eventually consistent failure detectors

Eventually consistent failure detectors J. Parallel Distrib. Comput. 65 (2005) 361 373 www.elsevier.com/locate/jpdc Eventually consistent failure detectors Mikel Larrea a,, Antonio Fernández b, Sergio Arévalo b a Departamento de Arquitectura

More information

Lecture 1 : Data Compression and Entropy

Lecture 1 : Data Compression and Entropy CPS290: Algorithmic Foundations of Data Science January 8, 207 Lecture : Data Compression and Entropy Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will study a simple model for

More information

Automata-Theoretic Model Checking of Reactive Systems

Automata-Theoretic Model Checking of Reactive Systems Automata-Theoretic Model Checking of Reactive Systems Radu Iosif Verimag/CNRS (Grenoble, France) Thanks to Tom Henzinger (IST, Austria), Barbara Jobstmann (CNRS, Grenoble) and Doron Peled (Bar-Ilan University,

More information

CMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017

CMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017 CMSC CMSC : Lecture Greedy Algorithms for Scheduling Tuesday, Sep 9, 0 Reading: Sects.. and. of KT. (Not covered in DPV.) Interval Scheduling: We continue our discussion of greedy algorithms with a number

More information

CS/COE

CS/COE CS/COE 1501 www.cs.pitt.edu/~nlf4/cs1501/ P vs NP But first, something completely different... Some computational problems are unsolvable No algorithm can be written that will always produce the correct

More information

Fast and compact self-stabilizing verification, computation, and fault detection of an MST

Fast and compact self-stabilizing verification, computation, and fault detection of an MST Fast and compact self-stabilizing verification, computation, and fault detection of an MST Amos Korman, Shay Kutten, Toshimitsu Masuzawa To cite this version: Amos Korman, Shay Kutten, Toshimitsu Masuzawa.

More information

CS505: Distributed Systems

CS505: Distributed Systems Cristina Nita-Rotaru CS505: Distributed Systems. Required reading for this topic } Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson for "Impossibility of Distributed with One Faulty Process,

More information

Content-based Publish/Subscribe using Distributed R-trees

Content-based Publish/Subscribe using Distributed R-trees Content-based Publish/Subscribe using Distributed R-trees Silvia Bianchi, Pascal Felber and Maria Gradinariu 2 University of Neuchâtel, Switzerland 2 LIP6, INRIA-Université Paris 6, France Abstract. Publish/subscribe

More information

Searching for Black Holes in Subways

Searching for Black Holes in Subways Searching for Black Holes in Subways Paola Flocchini Matthew Kellett Peter C. Mason Nicola Santoro Abstract Current mobile agent algorithms for mapping faults in computer networks assume that the network

More information

SDS developer guide. Develop distributed and parallel applications in Java. Nathanaël Cottin. version

SDS developer guide. Develop distributed and parallel applications in Java. Nathanaël Cottin. version SDS developer guide Develop distributed and parallel applications in Java Nathanaël Cottin sds@ncottin.net http://sds.ncottin.net version 0.0.3 Copyright 2007 - Nathanaël Cottin Permission is granted to

More information

Agreement. Today. l Coordination and agreement in group communication. l Consensus

Agreement. Today. l Coordination and agreement in group communication. l Consensus Agreement Today l Coordination and agreement in group communication l Consensus Events and process states " A distributed system a collection P of N singlethreaded processes w/o shared memory Each process

More information

Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Gurvan Le Guernic July 14, 2007

Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Gurvan Le Guernic July 14, 2007 Dynamic Noninterference Analysis Using Context Sensitive Static Analyses Gurvan Le Guernic July 14, 2007 1 Abstract This report proposes a dynamic noninterference analysis for sequential programs. This

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 P and NP P: The family of problems that can be solved quickly in polynomial time.

More information

Unreliable Failure Detectors for Reliable Distributed Systems

Unreliable Failure Detectors for Reliable Distributed Systems Unreliable Failure Detectors for Reliable Distributed Systems A different approach Augment the asynchronous model with an unreliable failure detector for crash failures Define failure detectors in terms

More information

Dynamic Group Communication

Dynamic Group Communication Dynamic Group Communication André Schiper Ecole Polytechnique Fédérale de Lausanne (EPFL) 1015 Lausanne, Switzerland e-mail: andre.schiper@epfl.ch Abstract Group communication is the basic infrastructure

More information

L R I SELF-STABILIZING SPANNING TREE ALGORITHM FOR LARGE SCALE SYSTEMS HERAULT T / LEMARINIER P / PERES O / PILARD L / BEAUQUIER J

L R I SELF-STABILIZING SPANNING TREE ALGORITHM FOR LARGE SCALE SYSTEMS HERAULT T / LEMARINIER P / PERES O / PILARD L / BEAUQUIER J L R I SELF-STABILIZING SPANNING TREE ALGORITHM FOR LARGE SCALE SYSTEMS HERAULT T / LEMARINIER P / PERES O / PILARD L / BEAUQUIER J Unité Mixte de Recherche 8623 CNRS-Université Paris Sud LRI 08/2006 Rapport

More information

Distributed Constraints

Distributed Constraints Distributed Constraints José M Vidal Department of Computer Science and Engineering, University of South Carolina January 15, 2010 Abstract Algorithms for solving distributed constraint problems in multiagent

More information

Robust Network Codes for Unicast Connections: A Case Study

Robust Network Codes for Unicast Connections: A Case Study Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,

More information

Automatic Synthesis of Distributed Protocols

Automatic Synthesis of Distributed Protocols Automatic Synthesis of Distributed Protocols Rajeev Alur Stavros Tripakis 1 Introduction Protocols for coordination among concurrent processes are an essential component of modern multiprocessor and distributed

More information

Distributed Exact Weighted All-Pairs Shortest Paths in Õ(n5/4 ) Rounds

Distributed Exact Weighted All-Pairs Shortest Paths in Õ(n5/4 ) Rounds 58th Annual IEEE Symposium on Foundations of Computer Science Distributed Exact Weighted All-Pairs Shortest Paths in Õ(n5/4 ) Rounds Chien-Chung Huang CNRS, École Normale Supérieure France Danupon Nanongkai

More information

The Weakest Failure Detector for Wait-Free Dining under Eventual Weak Exclusion

The Weakest Failure Detector for Wait-Free Dining under Eventual Weak Exclusion The Weakest Failure Detector for Wait-Free Dining under Eventual Weak Exclusion Srikanth Sastry Computer Science and Engr Texas A&M University College Station, TX, USA sastry@cse.tamu.edu Scott M. Pike

More information

A Time Optimal Self-Stabilizing Synchronizer Using A Phase Clock

A Time Optimal Self-Stabilizing Synchronizer Using A Phase Clock A Time Optimal Self-Stabilizing Synchronizer Using A Phase Clock Baruch Awerbuch Shay Kutten Yishay Mansour Boaz Patt-Shamir George Varghese December, 006 Abstract A synchronizer with a phase counter (sometimes

More information