Continuous-Time Consensus under Non-Instantaneous Reciprocity


Samuel Martin and Julien M. Hendrickx
arXiv preprint, v3 [cs.SY], 19 Oct 2015

Abstract. We consider continuous-time consensus systems whose interactions satisfy a form of reciprocity that is not instantaneous, but happens over time. We show that these systems have certain desirable properties: they always converge independently of the specific interactions taking place, and there exist simple conditions on the interactions for two agents to converge to the same value. This was until now only known for systems with instantaneous reciprocity. These results are of particular relevance when analyzing systems where the interactions are a priori unknown, being for example endogenously determined or random. We apply our results to an instance of such systems.

Julien Hendrickx is with the ICTEAM institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium (julien.hendrickx@uclouvain.be). Samuel Martin is with Université de Lorraine and CNRS, CRAN, UMR 7039, 2 Avenue de la Forêt de Haye, Vandœuvre-lès-Nancy, France (samuel.martin@univ-lorraine.fr); part of the work was carried out while S. M. was with the ICTEAM institute. This work is supported by the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Program initiated by the Belgian Science Policy Office, and by the Concerted Research Action (ARC) of the French Community of Belgium. It is also partly supported by the French Agence Nationale de la Recherche under ANR COMPACS (Computation Aware Control Systems, ANR-13-BS) and by the CNRS via the interdisciplinary PEPS project MADRES.

I. INTRODUCTION

We consider systems where $n$ agents each have a value $x_i \in \mathbb{R}$ that evolves according to
$$\dot{x}_i = \sum_{j=1}^{n} a_{ij}(t)\,\big(x_j(t) - x_i(t)\big), \qquad (1)$$
where the $a_{ij}(t) \geq 0$ are non-negative functions of time. This means that the value of $x_i$ is continuously attracted by the values of the agents $j$ for which $a_{ij}(t) > 0$. These systems are called consensus systems because the interactions tend to reduce the disagreement between the interacting agents, and because any consensus state where all $x_i$ are equal is an equilibrium of the system. Analogous systems also exist in discrete time [1]-[3]. Consensus systems play a major role in decentralized control [4], data fusion [5], [6] and distributed optimization [7], [8], but also in the modeling of some animal [9], [10] or social phenomena [11], [12].

General convergence results for consensus systems involve connectivity assumptions that are hard to check for state-dependent interactions, and do not allow treating clustering phenomena. As detailed in the state of the art, more recent results guarantee convergence to one or several clusters under various assumptions on the symmetry or reciprocity of the interactions. All these reciprocity properties have, however, to be satisfied instantaneously and at every time. We extend them to treat systems where reciprocity is not instantaneous but happens on average over time. This extension only holds under certain assumptions on the way reciprocity occurs. Indeed, non-instantaneous reciprocity may fail to ensure convergence and may lead to oscillatory behaviors when the interaction weights are not properly bounded, or when the time periods across which reciprocity occurs grow unbounded (see Section III-B for an example).

To prove our result we show that, for an appropriate sequence of times $t_k$, the states $x(t_k)$ can be seen as the trajectory of a certain discrete-time consensus system.
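Before turning to the analysis, the following minimal sketch (an illustration with assumed parameters, not a construction from the paper) integrates a system of the form (1) with a forward-Euler scheme and records the state at a chosen sequence of sampling times $t_k$; the two-agent schedule, in which $a_{12}$ and $a_{21}$ are active in alternating one-second windows, is a toy instance of non-instantaneous reciprocity.

```python
# Illustrative sketch only: forward-Euler integration of the consensus
# dynamics (1) with time-varying, non-negative weights a_ij(t), sampled at
# times t_k. The weight schedule and all names are assumptions for the demo.
import numpy as np

def simulate(a, x0, t_end, dt=1e-3, sample_times=()):
    """a(t) -> (n, n) array of non-negative weights a_ij(t); x0: initial values."""
    x = np.asarray(x0, dtype=float).copy()
    remaining, samples = list(sample_times), []
    t = 0.0
    while t < t_end:
        A = a(t)
        # dx_i/dt = sum_j a_ij(t) * (x_j - x_i)
        x = x + dt * (A @ x - A.sum(axis=1) * x)
        t += dt
        while remaining and t >= remaining[0]:
            samples.append((remaining.pop(0), x.copy()))
    return x, samples

def a(t):
    """Two agents attracting each other in alternating 1-second windows."""
    A = np.zeros((2, 2))
    if int(t) % 2 == 0:
        A[0, 1] = 1.0      # agent 1 attracted by agent 2
    else:
        A[1, 0] = 1.0      # agent 2 attracted by agent 1
    return A

x_final, x_at_tk = simulate(a, [0.0, 1.0], t_end=40.0, sample_times=range(2, 41, 2))
print(x_final)             # the two values approach a common limit
```

Sampling at the window boundaries $t_k = 2k$ is precisely the operation used in the analysis: the map from $x(t_k)$ to $x(t_{k+1})$ is a non-negative matrix with rows summing to one (this is made precise in Lemma 6 below).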
By analyzing the effect of each matrix of this system on some artificial initial conditions, we obtain bounds on their coefficients, and show that this system satisfies reciprocity conditions guaranteeing convergence.

The rest of the paper is organized as follows. The introduction includes a state of the art on consensus systems, a subsection pointing out the interest of non-instantaneous reciprocity, and a summary of our contributions. Section II formally introduces the system that we are considering and presents our main results. Examples illustrating our results and the necessity of an underlying assumption are then presented in Section III. In Section IV, we demonstrate the use of our results on a specific multi-agent application. Sections V-A and V-B contain the proofs, and we finish with some conclusions in Section VI.

State of the art

Consensus systems have been the object of many studies in recent years, focusing particularly on finding conditions under which the system converges, possibly to a consensus state, and also on the speed of convergence. Classical results typically guarantee convergence to consensus under some (repeated) connectivity conditions on the interactions, see for example [1], [3], [13] or [14], [15] for surveys. A variation of this repeated connectivity condition was also recently proposed in [16] for certain classes of state-dependent interactions where the attraction magnitude should be non-decreasing with the distance between the agents' positions. It involves a graph defined by connecting a node to another when the dynamics of the former is sufficiently and repeatedly influenced by the latter, this being true for all positions of the two agents.

Different recent works have shown that stronger results hold when the interactions satisfy some form of reciprocity. Hendrickx and Tsitsiklis have for example introduced the cut-balance assumption on the interactions [17], stating that there exists a $K$ such that for every subset $S$ of agents and every time $t$, there holds
$$\sum_{i \in S,\, j \notin S} a_{ij}(t) \;\leq\; K \sum_{i \in S,\, j \notin S} a_{ji}(t). \qquad (2)$$
This assumption can actually be shown to mean that whenever an agent $i$ influences agent $j$ indirectly, agent $j$ also influences agent $i$ indirectly, with an intensity that is within a constant ratio of that of $i$ on $j$. Particular cases of this assumption include symmetric interactions ($a_{ij} = a_{ji}$), bounded-ratio symmetry ($a_{ij} \leq K a_{ji}$), and any average-preserving dynamics ($\sum_j a_{ij} = \sum_j a_{ji}$ for every $i$). It was shown in [17] that systems satisfying the cut-balance assumption (2) always converge, though not necessarily to consensus. Moreover, two agents' values converge to the same limit if they are connected by a path in the graph of persistent interactions (also called unbounded interactions in the literature), defined by connecting $i$ and $j$ if $\int_0^{\infty} a_{ij}(t)\,dt$ is infinite. These results allow analyzing the convergence properties of systems with relatively complex interactions; see the discussion in [17] for an example in opinion dynamics, or [18] for an application to a system involving event-based ternary control of second-order agents. Martin and Girard have later shown [19] that in the case of convergence to a global consensus, the cut-balance assumption can be weakened, allowing the interaction ratio bound $K$ to slowly grow with the amount of interaction that has already taken place in the system. They also provide an estimate of the convergence speed in terms of the interactions having taken place. Related convergence results were also proved for systems involving a continuum of agents under a strict symmetry assumption in [20]. An alternative reciprocity condition called arc-balance was considered in [21]; it requires all weights $a_{ij}(t)$ to be within a constant ratio of each other, except those for which $\int_0^{\infty} a_{ij}(t)\,dt < \infty$. Finally, we note that similar results of convergence under reciprocity conditions have been obtained for discrete-time consensus systems, see for example [3], [22]-[25]. However, none of these results allow for non-instantaneous reciprocity.

Non-instantaneous reciprocity

All the results taking advantage of reciprocity require the reciprocity condition to be satisfied instantaneously at (almost) all times. They would thus not apply to systems that are essentially reciprocal, but where the reciprocity may be delayed, or where it happens over time.

In systems relying on certain wired or wireless network protocols, agents may be unable to simultaneously send and receive information, resulting in a loss of instantaneous reciprocity, even if the interactions are meant to be reciprocal. Non-instantaneous reciprocity also arises in a priori symmetric systems where the control of the agents is event-triggered or self-triggered. Indeed, suppose that at some time the conditions are such that agents $i$ and $j$ should interact. It is very likely that one agent will update its control action before the other, so that during a certain interval of time the actual interactions will not be symmetric. Similar problems are present in systems prone to occasional failures or unreliable communications, where the communication between two agents can temporarily be interrupted in one direction for a limited amount of time. Issues with non-instantaneous reciprocity may also arise in swarming processes or in any multi-agent control problem where sensors have a limited scope.
Suppose indeed that the sensors are not omnidirectional, as is for example the case for human or animal eyes. It is then generally impossible for an agent to observe all its neighbors at the same time. The same issue arises if the agent can only process a limited number of neighbors simultaneously. A natural solution is then to observe a subset of the neighbors and to periodically modify the subset being observed. This can for example be achieved by continuously rotating the directions in which observations are made. In that case, even if the neighborhood relation is symmetric, it is again highly likely that an agent $i$ will sometimes observe an agent $j$ without $j$ observing $i$ at that particular moment, but that $j$ will observe $i$ later.

In all these situations, one could hope to take advantage of the essential reciprocity of the system design even if this reciprocity is not always instantaneously satisfied.

Contributions

We show in our main result (Theorem 1) that the convergence of systems of the form (1) is still guaranteed if the system satisfies some form of non-instantaneous reciprocity, or reciprocity on average. More specifically, we assume that the cut-balance condition (2) is satisfied on average on a sequence of contiguous intervals. These intervals can have arbitrary lengths, but the amount of interaction taking place during each of them should be uniformly bounded. Under these assumptions, we show that the system always converges. Moreover, two agents' values converge to the same limit if they are connected by a path in the graph of persistent interactions, defined by connecting two agents $i, j$ if $\int_0^{\infty} a_{ij}(t)\,dt$ is infinite. We also particularize our general result to systems satisfying a form of pairwise reciprocity over bounded time intervals. This particularized result is more conservative, but its condition can often be easier to check. We illustrate it on an application.

II. PROBLEM STATEMENT AND MAIN RESULTS

We study the integral version of the consensus system (1):
$$x_i(t) = x_i(0) + \int_0^{t} \sum_{j=1}^{n} a_{ij}(s)\,\big(x_j(s) - x_i(s)\big)\,ds, \qquad (3)$$
where for all $i, j \in N = \{1, \dots, n\}$, the interaction weight $a_{ij}$ is a non-negative measurable function of time, summable on bounded intervals of $\mathbb{R}_+$. There exists a unique function of time $x : \mathbb{R}_+ \to \mathbb{R}^n$ which satisfies the integral equation (3) for all $t \in \mathbb{R}_+$, and it is locally absolutely continuous (see Theorem 54 and Proposition C.3.8 in [26]). This function is actually the Carathéodory solution to the differential equation (1) and can equivalently be defined as an absolutely continuous function satisfying (1) at almost all times. We call it the trajectory of the system.

Following the discussion in the Introduction, we introduce a new condition generalizing Condition (2) by allowing for non-instantaneous reciprocity of interactions; we only require that the reciprocity occurs on the integral weights $\int a_{ij}(s)\,ds$ over some bounded time intervals.

Assumption 1 (Integral weight reciprocity): There exist a sequence $(t_p)_{p \in \mathbb{N}}$ of increasing times with $\lim_{p \to +\infty} t_p = +\infty$ and a uniform bound $K \geq 1$ such that, for all non-empty proper subsets $S$ of $N$ and for all $p \in \mathbb{N}$, there holds
$$\sum_{i \in S,\, j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt \;\leq\; K \sum_{i \in S,\, j \notin S} \int_{t_p}^{t_{p+1}} a_{ji}(t)\,dt. \qquad (4)$$

We will see in a simple example in Section III-B that Assumption 1 alone is not sufficient to guarantee the convergence of the system. We need to further assume that the integral of the interactions taking place in each interval $[t_p, t_{p+1}]$ is uniformly bounded.

Assumption 2 (Uniform upper bound on integral weights): The sequence $(t_p)$ used in Assumption 1 is such that
$$\int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt \;\leq\; M$$
holds for all $i, j \in N$, $p \in \mathbb{N}$ and some constant $M$.

We now state our main result, whose proof is presented in Section V-A.

Theorem 1: Suppose that the interaction weights of system (3) satisfy Assumptions 1 (integral reciprocity) and 2 (upper bound on integral weights). Then, every trajectory $x$ of system (3) converges. Moreover, let $G = (N, E)$ be the graph of persistent weights defined by connecting $(j, i)$ if $\int_0^{\infty} a_{ij}(t)\,dt = +\infty$. Then, there is a directed path from $i$ to $j$ in $G$ if and only if there is a directed path from $j$ to $i$, and in that case there holds $\lim_{t \to \infty} x_i(t) = \lim_{t \to \infty} x_j(t)$.

The second part of the theorem implies that there is a local consensus in each strongly connected component¹ of the graph $G$ of persistent interactions. Notice that the second part of the theorem also implies that each strongly connected component is fully disconnected from the others in the graph $G$: no edge leaves one component to arrive at another. This is due to the reciprocity Assumption 1.

¹ Strongly connected components are defined as the equivalence classes on the node set where nodes $i$ and $j$ belong to the same class if and only if $i$ and $j$ are connected to each other by at least a path from $i$ to $j$ and a path from $j$ to $i$.

Assumption 1 generalizes most (instantaneous) reciprocity conditions available in the literature, including cut-balance, and is thus automatically satisfied by any system satisfying such conditions. It is moreover satisfied by classes of systems subject to some form of reciprocity that is delayed, due for example to communication constraints. It applies for instance to systems where agents engage in interaction with a neighbor while the latter may be asleep or already busy interacting with another agent. We provide an example of such an application in Section IV.

There are several options to check whether a given (non-instantaneously reciprocal) system verifies the integral condition in Assumption 1. One option is to show that it is implied by the specific reciprocal nature of the system, as done in Section IV. Another one is to derive sufficient conditions on the initial configuration which imply that the integral condition remains valid over time (see for instance [27]). The reciprocity conditions and the time intervals over which they have to be satisfied are global.
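In practice, Assumptions 1 and 2 can also be checked numerically for a given weight profile and a candidate sequence $(t_p)$. The sketch below (helper names and the trapezoidal quadrature are my own choices) computes the integral weights over one window $[t_p, t_{p+1}]$ and then tests the cut-balance inequality (4) over all non-empty proper subsets $S$, together with the uniform bound of Assumption 2.

```python
# Sketch: numerical check of Assumptions 1 and 2 on one window [t_p, t_p1].
# a(t) returns an (n, n) array of non-negative weights; names are illustrative.
from itertools import combinations
import numpy as np

def integral_weights(a, t_p, t_p1, steps=10_000):
    ts = np.linspace(t_p, t_p1, steps)
    vals = np.array([a(t) for t in ts])          # shape (steps, n, n)
    return np.trapz(vals, ts, axis=0)            # W[i, j] ~ integral of a_ij

def satisfies_assumptions(W, K, M):
    n = W.shape[0]
    if W.max() > M:                              # Assumption 2 violated
        return False
    nodes = range(n)
    for r in range(1, n):                        # every non-empty proper subset S
        for S in combinations(nodes, r):
            S = set(S)
            out_cut = sum(W[i, j] for i in S for j in nodes if j not in S)
            in_cut = sum(W[j, i] for i in S for j in nodes if j not in S)
            if out_cut > K * in_cut + 1e-12:     # inequality (4) violated on S
                return False
    return True
```

The exhaustive loop over subsets is exponential in $n$ and is only viable for small groups; for the symmetric or average-preserving special cases mentioned above, the check is of course unnecessary.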
We now introduce a new local assumption that we will show to imply Assumptions 1 and 2 when the interactions are bounded. It requires that whenever an agent $j$ influences an agent $i$ at some time $t$, both agents should influence each other with a sufficient strength across a certain time interval around $t$.

Assumption 3 (Pairwise reciprocity): There exists a constant $\varepsilon > 0$ such that for every unordered pair $\{i, j\}$ with $i, j \in N$ distinct, there exists a constant $T_{ij} > 0$ such that for all $t \geq 0$, if $a_{ij}(t) > 0$ or $a_{ji}(t) > 0$, then there exist $\underline{t}_{ij}, \overline{t}_{ij}$ such that
a) $\overline{t}_{ij} - \underline{t}_{ij} \leq T_{ij}$,
b) $t \in [\underline{t}_{ij}, \overline{t}_{ij}]$,
c) $\int_{\underline{t}_{ij}}^{\overline{t}_{ij}} a_{ij}(t)\,dt \geq \varepsilon$ and $\int_{\underline{t}_{ij}}^{\overline{t}_{ij}} a_{ji}(t)\,dt \geq \varepsilon$.

Assumption 3 provides a way of verifying non-instantaneous reciprocity entirely locally, by considering each pair of nodes separately. For instance, reciprocal weights of the type $a_{ij}(t) = 1 + (-1)^{\lfloor t \rfloor}$ and $a_{ji}(t) = 1 + (-1)^{\lfloor \omega t + \gamma \rfloor}$ satisfy pairwise non-instantaneous reciprocity for any constants $\omega > 0$, $\gamma \in \mathbb{R}$, although one of the weights may be null while the other is not. The following theorem is proved in Section V-B.

Theorem 2: Suppose that the interaction weights $a_{ij}(t)$ of system (3) satisfy Assumption 3 and are uniformly bounded above by some constant $M$. Then they satisfy Assumptions 1 and 2, and the conclusions of Theorem 1 hold.

Remark 1: Theorem 1 and Theorem 2 are stated for systems where the coefficients $a_{ij}(t)$ only depend on time, and the proof of Theorem 1 actually uses that fact. However, these results directly extend to solutions of systems with state-dependent coefficients $\tilde{a}_{ij}(t, x)$, with typically $\tilde{a}_{ij}(t, x)$ depending on $x_i$ and $x_j$. Indeed, suppose that $x$ is a solution of
$$x_i(t) = x_i(0) + \int_0^{t} \sum_{j} \tilde{a}_{ij}(s, x(s))\,\big(x_j(s) - x_i(s)\big)\,ds; \qquad (5)$$
then $x$ is also a solution of the linear time-varying system (3) with the ad hoc coefficients $a_{ij}(t) = \tilde{a}_{ij}(t, x(t))$, and Theorem 1 applies to that linear time-varying system. Verifying that such nonlinear systems satisfy Assumption 1 can be achieved when the structure of the interactions guarantees a sufficient reciprocity. We will see on an example in Section IV how this can be done. Note also that the existence or uniqueness of a solution to nonlinear systems of the form (5) is in general a complex issue. Similar extensions apply to randomized weights $a_{ij}$. Finally, one can verify that Theorem 1 and Theorem 2 extend to systems with agent values $x_i$ in $\mathbb{R}^n$ provided that the weights $a_{ij}$ remain scalar. It suffices indeed in that case to apply the result separately to each component of the states $x_i$.

[Fig. 1: Representations of the interactions taking place in Example 1 (a) and in Example 2 (b) of Section III-A, and of the connected components of the graph of persistent interactions, in which local consensuses occur.]

III. EXAMPLES

A. System with non-instantaneous reciprocity

In this subsection, we present two simple 4-agent systems whose convergence can be established by Theorem 1 and by no other consensus result available in the literature.

Example 1: Our first example is depicted in Fig. 1(a). It contains two weakly interacting subsystems, inside each of which two agents successively attract each other. More specifically, the interactions start at time $t = 2$ and are defined as follows. For every $p \geq 1$:
- if $t \in [2p, 2p+2]$, $a_{12} = a_{21} = a_{34} = a_{43} = 1/p^2$,
- if $t \in [2p, 2p+1]$, $a_{32} = a_{41} = 1/p$,
- if $t \in [2p+1, 2p+2]$, $a_{23} = a_{14} = 1/p$,
and all values of $a_{ij}(t)$ that are not explicitly defined are equal to 0. One can verify that this system satisfies Assumptions 1 and 2 with $t_p = 2p$, $K = 1$ and $M = 2$. We can thus apply Theorem 1 to establish its convergence. The graph of persistent interactions can also easily be built and contains the edges $(2, 3)$, $(3, 2)$, $(1, 4)$ and $(4, 1)$. There are thus two connected components, $\{2, 3\}$ and $\{1, 4\}$, and two local consensuses, $x_2^* = x_3^*$ and $x_1^* = x_4^*$. On the other hand, notice that the system does not satisfy any instantaneous reciprocity condition, so none of the available reciprocity-based results applies. Moreau's result does not apply either, due to the weak interactions in $1/p^2$ between the subsystems (the interactions are not lower bounded; see Section 3.3 in [19] for a detailed explanation), and because it can only imply convergence to a global consensus while this system produces two local consensuses. Observe also that our result still applies if the interactions are interrupted during arbitrarily long periods. Suppose indeed that the interactions defined above take place not during the intervals $[2p, 2p+1]$ and $[2p+1, 2p+2]$ but during the intervals $[p^2, p^2+1]$ and $[p^2+p, p^2+p+1]$. Assumptions 1 and 2 still apply with $t_p = p^2$.

Example 2: The second example involves a chain of four agents, which are attracted by their higher-index neighbor for $t \in [2p, 2p+1]$ and by their lower-index neighbor for $t \in [2p+1, 2p+2]$, as depicted in Fig. 1(b). Moreover, the ratios between the weights of the different interactions grow unbounded. Specifically, the interactions start again at $t = 2$, and for each $p \geq 1$:
- if $t \in [2p, 2p+1]$, $a_{12} = 1/p^2$, $a_{23} = 1/p$ and $a_{34} = 1$,
- if $t \in [2p+1, 2p+2]$, $a_{21} = 1/p^2$, $a_{32} = 1/p$ and $a_{43} = 1$,
and all values of $a_{ij}(t)$ that are not explicitly defined are equal to 0.
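As an aside (not part of the paper; the integration step, horizon and initial values below are arbitrary choices), this second example is easy to simulate, and the simulation reproduces the clustering established next: agent 1 remains essentially isolated while agents 2, 3 and 4 approach a common value.

```python
# Sketch of Example 2 (forward Euler). Indices 0..3 stand for agents 1..4.
import numpy as np

def weights_example2(t):
    A = np.zeros((4, 4))
    p = int(t) // 2
    if p >= 1:                                 # interactions start at t = 2
        if int(t) % 2 == 0:                    # t in [2p, 2p+1): attracted by higher index
            A[0, 1] = 1.0 / p**2               # a_12
            A[1, 2] = 1.0 / p                  # a_23
            A[2, 3] = 1.0                      # a_34
        else:                                  # t in [2p+1, 2p+2): attracted by lower index
            A[1, 0] = 1.0 / p**2               # a_21
            A[2, 1] = 1.0 / p                  # a_32
            A[3, 2] = 1.0                      # a_43
    return A

x = np.array([0.0, 1.0, 2.0, 3.0])
dt = 0.01
for k in range(int(2000.0 / dt)):
    A = weights_example2(k * dt)
    x = x + dt * (A @ x - A.sum(axis=1) * x)
print(x)   # x[1], x[2], x[3] approach a common value; x[0] stays essentially isolated
```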
One can verify again that Assumptions 1 and 2 hold with $t_p = 2p$, $K = 1$ and $M = 2$, so that the convergence of the system follows from Theorem 1. The graph of persistent interactions contains the edges $(2, 3)$, $(3, 2)$, $(3, 4)$ and $(4, 3)$, resulting in a local (trivial) consensus of agent 1, and a consensus between agents 2, 3 and 4. Again, the system satisfies no instantaneous reciprocity condition, so none of the available reciprocity-based results applies. Moreover, all the results of which we are aware that do not rely on reciprocity require the interactions to be bounded from above and from below, and establish convergence to a global consensus (see [28] for example). Since the ratios between the values of $a_{34}, a_{43}$ and $a_{32}, a_{23}$ grow unbounded and the system produces again two local consensuses, it is thus impossible to apply them. This remains the case even if we restrict our attention to the connected component $\{2, 3, 4\}$ and/or re-scale the values of the coefficients by scaling time. Besides, Theorem 1 would again apply in exactly the same way if the interactions were interrupted during arbitrarily long periods of time.

B. Oscillatory behavior under integral reciprocity - Necessity of Assumption 2

The following proposition formalizes the fact that Assumption 1 alone is not sufficient to guarantee convergence.

Proposition 3: There exist systems of the form (3) satisfying Assumption 1 (integral reciprocity) that admit non-converging trajectories.
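Before the formal proof, the construction can be visualized numerically. The sketch below (the growth choice $\rho_p = p + 1$ is an assumption of mine) integrates each one-directional phase in closed form: in a phase where only $a_{kl} = \rho_p$ is active, agent $k$ simply moves a fraction $1 - e^{-\rho_p}$ of the way toward agent $l$. Agents 1 and 3 settle at distinct limits while agent 2 keeps being pulled back and forth between them.

```python
# Sketch of the 3-agent non-converging construction (exact per-phase updates).
import math

x1, x2, x3 = 0.0, 0.5, 1.0
for p in range(200):
    rho = p + 1                       # grows, breaking the bound of Assumption 2
    c = 1.0 - math.exp(-rho)          # fraction of the gap closed in one phase
    x2 += c * (x1 - x2)               # t in [4p,   4p+1): a_21 = rho, 2 attracted by 1
    x1 += c * (x2 - x1)               # t in [4p+1, 4p+2): a_12 = rho, 1 attracted by 2
    x2 += c * (x3 - x2)               # t in [4p+2, 4p+3): a_23 = rho, 2 attracted by 3
    x3 += c * (x2 - x3)               # t in [4p+3, 4p+4): a_32 = rho, 3 attracted by 2
print(x1, x3, x3 - x1)                # x3 - x1 stays bounded away from 0
```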

To prove the proposition, we present a 3-agent system which satisfies Assumption 1 (reciprocity) but whose trajectory does not converge. The idea is to have agent 2 oscillating between agents 1 and 3, which successively attract it while remaining at a certain distance from each other, as depicted in Fig. 2. Agent 1 starts influencing agent 2. Since we only impose integral reciprocity, $a_{12}$ and $a_{21}$ do not have to be non-zero simultaneously. Also, because there is no uniform bound on the influence, the distance between agents 2 and 1 can become arbitrarily close to 0 by the time agent 2 starts influencing agent 1 back. So the overall influence of agent 2 over agent 1, that is $\int a_{12}(x_2 - x_1)\,dt$ over some time interval, can also be made arbitrarily small. This leads to an actual influence of 1 over 2 but not of 2 over 1. The same happens between agents 3 and 1, leading to convergence of agents 1 and 3 to distinct limits and to oscillations of agent 2. We now present the formal proof.

Proof: Let $(\rho_p)_{p \in \mathbb{N}}$ be a non-decreasing sequence such that $\rho_p \geq 1$ for all $p \in \mathbb{N}$. Let us consider a system with 3 agents where $x_1(0) = 0$, $x_2(0) = 1/2$ and $x_3(0) = 1$, and with the dynamics given by system (3) with weights
- if $t \in [4p, 4p+1)$, $a_{21}(t) = \rho_p$,
- if $t \in [4p+1, 4p+2)$, $a_{12}(t) = \rho_p$,
- if $t \in [4p+2, 4p+3)$, $a_{23}(t) = \rho_p$,
- if $t \in [4p+3, 4p+4)$, $a_{32}(t) = \rho_p$,
where only the non-zero weights have been detailed. Fig. 2 illustrates the dynamics of this system.

[Fig. 2: Dynamics of the 3-agent system over the successive phases $[4p, 4p+1]$, $[4p+1, 4p+2]$, $[4p+2, 4p+3]$, $[4p+3, 4p+4]$.]

Here, Assumption 1 holds with $K = 1$ for $t_p = 4p$. It is easy to see that $x_1(t)$ is non-decreasing, $x_3(t)$ is non-increasing and $x_1(t) \leq x_2(t) \leq x_3(t)$ for all $t \geq 0$. Integrating the dynamics of the system, we can show that for all $p \in \mathbb{N}$:
$$x_1(4p+4) = x_1(4p+2) \leq x_2(4p+2) = x_2(4p+1) = (1 - e^{-\rho_p})\,x_1(4p) + e^{-\rho_p}\,x_2(4p) \leq (1 - e^{-\rho_p})\,x_1(4p) + e^{-\rho_p}\,x_3(0),$$
and that
$$x_3(4p+4) \geq x_2(4p+4) = x_2(4p+3) = e^{-\rho_p}\,x_2(4p+2) + (1 - e^{-\rho_p})\,x_3(4p+2) \geq e^{-\rho_p}\,x_1(4p) + (1 - e^{-\rho_p})\,x_3(4p) \geq e^{-\rho_p}\,x_1(0) + (1 - e^{-\rho_p})\,x_3(4p).$$
Combining the two previous results and the initial conditions then gives
$$1 + \big(x_3(4p+4) - x_1(4p+4)\big) \;\geq\; (1 - e^{-\rho_p})\,\big(1 + (x_3(4p) - x_1(4p))\big).$$
We observe that the term $1 + (x_3(4p) - x_1(4p))$ remains larger than the product $\big(1 + (x_3(0) - x_1(0))\big)\prod_{p'=0}^{p-1}(1 - e^{-\rho_{p'}})$. Taking a sequence $\rho_p$ growing sufficiently fast (and thus breaking the uniform bound of Assumption 2), one can make this term converge to a value arbitrarily close to its initial value 2. Then, $(x_3(4p))$ and $(x_1(4p))$ do not converge to the same value. As a consequence, one can verify that $x_2$ keeps oscillating between $x_1$ and $x_3$. Hence, the system does not converge.

IV. APPLICATION TO MOBILE ROBOTS WITH INTERMITTENT ULTRASONIC COMMUNICATION

In this section we apply our results to a realistic system of mobile robots evolving in the plane $\mathbb{R}^2$ and communicating using ultrasonic sensors. These sensors make for an affordable and thus widespread contactless means of measuring distances [29], but are subject to certain limitations, as detailed below. The objective of the group of robots is to achieve practical rendezvous, i.e., all robots should eventually lie in a ball of a certain maximal radius (see e.g. [30]). The robots have several functional constraints. The ultrasonic sensors in use are not accurate when measuring distances smaller than a radius $d_0 > 0$; we thus assume that the robots cannot make use of such measurements and are blind at short range. Also, the robots' engines are limited and the velocity of each robot cannot exceed a maximum of $\mu > 0$ in norm.
Most importantly, in order to save energy, the robots activate their sensors intermittently and in an asynchronous way: robot $i$ wakes up at every time $t_k^i$ and monitors its environment over the time interval $[t_k^i, t_k^i + \delta_{\min}]$, for some $\delta_{\min} > 0$. (For simplicity, we take the same $\delta_{\min}$ for every robot, but this is not crucial for our result.) In addition, we assume that the sequence $(t_k^i)$ satisfies $t_{k+1}^i - t_k^i \in [\delta_{\min}, \delta_{\max}]$ for every $k \in \mathbb{N}$, for some $\delta_{\max} > \delta_{\min}$, and $t_0^i \leq \delta_{\max}$.

We will provide a simple control law for the robots ensuring some form of non-instantaneous reciprocity. Our result in Section II will then allow us to establish (i) the convergence of all robot positions, and (ii) asymptotic practical consensus, that is, all robots eventually lie at a distance from each other smaller than a certain threshold. This threshold is proportional to $d_0$, the distance below which robots cannot sense each other. Since it converges, the system will not suffer from infinite oscillatory behaviors as in the example presented in Section III-B. To the best of our knowledge, such results cannot be obtained with any other convergence result available in the literature. One reason for this is that most results on consensus in the literature apply to systems which converge to a single consensus. This is clearly not the case for the system considered here, since agents stop interacting at short distance.

Our control law can be expressed as the following saturated consensus equation:
$$\dot{x}_i(t) = \mathrm{sat}\Big( \sum_{j \in N} b_{ij}(t)\,\big(x_j(t) - x_i(t)\big) \Big). \qquad (6)$$
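A direct way to read (6) is as a per-robot velocity command: sum the engaged relative positions and clip the result to the speed limit $\mu$. The sketch below (helper names and the value of $\mu$ are assumptions of mine) implements the natural clipping interpretation of sat, whose precise definition is recalled in the proof of Proposition 4 below.

```python
# Sketch of the saturated control law (6) for planar robots.
import numpy as np

MU = 0.5                                   # assumed speed limit mu > 0

def sat(v, mu=MU):
    """Clip the planar velocity v to norm mu, leaving short vectors untouched."""
    speed = np.linalg.norm(v)
    return v if speed <= mu else (mu / speed) * v

def velocity_command(i, x, b):
    """x: (n, 2) positions; b: (n, n) binary engagement weights b_ij(t)."""
    pull = (b[i, :, None] * (x - x[i])).sum(axis=0)   # sum_j b_ij (x_j - x_i)
    return sat(pull)
```

The weights $b_{ij}$ themselves are set by the engage/reciprocate rules described next.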

We now make explicit how the interaction weights $b_{ij}$ are set. The idea is represented in Fig. 3: for $t \in [t_k^i, t_k^i + \delta_{\min}]$, agent $i$ monitors its environment. During this window, agent $i$ sets $b_{ij}(t)$ to 1 whenever one of the two following situations occurs:
1) its distance to $j$ is larger than some appropriate radius $d_1 > d_0$ (engage), or
2) its distance to $j$ is larger than $d_0$ and $j$ has recently been influenced by $i$ ($b_{ji} = 1$) because $j$ was at a distance larger than $d_1$ from $i$ at that time (reciprocate).
The latter part of the algorithm is designed to ensure reciprocity, and the presence of $d_1$ is needed to ensure that $i$ and $j$ remain sufficiently distant for a measurement to be made when $i$ or $j$ needs to reciprocate. Formally, we set $b_{ij}(t) = 0$ by default, and set it to 1 in two cases:
($i$ engages) there exists $k \in \mathbb{N}$ such that $t \in [t_k^i, t_k^i + \delta_{\min}]$ and $\|x_i(t_k^i) - x_j(t_k^i)\| \geq d_1$; (7)
($i$ reciprocates) there exists $h \in \mathbb{N}$ such that $t \in [t_h^i, t_h^i + \delta_{\min}]$ and $\|x_i(t) - x_j(t)\| \geq d_0$, and there exists $k \in \mathbb{N}$ such that $t_k^j \in [t_h^i - \delta_{\max}, t_h^i]$ and $\|x_i(t_k^j) - x_j(t_k^j)\| \geq d_1$. (8)

Remark 2: Condition (7) can easily be implemented. To implement Condition (8), agent $i$ has to keep in memory the last activation time $t_k^j$ of $j$ at which the distance between $i$ and $j$ was larger than $d_1$. This could for example be achieved by having $j$ send a message to $i$ at that time.

[Fig. 3: Interactions in the group of mobile robots with intermittent ultrasonic communication of Section IV. Robot $i$ activates its sensor at $t = t_k^i$ (event 1), engages (event 2) and deactivates its sensor at $t = t_k^i + \delta_{\min}$ (event 3); robot $j$ activates its sensor at $t = t_h^j$ (event 4), reciprocates and deactivates it at $t = t_h^j + \delta_{\min}$. Events 1, 2 and 3 occur successively, and so do events 4, 5 and 6. Event 4 occurs after event 1, with $t_h^j \in [t_k^i, t_k^i + \delta_{\max}]$. When event 2 occurs, $a_{ij}(t) > 0$, and when event 4 occurs, $a_{ji}(t) > 0$. Proposition 4 provides conditions guaranteeing that event 4 always takes place when event 2 has occurred, which ensures interaction reciprocity.]

Under these communication rules, we have the desired result.

Proposition 4: Consider system (6) where interaction occurs according to Conditions (7) and (8). Also assume that there holds
$$4\,\delta_{\max}\,\mu \;\leq\; d_1 - d_0. \qquad (9)$$
Then, the group of robots asymptotically achieves practical rendezvous: $x_i^* = \lim_{t\to\infty} x_i(t)$ exists for every $i \in N$, and $\lim_{t\to\infty} \Delta(t) \leq d_1$, where $\Delta(t) = \max_{i,j \in N} \|x_i(t) - x_j(t)\|$.

Proof: Observe first that system (6) can be rewritten in the form of system (3) with
$$a_{ij}(t) = \frac{\mu\, b_{ij}(t)}{\big\| \sum_{k \in N} b_{ik}(t)\,(x_k(t) - x_i(t)) \big\|} \;\; \text{if } \Big\| \sum_{k \in N} b_{ik}(t)\,(x_k(t) - x_i(t)) \Big\| \geq \mu, \quad \text{and } a_{ij}(t) = b_{ij}(t) \text{ otherwise}, \qquad (10)$$
where the $b_{ij}(t)$ are the weights specified above, and where the function $\mathrm{sat} : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by $\mathrm{sat}(x) = x$ if $\|x\| \leq \mu$ and $\mathrm{sat}(x) = \frac{\mu}{\|x\|}\,x$ otherwise. The saturation guarantees that the magnitude of the velocity of each robot remains below its limit.

Since $b_{ik}(t) = 0$ whenever $\|x_k(t) - x_i(t)\| < d_0$, $a_{ij}$ is upper bounded and thus is a non-negative measurable function, summable on bounded intervals of $\mathbb{R}_+$. Moreover, since $\Delta(t) = \max_{i,j \in N} \|x_i(t) - x_j(t)\|$ is clearly non-increasing, it follows from the definition of $a_{ij}(t)$ that
$$a_{ij}(t) \;\geq\; b_{ij}(t)\, \min\!\Big( \frac{\mu}{n\,\Delta(0)},\, 1 \Big), \qquad (11)$$
where $\Delta(0)$ is the initial group diameter. In order to apply Theorem 2, we now show that the system under intermittent ultrasonic communication described above satisfies Assumption 3 with
$$\varepsilon = \min\!\Big( \frac{\delta_{\min}\,\mu}{n\,\Delta(0)},\, \delta_{\min} \Big) \quad \text{and} \quad T = 2\,\delta_{\max}.$$
Let $t \geq 0$ be such that $a_{ij}(t) > 0$. Then $b_{ij}(t) > 0$ and at least one of Conditions (7) and (8) is satisfied.
Suppose first that Condition (7) is satisfied and denote by $k$ the integer such that $t \in [t_k^i, t_k^i + \delta_{\min}]$. Clearly, Condition (7) also holds for every $s \in [t_k^i, t_k^i + \delta_{\min}]$. We set $\underline{t}_{ij} = t_k^i$ and $\overline{t}_{ij} = t_k^i + 2\delta_{\max} \geq t_k^i + \delta_{\min}$. Clearly, there holds $t \in [\underline{t}_{ij}, \overline{t}_{ij}]$ and $\overline{t}_{ij} - \underline{t}_{ij} \leq 2\delta_{\max} = T$, so that Conditions (a) and (b) of Assumption 3 hold. Moreover, the non-negativity of $a_{ij}$ implies that
$$\int_{\underline{t}_{ij}}^{\overline{t}_{ij}} a_{ij}(s)\,ds \;\geq\; \int_{t_k^i}^{t_k^i+\delta_{\min}} a_{ij}(s)\,ds \;\geq\; \min\!\Big(\frac{\mu}{n\,\Delta(0)}, 1\Big) \int_{t_k^i}^{t_k^i+\delta_{\min}} b_{ij}(s)\,ds \;=\; \min\!\Big(\frac{\delta_{\min}\,\mu}{n\,\Delta(0)}, \delta_{\min}\Big) \;=\; \varepsilon,$$
where we have used (11) and the fact that $b_{ij}(s) = 1$ for all $s \in [t_k^i, t_k^i + \delta_{\min}]$, since we have seen that Condition (7) holds for those values.

It remains to prove that $\int_{\underline{t}_{ij}}^{\overline{t}_{ij}} a_{ji}(s)\,ds \geq \varepsilon$. Since $t_{h+1}^j - t_h^j \leq \delta_{\max}$ for all $h \in \mathbb{N}$ and $t_0^j \leq \delta_{\max}$, there exists $h \in \mathbb{N}$ such that $t_h^j \in [t_k^i, t_k^i + \delta_{\max}]$, and thus $[t_h^j, t_h^j + \delta_{\max}] \subset [t_k^i, t_k^i + 2\delta_{\max}] = [\underline{t}_{ij}, \overline{t}_{ij}]$. We show that the reciprocate Condition (8) is satisfied for the pair $(j, i)$ for every $s \in [t_h^j, t_h^j + \delta_{\min}]$. The second part of the condition directly follows from $t_h^j \in [t_k^i, t_k^i + \delta_{\max}]$ and $\|x_i(t_k^i) - x_j(t_k^i)\| \geq d_1$. For the first part, observe that $\|\dot{x}_i\| \leq \mu$ (and the same holds for $j$), and that $\|x_i(t_k^i) - x_j(t_k^i)\| \geq d_1$ by assumption. Therefore, for any time $s \in [t_h^j, t_h^j + \delta_{\min}] \subset [t_k^i, t_k^i + 2\delta_{\max}]$, we have
$$\|x_i(s) - x_j(s)\| \;\geq\; \|x_i(t_k^i) - x_j(t_k^i)\| - 4\,\mu\,\delta_{\max} \;\geq\; d_1 - (d_1 - d_0) \;=\; d_0,$$
where we have used (9). As a consequence, the first part of Condition (8) also holds, implying that $b_{ji}(s) = 1$ for every $s \in [t_h^j, t_h^j + \delta_{\min}]$. We get again
$$\int_{\underline{t}_{ij}}^{\overline{t}_{ij}} a_{ji}(s)\,ds \;\geq\; \min\!\Big(\frac{\mu}{n\,\Delta(0)}, 1\Big) \int_{t_h^j}^{t_h^j+\delta_{\min}} b_{ji}(s)\,ds \;=\; \varepsilon,$$
which establishes that Assumption 3 holds in that case.

Suppose now that $a_{ij}(t) > 0$ because Condition (8) is satisfied at $t$ for the pair $(i, j)$. Then one can easily verify that Condition (7) was satisfied for the pair $(j, i)$ for all $s \in [t_k^j, t_k^j + \delta_{\min}]$, for some $t_k^j$ with $t - t_k^j \leq \delta_{\max} + \delta_{\min}$, and an argument symmetric to the one developed above shows that Assumption 3 also holds.

Since the weights $a_{ij}(t)$ are upper bounded, applying Theorem 2 (or more precisely its direct extension to $\mathbb{R}^2$, see Remark 1) shows that (i) the system converges: $x_i^* = \lim_{t\to\infty} x_i(t)$ exists for every $i$, and (ii) $x_i^* \neq x_j^*$ only if $\int_0^{\infty} a_{ij}(t)\,dt < \infty$. To conclude the proof, suppose, to obtain a contradiction, that $\lim_{t\to\infty} \Delta(t) > d_1$, and thus that $\|x_i^* - x_j^*\| > d_1$ for some $i, j$. The continuity of $x$ implies that $\|x_i(t) - x_j(t)\| > d_1$ for all $t > s$, for some $s$, and in particular for all $t_k^i > s$. It then follows from the engage rule (7) that $b_{ij}(t)$ would be set to 1 on infinitely many time intervals of length at least $\delta_{\min}$. Besides, it follows from (11) that $a_{ij}$ and $b_{ij}$ remain within a bounded ratio, so that we would have $\int_0^{\infty} a_{ij}(t)\,dt = \infty$. However, we have seen that $x_i^* \neq x_j^*$ only if $\int_0^{\infty} a_{ij}(t)\,dt < \infty$, so there should hold $x_i^* = x_j^*$, in contradiction with our hypothesis. We thus have $\lim_{t\to\infty} \Delta(t) \leq d_1$.

Note that it is actually possible to have the robots converge to final positions lying within distances smaller than the $d_1$ of Proposition 4 from each other. This can be achieved by decreasing their maximal speed $\mu$ and the distance $d_1$ when approaching convergence. Such more evolved control laws are however outside the scope of this section, whose goal was to demonstrate the use of our results from Section II.

V. PROOFS

A. Proof of Theorem 1

Before we prove Theorem 1, we provide several intermediate results. Our proof uses the following result on cut-balanced discrete-time consensus systems. This result is a special case of Theorem 1 in [22] restricted to deterministic systems.

Theorem 5: Let $y : \mathbb{N} \to \mathbb{R}^n$ be a solution to
$$y_i(p+1) = \sum_{j=1}^{n} b_{ij}(p)\, y_j(p), \qquad (12)$$
where $b_{ij}(p) \geq 0$ and $\sum_{j=1}^{n} b_{ij}(p) = 1$. Suppose that the following assumptions hold:
a) Lower bound on diagonal coefficients: there exists a $\beta > 0$ such that $b_{ii}(p) \geq \beta$ for all $i, p$.
b) Cut balance: there exists a $K > 0$ such that for every $p$ and every non-empty proper subset $S$ of $N$, there holds
$$\sum_{i \in S,\, j \notin S} b_{ij}(p) \;\leq\; K \sum_{i \in S,\, j \notin S} b_{ji}(p). \qquad (13)$$
Then, $y_i^* = \lim_{p\to\infty} y_i(p)$ exists for every $i$.
Moreover, let $G' = (N, E')$ be the directed graph where $(j, i) \in E'$ if $\sum_{p=0}^{\infty} b_{ij}(p) = +\infty$. There is a path from $i$ to $j$ in $G'$ if and only if there is a path from $j$ to $i$, and in that case there holds $y_i^* = y_j^*$.

Unlike certain results pre-dating those in [22] (e.g. Theorem 2 in [31]), Theorem 5 does not require the existence of a uniform lower bound on the positive coefficients $b_{ij}$, that is, the existence of a $\beta$ such that $b_{ij}(p) > 0 \Rightarrow b_{ij}(p) \geq \beta$. This seemingly minor difference is actually essential for our purpose, as there is in general no such uniform lower bound in the context of our proof.

To apply Theorem 5, we focus on the values taken by the states at the times $t_p$. Remember that the sequence of times $t_p$ defines the intervals over which the integral reciprocity is satisfied.

Lemma 6: The sequence of states $(x(t_p))$ can be written as the trajectory of the discrete-time consensus system obtained by sampling (3),
$$x_i(t_{p+1}) = \sum_{j \in N} \phi_{ij}(p)\, x_j(t_p), \qquad (14)$$
where the weights $\phi_{ij}(p)$ are non-negative and satisfy $\sum_{j \in N} \phi_{ij}(p) = 1$. This sampled system always exists and is unique for given weights $a_{ij}(t)$ and sampling times $t_p$. The weights $\phi_{ij}(p)$ are independent of the states $x(t)$. In particular, if $x_j(t_p) = 1$ for $j \in S$ and $x_k(t_p) = 0$ for $k \notin S$, for some $S \subseteq N$, there holds
$$\sum_{j \in S} \phi_{ij}(p) = x_i(t_{p+1}). \qquad (15)$$
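Numerically, the sampled weights $\phi_{ij}(p)$ of Lemma 6 can be estimated in the spirit of the artificial initial conditions of Remark 3 below: integrating (3) from each canonical basis vector over $[t_p, t_{p+1}]$ recovers one column of $\Phi(t_p, t_{p+1})$. The sketch below (my own names; forward Euler, so only an approximation) also checks non-negativity and that rows sum to one.

```python
# Sketch: estimate the sampled transition matrix Phi(t_p, t_{p+1}) of Lemma 6
# by integrating (3) from each canonical basis vector (forward Euler).
import numpy as np

def sampled_weights(a, t_p, t_p1, n, dt=1e-3):
    Phi = np.zeros((n, n))
    for j in range(n):
        x = np.zeros(n); x[j] = 1.0          # artificial initial condition e_j
        t = t_p
        while t < t_p1:
            A = a(t)
            x = x + dt * (A @ x - A.sum(axis=1) * x)
            t += dt
        Phi[:, j] = x                        # phi_ij(p) = x_i(t_{p+1})
    return Phi

# Example check with the alternating two-agent weights defined earlier:
# Phi = sampled_weights(a, 2.0, 4.0, n=2)
# assert (Phi >= -1e-9).all() and np.allclose(Phi.sum(axis=1), 1.0)
```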

Remark 3: The equality (15) provides a way of computing or bounding certain sums of the weights $\phi_{ij}(p)$ by considering the evolution of the system starting from artificial states, where $x_j(t_p) = 1$ for some agents and $x_k(t_p) = 0$ for the others. Note that these artificial states are only a formal tool to compute the weights $\phi_{ij}(p)$, and their use does not result in any loss of generality.

Proof: Denote by $\Phi(t, T)$ the fundamental matrix of the linear dynamics (3), which is uniquely defined [32] by $x(T) = \Phi(t, T)\,x(t)$. We define $\phi_{ij}(p)$ as the $ij$-th coefficient of the matrix $\Phi(t_p, t_{p+1})$. So the $\phi_{ij}(p)$ are unique and equation (14) is satisfied. Moreover, for given weights $a_{ij}(t)$, the matrix $\Phi(t, T)$ is independent of the state $x(t)$ and so are the weights $\phi_{ij}(p)$. So if we assume artificial states $x_j(t_p) = 1$ for $j \in S$ and $x_k(t_p) = 0$ for $k \notin S$, we obtain (15) from equation (14). And since system (3) preserves the non-negativity of the states, it follows from equation (15) applied to $S = \{j\}$ that $\phi_{ij}(p) \geq 0$ for every $i, j, p$. Finally, we can use the Peano-Baker formula [33] to show that $\sum_{j \in N} \phi_{ij}(p) = 1$: the formula gives $\Phi(t, T)$ as the limit of a recursive series, $\Phi(t, T) = \lim_{n \to \infty} M_n(T)$ with
$$M_0(\tau) = I \quad \text{and} \quad M_{n+1}(\tau) = I - \int_t^{\tau} L(s)\, M_n(s)\, ds,$$
where $I$ is the identity matrix and $L(s)$ is the Laplacian matrix of $A(s) = (a_{ij}(s))$, i.e. with diagonal elements equal to $\sum_{j \in N} a_{ij}(s)$ and off-diagonal elements equal to $-a_{ij}(s)$. Since $L\mathbf{1} = 0$ with $\mathbf{1}$ the vector of all ones, we have from the recursive equation that $M_n \mathbf{1} = \mathbf{1}$ and, by continuity, $\Phi(t, T)\,\mathbf{1} = \mathbf{1}$; thus $\sum_{j \in N} \phi_{ij}(p) = 1$.

To obtain more insight on the discrete-time weights $\phi_{ij}$, we give the next proposition, which bounds the discrete-time weights $\phi_{ij}$ using the continuous-time weights $a_{ij}$. For concision, we will omit the explicit reference to time in $a_{ij}$ when the context prevents any ambiguity.

Proposition 7: Under the uniform bound Assumption 2, we have for every proper subset of agents $S$ and all $p \geq 0$ that
$$G \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt \;\leq\; \sum_{i \in S}\sum_{j \notin S} \phi_{ij}(p) \;\leq\; n \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt,$$
with $G = \exp(-2nM)/n$.

Proof: Let $p \in \mathbb{N}$ and let $S$ be a proper subset of $N$. We assume
$$x_i(t_p) = 0 \;\;\forall i \in S, \qquad x_j(t_p) = 1 \;\;\forall j \notin S, \qquad (16)$$
as suggested in Remark 3. We first show the left inequality. We show that, starting from state (16) at time $t_p$, no agent $j \notin S$ can be arbitrarily close to 0 at time $t_{p+1}$. We have for all $\tau \in [t_p, t_{p+1}]$,
$$x_j(\tau) = x_j(t_p) + \int_{t_p}^{\tau} \sum_{k \in N} a_{jk}(t)\,\big(x_k(t) - x_j(t)\big)\,dt \;\geq\; x_j(t_p) - \int_{t_p}^{\tau} \sum_{k \in N} a_{jk}(t)\, x_j(t)\,dt,$$
where we used $x_k(t) \geq 0$ for all $k \in N$. We use Grönwall's inequality [34] and Assumption 2 (upper bound on the interactions on each $[t_p, t_{p+1}]$) to obtain
$$x_j(\tau) \geq e^{-nM}, \qquad \tau \in [t_p, t_{p+1}],\;\; j \notin S. \qquad (17)$$
We will use the bound (17) to establish that, due to the attraction from agents not in $S$, the states $x_i(t_{p+1})$ of the agents $i \in S$ at time $t_{p+1}$ are all at least a certain positive distance from 0. Let now $h \in S$ be such that
$$\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{hj}(t)\,dt = \max_{i \in S} \sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt,$$
i.e., agent $h$ is the element of $S$ receiving the highest influence from the rest of the group. There holds
$$\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{hj}(t)\,dt \;\geq\; \frac{1}{n} \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt. \qquad (18)$$
Using the non-negativity $x_i \geq 0$ for $i \in S$ and the lower bound (17) on $x_j$ for $j \notin S$, we have for all $\tau \in [t_p, t_{p+1}]$,
$$x_h(\tau) = x_h(t_p) + \int_{t_p}^{\tau} \sum_{j \notin S} a_{hj}\,(x_j - x_h)\,dt + \int_{t_p}^{\tau} \sum_{k \in S} a_{hk}\,(x_k - x_h)\,dt \;\geq\; e^{-nM} \int_{t_p}^{\tau} \sum_{j \notin S} a_{hj}\,dt \;-\; \int_{t_p}^{\tau} \sum_{k \in N} a_{hk}\, x_h\,dt,$$
where we have also used $x_h(t_p) = 0$. It follows then from Grönwall's inequality that
$$x_h(t_{p+1}) \;\geq\; e^{-nM} \int_{t_p}^{t_{p+1}} e^{-\int_{\tau}^{t_{p+1}} \sum_{k \in N} a_{hk}\,ds}\, \sum_{j \notin S} a_{hj}\,d\tau. \qquad (19)$$
The expression inside the exponential can be bounded using Assumption 2 (upper bound), together with (18). We then have
$$x_h(t_{p+1}) \;\geq\; \frac{1}{n}\, e^{-2nM} \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}\,dt. \qquad (20)$$
Moreover, $\phi_{ij}(p) \geq 0$ and equation (15) yield
$$\sum_{i \in S}\sum_{j \notin S} \phi_{ij}(p) \;\geq\; \sum_{j \notin S} \phi_{hj}(p) = x_h(t_{p+1}).$$
We conclude the first part of the proof by combining the two previous equations.

We now turn to the second inequality. For $t \in [t_p, t_{p+1}]$, let $\bar{x}_S(t) = \max_{i \in S} x_i(t)$ be the largest value among the agents of $S$ at time $t$, and let $m(t)$ be the index of (one of) the agents holding that largest value at time $t$. It was shown in [17, Proposition 2] that
$$\bar{x}_S(t) = \bar{x}_S(t_p) + \int_{t_p}^{t} \sum_{k \in N} a_{m(\tau)k}\,\big(x_k - \bar{x}_S\big)\,d\tau.$$
Notice that the choice of state (16) implies $\bar{x}_S(t_p) = 0$. Since $x_j \leq 1$ for $j \notin S$ and $x_i \leq \bar{x}_S \leq 1$ for $i \in S$, we have
$$\bar{x}_S(t) \;\leq\; \int_{t_p}^{t} \sum_{j \notin S} a_{m(\tau)j}\,\big(1 - \bar{x}_S\big)\,d\tau \;\leq\; \int_{t_p}^{t} \sum_{i \in S}\sum_{j \notin S} a_{ij}\,\big(1 - \bar{x}_S\big)\,d\tau.$$
Grönwall's inequality then yields
$$\bar{x}_S(t_{p+1}) \;\leq\; 1 - \exp\Big(-\sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}\,dt\Big) \;\leq\; \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}\,dt, \qquad (21)$$
from which we conclude
$$\sum_{i \in S}\sum_{j \notin S} \phi_{ij}(p) = \sum_{i \in S} x_i(t_{p+1}) \;\leq\; n\, \bar{x}_S(t_{p+1}).$$

The previous proposition serves to transpose the cut-balance assumption provided in Theorem 1 to the discrete-time weights $\phi_{ij}(p)$. In particular, we can now show that the weights $\phi_{ij}(p)$ satisfy the conditions of Theorem 5.

Lemma 8: Under the non-instantaneous reciprocity Assumption 1 and the uniform bound Assumption 2, the following properties hold:
a) There exists a uniform lower bound $\beta > 0$ on the diagonal elements: $\phi_{ii}(p) \geq \beta$ for all $p$ and $i$.
b) The weights $\phi_{ij}(p)$ satisfy the cut-balance assumption (13) for some $K'$ determined by the constants $K$ and $M$ of Assumptions 1 and 2.

Note that (b) would in general not be true for certain stronger forms of reciprocity. In particular, $\int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt \leq K \int_{t_p}^{t_{p+1}} a_{ji}(t)\,dt$ does not imply the existence of a $K'$ such that $\phi_{ij}(p) \leq K' \phi_{ji}(p)$.

Proof: The proof of (a) is as follows. For arbitrary $k \in N$ and $p$, we suppose that $x_k(t_p) = 0$ and $x_i(t_p) = 1$ for every $i \neq k$. A reasoning similar to that leading to (17) in the proof of Proposition 7 shows that $x_k(t_{p+1}) \leq 1 - e^{-nM}$. It follows then from Lemma 6 applied to $S = N \setminus \{k\}$ that $\sum_{j \in N, j \neq k} \phi_{kj}(p) \leq 1 - e^{-nM}$, and thus that $\phi_{kk}(p) \geq e^{-nM}$, which establishes (a).

We now prove statement (b). The second inequality of Proposition 7 applied to $S$ states that
$$\sum_{i \in S}\sum_{j \notin S} \phi_{ij}(p) \;\leq\; n \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt. \qquad (22)$$
On the other hand, the first inequality of the same proposition applied to $N \setminus S$ yields
$$G \sum_{i \notin S}\sum_{j \in S} \int_{t_p}^{t_{p+1}} a_{ij}(t)\,dt \;\leq\; \sum_{i \notin S}\sum_{j \in S} \phi_{ij}(p),$$
which can be rewritten as
$$G \sum_{i \in S}\sum_{j \notin S} \int_{t_p}^{t_{p+1}} a_{ji}(t)\,dt \;\leq\; \sum_{i \in S}\sum_{j \notin S} \phi_{ji}(p). \qquad (23)$$
Statement (b) with $K' = nK/G$ then follows directly from Assumption 1 and the inequalities (22) and (23).

Proof of Theorem 1: Since the conditions of Lemma 8 are satisfied, Theorem 5 applies. Thus, the sequence $x(t_p)$ converges to some $x^* \in \mathbb{R}^n$. Denote by $G' = (N, E')$ the directed graph where $(j, i) \in E'$ if $\sum_{p=0}^{\infty} \phi_{ij}(p) = +\infty$. Theorem 5 implies that the (strongly) connected components of $G'$ are entirely disconnected from each other (i.e., the different strongly connected components are not joined by any edge), and that $x_i^* = x_j^*$ if $i$ and $j$ belong to the same component. A local consensus thus takes place on each such component for the discrete-time system $y(p) = x(t_p)$.

Now, the graph $G$ of persistent interactions defined in the statement of Theorem 1 is in general different from $G'$. However, as a direct corollary of Proposition 7, the graphs $G$ and $G'$ have the same connected components.

It remains to show that the continuous-time trajectory $x(t)$ converges to the same $x^*$ as the sequence $(x(t_p))$. We prove this by showing that, for each set of nodes $S \subseteq N$ inducing a strongly connected component of $G$ (or of $G'$), both the minimum $\underline{x}_S(t) = \min_{i \in S} x_i(t)$ and the maximum $\bar{x}_S(t) = \max_{i \in S} x_i(t)$ converge to the same value. Since $S$ is a connected component of $G$, the integral influence $\sum_{i \in S}\sum_{j \notin S} \int_0^{\infty} a_{ij}(t)\,dt$ is finite. For any $\mu > 0$, there exists some $T_{\mu} \geq 0$ such that $\sum_{i \in S}\sum_{j \notin S} \int_{T_{\mu}}^{\infty} a_{ij}(t)\,dt < \mu$. We denote again by $m(\tau)$ the index of (one of) the agents of $S$ with the largest value, so that $x_{m(\tau)}(\tau) = \bar{x}_S(\tau)$. Then, for all $v > u \geq T_{\mu}$, we have
$$\bar{x}_S(v) - \bar{x}_S(u) \;\leq\; \int_u^v \sum_{j \notin S} a_{m(\tau)j}\,\big(x_j(\tau) - x_{m(\tau)}(\tau)\big)\,d\tau \;\leq\; \int_u^v \sum_{j \notin S} a_{m(\tau)j}\,\big|x_j(\tau) - x_{m(\tau)}(\tau)\big|\,d\tau \;\leq\; \int_{T_{\mu}}^{\infty} \sum_{i \in S}\sum_{j \notin S} a_{ij}\,\big|x_j(\tau) - x_i(\tau)\big|\,d\tau \;\leq\; \mu\,\Delta(0),$$
where $\Delta(0) = \max_{i \in N} x_i(0) - \min_{i \in N} x_i(0)$. This shows that the values $\bar{x}_S(t)$ satisfy a Cauchy criterion and thus converge. Since the subsequence $(\bar{x}_S(t_p))$ converges to $x_i^*$ for some $i \in S$, there holds $\lim_{t \to +\infty} \bar{x}_S(t) = x_i^*$. We can apply the same reasoning to show that $\underline{x}_S$ also converges, with $\lim_{t \to +\infty} \underline{x}_S(t) = x_i^*$. We conclude that for all $i \in S$, the $x_i(t)$ converge to the same limit $x_i^*$.

B. Proof of Theorem 2

For concision, we say that an unordered pair $\{i, j\} = \{j, i\}$ is active over an interval $I$ if $\int_{I} a_{ij}(t)\,dt \geq \varepsilon$ and $\int_{I} a_{ji}(t)\,dt \geq \varepsilon$. We let $T = \max_{i,j} T_{ij}$. The following observation compiles some properties that follow directly from the definition of being active.

Observation 1:
a) Consider two intervals $I, J$ with $I \subseteq J$. If $\{i, j\}$ is active over $I$, it is active over $J$.
b) Under Assumption 3, if $a_{ij}(t) > 0$, then $\{i, j\}$ is active over $[t - T, t + T]$.
c) Under Assumption 3, if $\{i, j\}$ is not active over $[t, t']$, then $a_{ij}(s) = a_{ji}(s) = 0$ for all $s \in [t + T, t' - T]$.

The next proposition is the core of our proof; it allows building a sequence of times $t_k$ valid for Assumptions 1 and 2.

Proposition 9: Suppose that Assumption 3 is satisfied, and let $M = M_1 + M_2$, where $M_1, M_2$ are any constants satisfying
$$M_2 > n(n-1)T + T \quad \text{and} \quad M_1 \geq M_2 + T. \qquad (24)$$
Then, there exists a sequence $t_0, t_1, \dots$ with $t_0 = 0$ and $t_{k+1} - t_k \leq M$, such that the following condition $A_k$ holds for every $k$.
$A_k$: for all $i, j \in N$ distinct, Condition $A1_k$ or Condition $A2_k$ holds, with
$A1_k$: for all $t \in [t_k, t_{k+1}]$, $a_{ij}(t) = 0$;
$A2_k$: $\{i, j\}$ is active over $[t_k, t_{k+1}]$.

The proof of Proposition 9 is based on an induction that makes use of the intermediate condition $B_k$:
$B_k$: for all $i, j \in N$ distinct, Condition $B1_k$ or Condition $B2_k$ holds, with
$B1_k$: for all $t \in [t_k, t_k + T]$, $a_{ij}(t) = 0$;
$B2_k$: $\{i, j\}$ is active over $[t_k, t_k + M_1]$.

The next lemma treats the initialization of the induction.

Lemma 10: Suppose that $a_{ij}(t) = 0$ for all $t < 0$ (extending the weights by zero for negative times), and let $t_0 = 0$. Then Condition $B_0$ holds.

Proof: Suppose that $B1_0$ does not hold, i.e. $a_{ij}(t) > 0$ for some $t \in [0, T]$. Then Observation 1(b) implies that $\{i, j\}$ is active over $[t - T, t + T]$, which by Observation 1(a) implies that $\{i, j\}$ is active over $[\min(0, t - T), 2T]$. Since $a_{ij}(t') = 0$ for all $t' < 0$, it follows that $\{i, j\}$ is active over $[0, 2T]$, and since $M_1 \geq M_2 + T > n(n-1)T + 2T \geq 2T$ holds according to equation (24), $\{i, j\}$ is active over $[0, M_1]$ (again thanks to Observation 1(a)). Thus $B2_0$ holds and so does $B_0$.

Proposition 11 (Inductive case): If there exists $t_k$ such that Condition $B_k$ holds, then there exists $t_{k+1} \leq t_k + M_1 + M_2$ for which Conditions $A_k$ and $B_{k+1}$ hold.

Proof: Let us introduce the following two sets of unordered pairs of agents, for every $t \in [t_k, t_k + M_1 + M_2]$:
- $R_t \subseteq \{\{i, j\} \mid i, j \in N, i \neq j\}$: the set of pairs $\{i, j\}$ which are active over the time interval $[t_k, t]$;
- $V_t \subseteq \{\{i, j\} \mid i, j \in N, i \neq j\}$: the set of pairs $\{i, j\}$ for which $a_{ij}(t') = a_{ji}(t') = 0$ for all $t' \in [t, t_k + M_1 + M_2]$, i.e. the set of pairs for which there is no interaction between $t$ and $t_k + M_1 + M_2$.

Note that for all $t_k \leq t \leq s \leq t_k + M_1 + M_2$, there holds $R_t \subseteq R_s$ and $V_t \subseteq V_s$, so that these sets are non-decreasing with time. The non-decrease of $V_s$ is trivial, while that of $R_s$ follows directly from Observation 1(a). For given $k$ and $t_k$, we now build $t_{k+1}$ using Algorithm 1, which we prove to always terminate successfully.
We first prove that Claims 1 and 2 hold, and then show how this implies the statement of Proposition 11.

Algorithm 1: Selection of $t_{k+1}$
  Require: $t_k$ satisfies $B_k$
  Set $\bar{t} = t_k + M_1$.
  Switch over cases 0 to 3:
    Case 0: $\bar{t} > t_k + M_1 + M_2 - T$: STOP, FAILURE.
    Case 1: Conditions $A_k$ and $B_{k+1}$ are satisfied taking $t_{k+1} = \bar{t}$: STOP, SUCCESS.
    Case 2: Condition $A_k$ does not hold taking $t_{k+1} = \bar{t}$.
      Claim 1: There exists $\{i, j\} \notin R_{\bar{t}}$ belonging to $R_{\bar{t}+T}$.
      Then set $\bar{t} = \bar{t} + T$ and iterate.
    Case 3: Condition $B_{k+1}$ does not hold taking $t_{k+1} = \bar{t}$.
      Claim 2: There exists $\{i, j\} \notin V_{\bar{t}}$ belonging to $V_{\bar{t}+T}$.
      Then set $\bar{t} = \bar{t} + T$ and iterate.

Proof of Claim 1: In Case 2 of Algorithm 1, Condition $A_k$ does not hold. Thus, there exists $\{i, j\}$ such that $A1_k$ does not hold, i.e. $a_{ij}(t) > 0$ for some $t \in [t_k, \bar{t}]$, and $A2_k$ does not hold, i.e. $\{i, j\}$ is not active over $[t_k, \bar{t}]$. The fact that $A2_k$ does not hold implies, by definition of $R_{\bar{t}}$, that $\{i, j\} \notin R_{\bar{t}}$. Let us now show that $t \in [\bar{t} - T, \bar{t}]$. The fact that $A2_k$ does not hold, together with Observation 1(c), implies that $a_{ij}(t') = 0$ for all $t' \in [t_k + T, \bar{t} - T]$. So either $t \in [t_k, t_k + T]$ or $t \in [\bar{t} - T, \bar{t}]$. We show that the first case is impossible: since $\{i, j\}$ is not active over $[t_k, \bar{t}]$ and $\bar{t} \geq t_k + M_1$, the negation of Observation 1(a) implies that $\{i, j\}$ is not active over $[t_k, t_k + M_1]$, and thus that $B2_k$ does not hold. However, we know by hypothesis that $B_k$ holds. Thus $B1_k$ holds, so $t \notin [t_k, t_k + T]$, and as a consequence $t \in [\bar{t} - T, \bar{t}]$. It follows then from Observation 1(b) that $\{i, j\}$ is active over $[t - T, t + T]$ and from Observation 1(a) that it is active over $[\bar{t} - 2T, \bar{t} + T]$. Since $\bar{t} \geq t_k + M_1 > t_k + 2T$, the pair $\{i, j\}$ is active over $[t_k, \bar{t} + T]$, so that $\{i, j\} \in R_{\bar{t}+T}$, which proves Claim 1.

Proof of Claim 2: Since Condition $B_{k+1}$ does not hold, there is a pair $\{i, j\}$ that satisfies neither $B1_{k+1}$ nor $B2_{k+1}$, that is, one for which $a_{ij}(t) > 0$ for some $t \in [\bar{t}, \bar{t} + T]$, and for which $\{i, j\}$ is not active over $[\bar{t}, \bar{t} + M_1]$. Since $\bar{t} \leq t_k + M_1 + M_2 - T$ (for otherwise we would have been in Case 0), the $t \in [\bar{t}, \bar{t} + T]$ for which $a_{ij}(t) > 0$ lies in $[\bar{t}, t_k + M_1 + M_2]$, which implies that $\{i, j\} \notin V_{\bar{t}}$ by definition of $V_{\bar{t}}$. We now show that it belongs to $V_{\bar{t}+T}$. By Observation 1(c), since $\{i, j\}$ is not active over $[\bar{t}, \bar{t} + M_1]$, there holds $a_{ij}(t') = a_{ji}(t') = 0$ for all $t' \in [\bar{t} + T, \bar{t} + M_1 - T]$. Also, by definition, $\bar{t} \geq t_k + M_1$ and $M_1 \geq M_2 + T$, so that $\bar{t} + M_1 - T \geq t_k + M_1 + M_1 - T \geq t_k + M_1 + M_2$. Thus, $a_{ij}(t') = a_{ji}(t') = 0$ for all $t' \in [\bar{t} + T, t_k + M_1 + M_2]$ and $\{i, j\} \in V_{\bar{t}+T}$.

To complete the proof of Proposition 11, we show that Algorithm 1 stops and that, when it does, the choice $t_{k+1} = \bar{t}$ satisfies Conditions $A_k$ and $B_{k+1}$. Indeed, at every iteration, either the algorithm stops, or Case 2 or 3 applies and $\bar{t}$ increases by $T$. In Case 2, it follows from Claim 1 that the size of $R_{\bar{t}}$ increases by at least 1, and in Case 3, it follows from Claim 2 that the size of $V_{\bar{t}}$ increases by at least 1. Since both $R_{\bar{t}}$ and $V_{\bar{t}}$ are sets of unordered pairs of distinct nodes, their size cannot exceed $n(n-1)/2$. Therefore, Cases 2 and 3 do not apply more than $n(n-1)/2$ times each. In particular, Case 0 or 1 must apply for some $\bar{t} \leq t_k + M_1 + n(n-1)T$ (remembering that $\bar{t}$ is initially $t_k + M_1$), at which stage the algorithm stops. Now, since according to equation (24) $M_2 > n(n-1)T + T$, Case 0 or 1 applies for some $\bar{t} < t_k + M_1 + M_2 - T$, so that Case 1 must apply first, and the algorithm thus produces a $t_{k+1}$ satisfying $t_{k+1} - t_k \leq M_1 + M_2$ for which $A_k$ and $B_{k+1}$ are satisfied.

The proof of Proposition 9 is then a direct consequence of Lemma 10 and Proposition 11.

Proof of Theorem 2: We show that the sequence $t_k$ built in Proposition 9 is valid for Assumptions 1 and 2. Observe first that, since the $a_{ij}(t)$ are assumed to be uniformly bounded above and since $t_{k+1} - t_k \leq M$, there clearly holds $\int_{t_k}^{t_{k+1}} a_{ij}(t)\,dt \leq \bar{M}$ for some constant $\bar{M}$ and all $i$, $j$ and $t_k$, so that Assumption 2 holds. Moreover, it follows from Proposition 9 that either $a_{ij}(t) = a_{ji}(t) = 0$ for all $t \in [t_k, t_{k+1}]$, or $\int_{t_k}^{t_{k+1}} a_{ij}(t)\,dt \geq \varepsilon$ and $\int_{t_k}^{t_{k+1}} a_{ji}(t)\,dt \geq \varepsilon$.
Since the latter integrals are also bounded by $\bar{M}$, there holds
$$\int_{t_k}^{t_{k+1}} a_{ij}(t)\,dt \;\leq\; \frac{\bar{M}}{\varepsilon} \int_{t_k}^{t_{k+1}} a_{ji}(t)\,dt,$$
which implies that Assumption 1 also holds.

VI. CONCLUSION

In this paper, we have developed a new reciprocity-based convergence result for continuous-time consensus systems. This result is based on a new assumption which allows for non-instantaneous reciprocity: unlike previous studies, we only assume that reciprocity takes place on average over contiguous time intervals. This assumption is appropriate for various classes of systems (including classes of broadcasting, gossiping, and self-triggered systems where communication is not necessarily synchronous). We have shown that integral reciprocity (Assumption 1) alone is not a sufficient condition for convergence. In particular, it does not forbid certain oscillatory behaviors. We have therefore proposed a companion assumption (Assumption 2) stating that the quantity of interaction taking place in the intervals over which reciprocity occurs should be uniformly bounded. Under these two assumptions, we have proven that the trajectory of the system always converges, though not necessarily to consensus. Moreover, consensus takes place among the agents in each cluster of the graph of persistent interactions. We have also particularized our result to a class of systems satisfying a local pairwise form of reciprocity.

Apart from the integral reciprocity and the uniform bound, our result does not make any assumption on the interactions between agents, and allows in particular for arbitrarily long periods during which the system is idle. As a consequence, it is in general impossible to give absolute bounds on the speed of convergence under the assumptions that we have made. However, future work could relate the speed of convergence to the amount of interaction having taken place in the system, as in [19].

ACKNOWLEDGEMENT

The authors wish to thank Behrouz Touri for his help about Theorem 5.

REFERENCES

[1] A. Jadbabaie, J. Lin, and A. S. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988-1001, 2003.


More information

Distributed Optimization over Networks Gossip-Based Algorithms

Distributed Optimization over Networks Gossip-Based Algorithms Distributed Optimization over Networks Gossip-Based Algorithms Angelia Nedić angelia@illinois.edu ISE Department and Coordinated Science Laboratory University of Illinois at Urbana-Champaign Outline Random

More information

Australian National University WORKSHOP ON SYSTEMS AND CONTROL

Australian National University WORKSHOP ON SYSTEMS AND CONTROL Australian National University WORKSHOP ON SYSTEMS AND CONTROL Canberra, AU December 7, 2017 Australian National University WORKSHOP ON SYSTEMS AND CONTROL A Distributed Algorithm for Finding a Common

More information

The Multi-Agent Rendezvous Problem - Part 1 The Synchronous Case

The Multi-Agent Rendezvous Problem - Part 1 The Synchronous Case The Multi-Agent Rendezvous Problem - Part 1 The Synchronous Case J. Lin 800 Phillips Road MS:0128-30E Webster, NY 14580-90701 jie.lin@xeroxlabs.com 585-422-4305 A. S. Morse PO Box 208267 Yale University

More information

Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events

Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events M. Cao Yale Univesity A. S. Morse Yale University B. D. O. Anderson Australia

More information

(x k ) sequence in F, lim x k = x x F. If F : R n R is a function, level sets and sublevel sets of F are any sets of the form (respectively);

(x k ) sequence in F, lim x k = x x F. If F : R n R is a function, level sets and sublevel sets of F are any sets of the form (respectively); STABILITY OF EQUILIBRIA AND LIAPUNOV FUNCTIONS. By topological properties in general we mean qualitative geometric properties (of subsets of R n or of functions in R n ), that is, those that don t depend

More information

MULTI-AGENT TRACKING OF A HIGH-DIMENSIONAL ACTIVE LEADER WITH SWITCHING TOPOLOGY

MULTI-AGENT TRACKING OF A HIGH-DIMENSIONAL ACTIVE LEADER WITH SWITCHING TOPOLOGY Jrl Syst Sci & Complexity (2009) 22: 722 731 MULTI-AGENT TRACKING OF A HIGH-DIMENSIONAL ACTIVE LEADER WITH SWITCHING TOPOLOGY Yiguang HONG Xiaoli WANG Received: 11 May 2009 / Revised: 16 June 2009 c 2009

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Robust Connectivity Analysis for Multi-Agent Systems

Robust Connectivity Analysis for Multi-Agent Systems Robust Connectivity Analysis for Multi-Agent Systems Dimitris Boskos and Dimos V. Dimarogonas Abstract In this paper we provide a decentralized robust control approach, which guarantees that connectivity

More information

An event-triggered distributed primal-dual algorithm for Network Utility Maximization

An event-triggered distributed primal-dual algorithm for Network Utility Maximization An event-triggered distributed primal-dual algorithm for Network Utility Maximization Pu Wan and Michael D. Lemmon Abstract Many problems associated with networked systems can be formulated as network

More information

Zeno-free, distributed event-triggered communication and control for multi-agent average consensus

Zeno-free, distributed event-triggered communication and control for multi-agent average consensus Zeno-free, distributed event-triggered communication and control for multi-agent average consensus Cameron Nowzari Jorge Cortés Abstract This paper studies a distributed event-triggered communication and

More information

Gradient Clock Synchronization

Gradient Clock Synchronization Noname manuscript No. (will be inserted by the editor) Rui Fan Nancy Lynch Gradient Clock Synchronization the date of receipt and acceptance should be inserted later Abstract We introduce the distributed

More information

Complex Laplacians and Applications in Multi-Agent Systems

Complex Laplacians and Applications in Multi-Agent Systems 1 Complex Laplacians and Applications in Multi-Agent Systems Jiu-Gang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complex-valued Laplacians have been shown to be powerful

More information

I. The space C(K) Let K be a compact metric space, with metric d K. Let B(K) be the space of real valued bounded functions on K with the sup-norm

I. The space C(K) Let K be a compact metric space, with metric d K. Let B(K) be the space of real valued bounded functions on K with the sup-norm I. The space C(K) Let K be a compact metric space, with metric d K. Let B(K) be the space of real valued bounded functions on K with the sup-norm Proposition : B(K) is complete. f = sup f(x) x K Proof.

More information

Valency Arguments CHAPTER7

Valency Arguments CHAPTER7 CHAPTER7 Valency Arguments In a valency argument, configurations are classified as either univalent or multivalent. Starting from a univalent configuration, all terminating executions (from some class)

More information

Output Input Stability and Minimum-Phase Nonlinear Systems

Output Input Stability and Minimum-Phase Nonlinear Systems 422 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 47, NO. 3, MARCH 2002 Output Input Stability and Minimum-Phase Nonlinear Systems Daniel Liberzon, Member, IEEE, A. Stephen Morse, Fellow, IEEE, and Eduardo

More information

Feedback stabilisation with positive control of dissipative compartmental systems

Feedback stabilisation with positive control of dissipative compartmental systems Feedback stabilisation with positive control of dissipative compartmental systems G. Bastin and A. Provost Centre for Systems Engineering and Applied Mechanics (CESAME Université Catholique de Louvain

More information

arxiv: v3 [math.ds] 22 Feb 2012

arxiv: v3 [math.ds] 22 Feb 2012 Stability of interconnected impulsive systems with and without time-delays using Lyapunov methods arxiv:1011.2865v3 [math.ds] 22 Feb 2012 Sergey Dashkovskiy a, Michael Kosmykov b, Andrii Mironchenko b,

More information

1 The Observability Canonical Form

1 The Observability Canonical Form NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)

More information

Strongly chordal and chordal bipartite graphs are sandwich monotone

Strongly chordal and chordal bipartite graphs are sandwich monotone Strongly chordal and chordal bipartite graphs are sandwich monotone Pinar Heggernes Federico Mancini Charis Papadopoulos R. Sritharan Abstract A graph class is sandwich monotone if, for every pair of its

More information

Discrete-time Consensus Filters on Directed Switching Graphs

Discrete-time Consensus Filters on Directed Switching Graphs 214 11th IEEE International Conference on Control & Automation (ICCA) June 18-2, 214. Taichung, Taiwan Discrete-time Consensus Filters on Directed Switching Graphs Shuai Li and Yi Guo Abstract We consider

More information

Resilient Asymptotic Consensus in Robust Networks

Resilient Asymptotic Consensus in Robust Networks Resilient Asymptotic Consensus in Robust Networks Heath J. LeBlanc, Member, IEEE, Haotian Zhang, Student Member, IEEE, Xenofon Koutsoukos, Senior Member, IEEE, Shreyas Sundaram, Member, IEEE Abstract This

More information

Automorphism groups of wreath product digraphs

Automorphism groups of wreath product digraphs Automorphism groups of wreath product digraphs Edward Dobson Department of Mathematics and Statistics Mississippi State University PO Drawer MA Mississippi State, MS 39762 USA dobson@math.msstate.edu Joy

More information

E-Companion to The Evolution of Beliefs over Signed Social Networks

E-Companion to The Evolution of Beliefs over Signed Social Networks OPERATIONS RESEARCH INFORMS E-Companion to The Evolution of Beliefs over Signed Social Networks Guodong Shi Research School of Engineering, CECS, The Australian National University, Canberra ACT 000, Australia

More information

Modeling and Stability Analysis of a Communication Network System

Modeling and Stability Analysis of a Communication Network System Modeling and Stability Analysis of a Communication Network System Zvi Retchkiman Königsberg Instituto Politecnico Nacional e-mail: mzvi@cic.ipn.mx Abstract In this work, the modeling and stability problem

More information

Fault-Tolerant Consensus

Fault-Tolerant Consensus Fault-Tolerant Consensus CS556 - Panagiota Fatourou 1 Assumptions Consensus Denote by f the maximum number of processes that may fail. We call the system f-resilient Description of the Problem Each process

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

Distributed Optimization over Random Networks

Distributed Optimization over Random Networks Distributed Optimization over Random Networks Ilan Lobel and Asu Ozdaglar Allerton Conference September 2008 Operations Research Center and Electrical Engineering & Computer Science Massachusetts Institute

More information

Tracking control for multi-agent consensus with an active leader and variable topology

Tracking control for multi-agent consensus with an active leader and variable topology Automatica 42 (2006) 1177 1182 wwwelseviercom/locate/automatica Brief paper Tracking control for multi-agent consensus with an active leader and variable topology Yiguang Hong a,, Jiangping Hu a, Linxin

More information

Distributed Receding Horizon Control of Cost Coupled Systems

Distributed Receding Horizon Control of Cost Coupled Systems Distributed Receding Horizon Control of Cost Coupled Systems William B. Dunbar Abstract This paper considers the problem of distributed control of dynamically decoupled systems that are subject to decoupled

More information

32 Divisibility Theory in Integral Domains

32 Divisibility Theory in Integral Domains 3 Divisibility Theory in Integral Domains As we have already mentioned, the ring of integers is the prototype of integral domains. There is a divisibility relation on * : an integer b is said to be divisible

More information

2. Prime and Maximal Ideals

2. Prime and Maximal Ideals 18 Andreas Gathmann 2. Prime and Maximal Ideals There are two special kinds of ideals that are of particular importance, both algebraically and geometrically: the so-called prime and maximal ideals. Let

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Isomorphisms between pattern classes

Isomorphisms between pattern classes Journal of Combinatorics olume 0, Number 0, 1 8, 0000 Isomorphisms between pattern classes M. H. Albert, M. D. Atkinson and Anders Claesson Isomorphisms φ : A B between pattern classes are considered.

More information

Failure detectors Introduction CHAPTER

Failure detectors Introduction CHAPTER CHAPTER 15 Failure detectors 15.1 Introduction This chapter deals with the design of fault-tolerant distributed systems. It is widely known that the design and verification of fault-tolerent distributed

More information

Consensus Algorithms are Input-to-State Stable

Consensus Algorithms are Input-to-State Stable 05 American Control Conference June 8-10, 05. Portland, OR, USA WeC16.3 Consensus Algorithms are Input-to-State Stable Derek B. Kingston Wei Ren Randal W. Beard Department of Electrical and Computer Engineering

More information

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs Saeed Ahmadizadeh a, Iman Shames a, Samuel Martin b, Dragan Nešić a a Department of Electrical and Electronic Engineering, Melbourne

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria 12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos

More information

Perfect matchings in highly cyclically connected regular graphs

Perfect matchings in highly cyclically connected regular graphs Perfect matchings in highly cyclically connected regular graphs arxiv:1709.08891v1 [math.co] 6 Sep 017 Robert Lukot ka Comenius University, Bratislava lukotka@dcs.fmph.uniba.sk Edita Rollová University

More information

On the Average Complexity of Brzozowski s Algorithm for Deterministic Automata with a Small Number of Final States

On the Average Complexity of Brzozowski s Algorithm for Deterministic Automata with a Small Number of Final States On the Average Complexity of Brzozowski s Algorithm for Deterministic Automata with a Small Number of Final States Sven De Felice 1 and Cyril Nicaud 2 1 LIAFA, Université Paris Diderot - Paris 7 & CNRS

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Formation Control and Network Localization via Distributed Global Orientation Estimation in 3-D

Formation Control and Network Localization via Distributed Global Orientation Estimation in 3-D Formation Control and Network Localization via Distributed Global Orientation Estimation in 3-D Byung-Hun Lee and Hyo-Sung Ahn arxiv:1783591v1 [cssy] 1 Aug 17 Abstract In this paper, we propose a novel

More information

Recognizing single-peaked preferences on aggregated choice data

Recognizing single-peaked preferences on aggregated choice data Recognizing single-peaked preferences on aggregated choice data Smeulders B. KBI_1427 Recognizing Single-Peaked Preferences on Aggregated Choice Data Smeulders, B. Abstract Single-Peaked preferences play

More information

Fast Linear Iterations for Distributed Averaging 1

Fast Linear Iterations for Distributed Averaging 1 Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider

More information

On Quantized Consensus by Means of Gossip Algorithm Part II: Convergence Time

On Quantized Consensus by Means of Gossip Algorithm Part II: Convergence Time Submitted, 009 American Control Conference (ACC) http://www.cds.caltech.edu/~murray/papers/008q_lm09b-acc.html On Quantized Consensus by Means of Gossip Algorithm Part II: Convergence Time Javad Lavaei

More information

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication)

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Nikolaus Robalino and Arthur Robson Appendix B: Proof of Theorem 2 This appendix contains the proof of Theorem

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

A Robust Event-Triggered Consensus Strategy for Linear Multi-Agent Systems with Uncertain Network Topology

A Robust Event-Triggered Consensus Strategy for Linear Multi-Agent Systems with Uncertain Network Topology A Robust Event-Triggered Consensus Strategy for Linear Multi-Agent Systems with Uncertain Network Topology Amir Amini, Amir Asif, Arash Mohammadi Electrical and Computer Engineering,, Montreal, Canada.

More information

Formation Stabilization of Multiple Agents Using Decentralized Navigation Functions

Formation Stabilization of Multiple Agents Using Decentralized Navigation Functions Formation Stabilization of Multiple Agents Using Decentralized Navigation Functions Herbert G. Tanner and Amit Kumar Mechanical Engineering Department University of New Mexico Albuquerque, NM 873- Abstract

More information

Generalized Pigeonhole Properties of Graphs and Oriented Graphs

Generalized Pigeonhole Properties of Graphs and Oriented Graphs Europ. J. Combinatorics (2002) 23, 257 274 doi:10.1006/eujc.2002.0574 Available online at http://www.idealibrary.com on Generalized Pigeonhole Properties of Graphs and Oriented Graphs ANTHONY BONATO, PETER

More information

Rao s degree sequence conjecture

Rao s degree sequence conjecture Rao s degree sequence conjecture Maria Chudnovsky 1 Columbia University, New York, NY 10027 Paul Seymour 2 Princeton University, Princeton, NJ 08544 July 31, 2009; revised December 10, 2013 1 Supported

More information

arxiv: v1 [cs.dc] 3 Oct 2011

arxiv: v1 [cs.dc] 3 Oct 2011 A Taxonomy of aemons in Self-Stabilization Swan ubois Sébastien Tixeuil arxiv:1110.0334v1 cs.c] 3 Oct 2011 Abstract We survey existing scheduling hypotheses made in the literature in self-stabilization,

More information

Existence and Uniqueness

Existence and Uniqueness Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect

More information

Santa Claus Schedules Jobs on Unrelated Machines

Santa Claus Schedules Jobs on Unrelated Machines Santa Claus Schedules Jobs on Unrelated Machines Ola Svensson (osven@kth.se) Royal Institute of Technology - KTH Stockholm, Sweden March 22, 2011 arxiv:1011.1168v2 [cs.ds] 21 Mar 2011 Abstract One of the

More information

Disturbance Attenuation Properties for Discrete-Time Uncertain Switched Linear Systems

Disturbance Attenuation Properties for Discrete-Time Uncertain Switched Linear Systems Disturbance Attenuation Properties for Discrete-Time Uncertain Switched Linear Systems Hai Lin Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556, USA Panos J. Antsaklis

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

arxiv: v1 [math.fa] 14 Jul 2018

arxiv: v1 [math.fa] 14 Jul 2018 Construction of Regular Non-Atomic arxiv:180705437v1 [mathfa] 14 Jul 2018 Strictly-Positive Measures in Second-Countable Locally Compact Non-Atomic Hausdorff Spaces Abstract Jason Bentley Department of

More information

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of

More information

Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules

Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules University of Pennsylvania ScholarlyCommons Departmental Papers (ESE) Department of Electrical & Systems Engineering June 2003 Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor

More information

Network Flows that Solve Linear Equations

Network Flows that Solve Linear Equations Network Flows that Solve Linear Equations Guodong Shi, Brian D. O. Anderson and Uwe Helmke Abstract We study distributed network flows as solvers in continuous time for the linear algebraic equation arxiv:1510.05176v3

More information

Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays.

Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays. Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays. V. Guffens 1 and G. Bastin 2 Intelligent Systems and Networks Research

More information

arxiv: v1 [cs.gt] 10 Apr 2018

arxiv: v1 [cs.gt] 10 Apr 2018 Individual and Group Stability in Neutral Restrictions of Hedonic Games Warut Suksompong Department of Computer Science, Stanford University 353 Serra Mall, Stanford, CA 94305, USA warut@cs.stanford.edu

More information

STABILITY OF PLANAR NONLINEAR SWITCHED SYSTEMS

STABILITY OF PLANAR NONLINEAR SWITCHED SYSTEMS LABORATOIRE INORMATIQUE, SINAUX ET SYSTÈMES DE SOPHIA ANTIPOLIS UMR 6070 STABILITY O PLANAR NONLINEAR SWITCHED SYSTEMS Ugo Boscain, régoire Charlot Projet TOpModel Rapport de recherche ISRN I3S/RR 2004-07

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 3. Fundamental properties IST-DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Example Consider the system ẋ = f

More information

Termination Problem of the APO Algorithm

Termination Problem of the APO Algorithm Termination Problem of the APO Algorithm Tal Grinshpoun, Moshe Zazon, Maxim Binshtok, and Amnon Meisels Department of Computer Science Ben-Gurion University of the Negev Beer-Sheva, Israel Abstract. Asynchronous

More information

IN THIS paper we investigate the diagnosability of stochastic

IN THIS paper we investigate the diagnosability of stochastic 476 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 50, NO 4, APRIL 2005 Diagnosability of Stochastic Discrete-Event Systems David Thorsley and Demosthenis Teneketzis, Fellow, IEEE Abstract We investigate

More information

Finite-Time Consensus based Clock Synchronization by Discontinuous Control

Finite-Time Consensus based Clock Synchronization by Discontinuous Control Finite- Consensus based Clock Synchronization by Discontinuous Control Mauro Franceschelli, Alessandro Pisano, Alessandro Giua, Elio Usai Dept. of Electrical and Electronic Engineering, Univ. of Cagliari,

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

DR.RUPNATHJI( DR.RUPAK NATH )

DR.RUPNATHJI( DR.RUPAK NATH ) Contents 1 Sets 1 2 The Real Numbers 9 3 Sequences 29 4 Series 59 5 Functions 81 6 Power Series 105 7 The elementary functions 111 Chapter 1 Sets It is very convenient to introduce some notation and terminology

More information

Passivity-based Stabilization of Non-Compact Sets

Passivity-based Stabilization of Non-Compact Sets Passivity-based Stabilization of Non-Compact Sets Mohamed I. El-Hawwary and Manfredi Maggiore Abstract We investigate the stabilization of closed sets for passive nonlinear systems which are contained

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems

On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems 1 On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems O. Mason and R. Shorten Abstract We consider the problem of common linear copositive function existence for

More information

Randomized Gossip Algorithms

Randomized Gossip Algorithms 2508 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 52, NO 6, JUNE 2006 Randomized Gossip Algorithms Stephen Boyd, Fellow, IEEE, Arpita Ghosh, Student Member, IEEE, Balaji Prabhakar, Member, IEEE, and Devavrat

More information

On the simultaneous diagonal stability of a pair of positive linear systems

On the simultaneous diagonal stability of a pair of positive linear systems On the simultaneous diagonal stability of a pair of positive linear systems Oliver Mason Hamilton Institute NUI Maynooth Ireland Robert Shorten Hamilton Institute NUI Maynooth Ireland Abstract In this

More information

The simplex algorithm

The simplex algorithm The simplex algorithm The simplex algorithm is the classical method for solving linear programs. Its running time is not polynomial in the worst case. It does yield insight into linear programs, however,

More information

Causality and Time. The Happens-Before Relation

Causality and Time. The Happens-Before Relation Causality and Time The Happens-Before Relation Because executions are sequences of events, they induce a total order on all the events It is possible that two events by different processors do not influence

More information

Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum

Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum Themistoklis Charalambous, Member, IEEE, Michael G. Rabbat, Member, IEEE, Mikael Johansson, Member,

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information