Distributed Averaging with Flow Constraints


Miroslav Barić and Francesco Borrelli
University of California, Berkeley, Department of Mechanical Engineering
Technical report, October 1, 2010

Abstract. A network of storage elements is considered. Each storage element is an integrator which can exchange the stored resource with neighboring elements. The flow between the elements, as well as the amount of the resource that each element can store, is subject to constraints. The problem of averaging the state of each element is addressed. The particularity of this problem compared to standard consensus-based averaging is that the elements exchange not only information but also the stored resource; the control decisions of the elements are therefore coupled through the flow constraints. A distributed algorithm for flow control is proposed. The algorithm is non-iterative and does not require a centralized design procedure. The proposed scheme guarantees asymptotic convergence of the states of all nodes to the same value, equal to the average of the initial values.

Keywords: balancing, consensus, flow constraints

1 Introduction

Networks of integrators are commonly used as a model of large-scale systems comprising smaller subsystems whose primary purpose is to store some resource (water, goods, energy, data, etc.). These elements are often additionally required to exchange the resource with other storage elements in the network or to satisfy external demand. In this note we address a problem frequently associated with networked storage devices: the balancing problem. In many applications it is preferable to store the resource in a manner which utilizes the available storage evenly across the whole network. One such example is a network of battery cells used in modern electric vehicles. In order to avoid potential damage caused by over- or under-charging of individual cells, it is desirable to keep the amount of charge (normalized to the cell's capacity) stored in a particular cell as close as possible to the normalized amount of charge stored in any other cell. A similar problem occurs in data storage networks, in which balancing of the stored data helps avoid throughput bottlenecks caused by an uneven distribution of data over the nodes of the network.

If the number of storage elements and their connections is large, centralized balancing may become intractable. In such cases one may consider decentralized or distributed balancing algorithms. In its most distilled form, the balancing problem reduces to the problem of reaching an agreement on the amount of resource stored in each storage element, i.e. the consensus problem. In general, consensus protocols are distributed control laws used by agents (dynamical systems) to reach an agreement on a common value of some variable. The attribute "distributed" in this context means that the applied control policy uses only local information, i.e. each agent exchanges information only with its neighbors, where a neighborhood is typically formalized through the notion of a communication graph. Consensus protocols have been considered as a tool for parallel (distributed) computation, including asynchronous averaging protocols [1, 2] and protocols that lead to an agreement on general functions [3, 4]. More recently, consensus algorithms have been applied in areas such as mobile robotics and distributed sensing [5, 6]. For a more thorough overview of various applications the reader is referred to [7].

Convergence properties of consensus algorithms have been studied under different assumptions, utilizing existing theory and developing new tools. For instance, in [8] the authors apply methods of algebraic graph theory to give conditions for convergence of consensus protocols over directed and undirected communication graphs, possibly time-varying and in the presence of communication time delays. A general framework for the stability analysis of

various discrete-time consensus protocols with time-varying graph topologies is proposed in [9], introducing the notion of set-valued Lyapunov functions, and in [10], using the concept of averaging maps.

When considering the application of consensus algorithms to resource balancing in storage networks, one must confront the fact that in every real instance of the problem the capacity of the links used for resource transfers between the storage devices is bounded, and that each device can store only a limited amount of the resource. Nevertheless, the problem of consensus under constraints on the dynamic behavior of the agents, which is of particular importance for our purposes, has attracted relatively little attention. In [11] the authors introduce consensus on several variables that are separated by hard constraints. Consensus in multi-agent systems with various constraints (on the agents' dynamics, connectivity and mission objectives) is discussed in [12]. In [13] the authors propose algorithms used by multiple agents for distributed computation of an estimate of some variable, with the restriction that the estimate of each agent lies in a corresponding constraint set. In particular, the proposed algorithm performs standard consensus-type averaging in which an agent updates its estimate using a convex combination of its neighbors' estimates and projects it onto its constraint set. The approach was subsequently applied to the consensus problem for constrained linear systems in [14]. In [15] the authors deploy a distributed receding horizon control approach for consensus in a time-varying network of agents with single and double integrator dynamics in the presence of control constraints.

In the work cited above dealing with constrained consensus, the constraints are imposed on the control decision of each agent or on the consensus value. In all mentioned references the control decisions remain decoupled, i.e. each agent chooses a control within its own constraint set and does not consider the current control decisions of other agents. In our problem, however, the agents (storage elements) not only exchange information locally, but are also expected to exchange stored resources over links with limited capacity at each discrete time instance. Therefore, at each time instance the agents must reach an agreement on the value of the constrained flow through the links. Furthermore, the decision on the exact value of the flow must be reached in finitely many iterations, and the generated sequence of flows should ensure convergence of the amount of resource stored by each agent to the same common value. In this paper we propose one such protocol, which allows agents to reach a quick agreement on local flow values. We show that the proposed algorithm, which is optimization-based and in general generates nonlinear control policies, guarantees convergence

to the consensus value, in which each agent's state equals the state of its neighbors in the graph. We compare the performance of our algorithm to the linear protocols considered in [2], which we modify to satisfy the flow constraints in our problem. Computationally, our algorithm requires the solution of a quadratic program in each node of the network, while linear control policies demand very little implementational resources. On the other hand, our algorithm is truly distributed in nature and does not require a centralized design procedure like the algorithm in [2]. Also, as indicated in Section 3.2, the proposed distributed averaging algorithm is applicable also in the case of time-varying network topology.

Notation and basic definitions. The set of real numbers is denoted by R, the set of non-negative real numbers by R_+ and the set of strictly positive real numbers by R_++. N is the set of natural numbers, i.e. N := {1, 2, ...}. N_[q1,q2] stands for {q1, ..., q2}, where q1, q2 ∈ N and q1 ≤ q2. The empty set is denoted by ∅. Given any set S, 2^S denotes the power set of S, i.e. the set of all subsets of S. The cardinality of a set S is denoted by |S|.

2 Problem Statement

Consider a graph G = (N, E), where N = N_[1,n] is the set of nodes and E ⊆ N × N is the set of unordered pairs of nodes, the edges. Each node stores a resource and exchanges it with other nodes through the edges. We start with the definition of the resource flow between the nodes and of the constraints on the flow defined by edge capacities.

Definition 2.1 A flow is a function f: N × N → R such that for all (i, j) ∈ N × N: (a) f(i, j) = -f(j, i), and (b) f(i, j) ≤ c(i, j), where c: N × N → R_++ is the capacity function.

We use the more convenient notation f_ij = f(i, j). Each node i ∈ N is a discrete-time dynamical system with the state update equation:

    x_i^+ = x_i + φ_i,    (1)

where x_i^+ denotes the state at the next discrete time instance given the current state x_i.
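As a minimal illustration of Definition 2.1 and the update (1), the following Python sketch applies a given flow to the node states; the function name and data layout are illustrative assumptions, not part of the original report, and the net inflow used here is the quantity formalized in (2) below.

# Illustrative sketch: apply one update x_i^+ = x_i + phi_i (eq. (1)) for a given flow.
# f is a dict with f[(i, j)] = -f[(j, i)] (Def. 2.1(a)) and f[(i, j)] <= c[(i, j)] (Def. 2.1(b)).
def apply_flow(x, edges, f, c, tol=1e-9):
    x_next = list(x)
    for (i, j) in edges:
        assert abs(f[(i, j)] + f[(j, i)]) <= tol      # antisymmetry of the flow
        assert f[(i, j)] <= c[(i, j)] + tol           # per-direction capacity bound
        assert f[(j, i)] <= c[(j, i)] + tol
        x_next[i] += f[(j, i)]                        # net inflow into node i
        x_next[j] += f[(i, j)]                        # net inflow into node j
    return x_next

# Example: two nodes with unit capacities; 0.5 units move from node 1 to node 0.
x0 = [2.0, 4.0]
edges = [(0, 1)]
c = {(0, 1): 1.0, (1, 0): 1.0}
f = {(0, 1): -0.5, (1, 0): 0.5}
print(apply_flow(x0, edges, f, c))                    # -> [2.5, 3.5]

Note that the total amount of resource is conserved by the antisymmetry of the flow, which is what later forces the consensus value to equal the average of the initial states.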

The state x_i ∈ R represents the current amount of the resource stored in node i, whose level is controlled by the total net inflow φ_i into node i, defined as:

    φ_i := Σ_{j ∈ A_i} f_ji.    (2)

The set of nodes adjacent to node i ∈ N is denoted by A_i = {j : (i, j) ∈ E}. For later convenience we introduce the sets A_i^+ and A_i^-, denoting, respectively, the adjacent nodes that supply node i and those that demand the stored resource from it. Formally, given a flow f(·,·):

    A_i^+ := {j ∈ A_i : f_ji > 0},    (3a)
    A_i^- := {j ∈ A_i : f_ji < 0}.    (3b)

We can separate the total net inflow φ_i into the amount of the resource that is supplied to node i, φ_i^+, and the amount that is taken from node i, φ_i^-: φ_i = φ_i^+ − φ_i^-, where:

    φ_i^+ := Σ_{j ∈ A_i^+} f_ji,    and    φ_i^- := Σ_{j ∈ A_i^-} f_ij.

The dynamics of the whole network can be compactly written as x^+ = x + φ, where x = [x_1 x_2 ... x_n]^T and φ = [φ_1 φ_2 ... φ_n]^T. In addition to the flow constraints, we also assume that the amount of resource that can be stored in each node is bounded, i.e. that x_i ∈ X. Any control problem for a given network with capacity c(·,·) can be formalized as the selection of the flow f(·,·) (in this case the selection of |E| values f_ij) such that the constraints x_i ∈ X, i ∈ N, hold for all time. We state our distributed consensus problem as follows.

Problem 1 (Constrained Distributed Consensus) At each time instance k select the flow f(·,·) such that: (a) for a given x_i(k) ∈ X the successor state x_i(k+1) ∈ X, and (b) for all i ∈ N the values x_i(k) converge to the same x* ∈ X as k → ∞. The following restrictions apply at each time instance:

(a) each node i ∈ N knows the value of x_i,
(b) each node i ∈ N knows the capacity values c(i, j) and c(j, i) for j ∈ A_i,
(c) each node i can communicate and exchange information only with adjacent nodes, i.e. with nodes j ∈ A_i,
(d) the decision on the value of f_ij is made in one of the nodes i or j.

In order for the above problem to have a solution, an additional condition on the connectedness of the graph will be imposed later. Clearly, in that case the common limit value x* corresponds to the average of the initial values. We now propose an algorithm which solves Problem 1 and discuss its properties.

3 Flow Constrained Consensus Protocol

Assume that each node is associated with an agent which computes the flow values over the incident edges at each discrete time instance k. To make the presentation clearer, we divide the algorithm executed by each agent into a computational part, where the flow values for each edge are computed, and an implementational part, in which the actual values of the flows are implemented.

3.1 The Algorithm

First we present the algorithm used by each agent for computing the flow values that are to be implemented in the second part of the process. Each agent computes these flow values using only the limited information from its neighborhood, as specified in Problem 1. The aim of each agent is to bring the state of the associated node close to the average of its own state and the states of its neighbors. In order to prevent agents from acting selfishly in pursuing their goal of bringing or keeping their node's state close to the average, restrictions are imposed on the minimum flow that agents must allow in order to facilitate the resource transfer between the nodes in their neighborhood. As will become apparent further in the text, such restrictions are essential for establishing convergence of the states x_i(k) to a common value. At time instance k ≥ 0, the agent at node i computes the candidate flows using Algorithm 1. Here is a summary of the algorithm clarifying the key points:

In steps 1-2 the agent i measures the state x_i(k), receives the measurements of the states x_j(k) from its neighbors, and determines the minimal

Algorithm 1 Computations done by an agent i at time k

Require: x_i(k) ∈ X and x_j(k) ∈ X, j ∈ A_i
1: x_i = x_i(k), x_j = x_j(k), j ∈ A_i
2: x_min^i = min_{j ∈ {i} ∪ A_i} x_j,  x_max^i = max_{j ∈ {i} ∪ A_i} x_j,  x̄_i := (x_i + Σ_{j ∈ A_i} x_j) / (1 + |A_i|)
3: A_i^+ = {j ∈ A_i : x_j > x_i},  A_i^- = {j ∈ A_i : x_j < x_i}
4: c_i^+ = 0 if A_i^+ = ∅, and c_i^+ = Σ_{j ∈ A_i^+} min{x_j − x_i, c(j, i)} otherwise;
   c_i^- = 0 if A_i^- = ∅, and c_i^- = Σ_{j ∈ A_i^-} min{x_i − x_j, c(i, j)} otherwise
5: define the constraints:
   φ_low^i = min{x_max^i − x_i, x_i − x_min^i, c_i^+, c_i^-},
   Δ_max^i = x_max^i − x_i,  Δ_min^i = x_i − x_min^i,
   Φ_i = {(φ_i^+, φ_i^-) : φ_low^i ≤ φ_i^+ ≤ min{c_i^+, Δ_max^i},  φ_low^i ≤ φ_i^- ≤ min{c_i^-, Δ_min^i}}
6: compute:
   (φ̂_i^+, φ̂_i^-) = argmin_{(φ_i^+, φ_i^-) ∈ Φ_i} (x_i + φ_i^+ − φ_i^- − x̄_i)^2 + ρ φ_i^+ φ_i^-,  for some ρ > 0
7: compute f_ji^i, j ∈ A_i^+, which minimize Σ_{j ∈ A_i^+} (f_ji^i)^2, subject to Σ_{j ∈ A_i^+} f_ji^i = φ̂_i^+ and f_ji^i ≤ min{x_j − x_i, c(j, i)}, j ∈ A_i^+
8: compute f_ij^i, j ∈ A_i^-, which minimize Σ_{j ∈ A_i^-} (f_ij^i)^2, subject to Σ_{j ∈ A_i^-} f_ij^i = φ̂_i^- and f_ij^i ≤ min{x_i − x_j, c(i, j)}, j ∈ A_i^-

and maximal values of the states in the nodes {i} ∪ A_i, and computes the average x̄_i of the states x_i and x_j, j ∈ A_i.

In step 3 the agent i decides to be supplied only by neighboring nodes which have more resource stored, and to supply only the neighboring nodes storing strictly less resource. Agent i expects zero flow from the neighbors containing an amount of resource equal to x_i. This is formally done, with some notational overloading, by assigning the nodes to the sets A_i^+ and A_i^- defined in (3).

In step 4 the agent decides on the upper bounds of the incoming and outgoing flow, represented by the variables c_i^+ and c_i^-, respectively. These bounds are determined by the corresponding edge capacities and by the differences between the state x_i and the states x_j, j ∈ A_i^+ ∪ A_i^-.

The crucial part of the algorithm is setting up the constraints on the incoming and outgoing transfers φ_i^+ and φ_i^- in step 5. The total net inflow to node i, given by φ_i^+ − φ_i^-, must be smaller than the amount of resource the node can receive, i.e. smaller than (x_max^i − x_i), and greater than minus the amount the node can give away, i.e. greater than −(x_i − x_min^i). The lower bound for φ_i^+ and φ_i^- is set to the same value φ_low^i, which is strictly positive for x_i ∈ (x_min^i, x_max^i) and is equal to 0 only for x_i ∈ {x_min^i, x_max^i}. This way a node with state between x_min^i and x_max^i acts as a "middle man", always transferring the resource from nodes in A_i^+ to nodes in A_i^-. As will become clear later, this fact is important for establishing convergence of the values x_i to the common consensus value.

In step 6 the values of the incoming and outgoing flows φ̂_i^+ and φ̂_i^-, optimal from node i's perspective, are computed. The chosen cost is the distance from the local average x̄_i. The second term in the cost, ρ φ_i^+ φ_i^-, is added to make the problem strictly convex, and any choice of ρ ∈ (0, 2) would do. This term can be used to tune the behavior of the algorithm, typically resulting in larger values of φ̂_i^- and φ̂_i^+ for smaller ρ.

In steps 7 and 8, given the values φ̂_i^- and φ̂_i^+, the values of the flow for the edges incident to node i are selected. The upper bounds for f_ij^i and f_ji^i are determined by the capacity values of the corresponding edges and by the differences x_i − x_j. The constraints on φ_i^- and φ_i^+ in steps 3-5 ensure that the optimization problems in steps 7 and 8 are feasible.
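The per-agent computation can be prototyped directly from the steps above. The following Python sketch uses CVXPY for the small quadratic programs in steps 6-8; all function and variable names are illustrative assumptions (in particular, a non-negativity constraint on the per-edge flows is added in the allocation step, which is only implicit in steps 7-8), and this is not the authors' implementation.

import numpy as np
import cvxpy as cp

def candidate_flows(i, x, A_i, c, rho=1.0):
    # Sketch of Algorithm 1 for agent i.
    # x: indexable states x[j]; A_i: list of neighbors of i; c: dict (a, b) -> capacity c(a, b).
    xi = x[i]
    clique = [i] + list(A_i)
    x_min = min(x[j] for j in clique)                          # step 2
    x_max = max(x[j] for j in clique)
    x_bar = sum(x[j] for j in clique) / len(clique)
    A_plus = [j for j in A_i if x[j] > xi]                     # step 3
    A_minus = [j for j in A_i if x[j] < xi]
    c_plus = sum(min(x[j] - xi, c[(j, i)]) for j in A_plus)    # step 4
    c_minus = sum(min(xi - x[j], c[(i, j)]) for j in A_minus)
    phi_low = min(x_max - xi, xi - x_min, c_plus, c_minus)     # step 5
    d_max, d_min = x_max - xi, xi - x_min

    # Step 6: strictly convex two-variable QP over Phi_i. The bilinear term
    # rho*phi_plus*phi_minus is folded into a quadratic form that is PSD for rho in (0, 4).
    z = cp.Variable(2)                                         # z = (phi_plus, phi_minus)
    Q = np.array([[1.0, (rho - 2.0) / 2.0],
                  [(rho - 2.0) / 2.0, 1.0]])
    lin = 2.0 * (xi - x_bar) * np.array([1.0, -1.0])
    cons = [z >= phi_low,
            z[0] <= min(c_plus, d_max),
            z[1] <= min(c_minus, d_min)]
    cp.Problem(cp.Minimize(cp.quad_form(z, Q) + lin @ z), cons).solve()
    phi_plus, phi_minus = float(z.value[0]), float(z.value[1])

    # Steps 7-8: minimum-norm allocation of phi_plus / phi_minus over the incident edges.
    def allocate(total, nodes, ub):
        if not nodes:
            return {}
        g = cp.Variable(len(nodes))
        cp.Problem(cp.Minimize(cp.sum_squares(g)),
                   [cp.sum(g) == total, g >= 0, g <= np.array(ub)]).solve()
        return {j: float(v) for j, v in zip(nodes, g.value)}

    f_in = allocate(phi_plus, A_plus, [min(x[j] - xi, c[(j, i)]) for j in A_plus])      # step 7
    f_out = allocate(phi_minus, A_minus, [min(xi - x[j], c[(i, j)]) for j in A_minus])  # step 8
    return f_in, f_out, A_plus, A_minus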

The values f_ij^i for j ∈ A_i^- and f_ji^i for j ∈ A_i^+ computed at node i in general differ from the values f_ij^j and f_ji^j computed at the adjacent node j ∈ A_i. Hence, these values must be negotiated before the implementation, and the simplest way to achieve that is to implement the flow for which f_ij = min{f_ij^i, f_ij^j}. We will show later that with such a choice the feasibility and the required convergence stated in Problem 1 are ensured for a connected graph G. The implementation part of the consensus protocol is therefore given by the following procedure.

Algorithm 2 Implementation of the flow by an agent i at time k

Require: the sets A_i^+ and A_i^- and the values f_ij^i, j ∈ A_i^-, and f_ji^i, j ∈ A_i^+, computed using Algorithm 1.
1: send: f_ij^i to nodes j ∈ A_i^-, f_ji^i to nodes j ∈ A_i^+, and f_ij^i = 0 to nodes j ∉ A_i^- ∪ A_i^+
2: receive: f_ij^j from nodes j ∈ A_i^-, f_ji^j from nodes j ∈ A_i^+
3: for all j ∈ A_i such that j < i do
4:   if j ∈ A_i^- then
5:     implement f_ij = min{f_ij^i, f_ij^j}
6:   else if j ∈ A_i^+ then
7:     implement f_ji = min{f_ji^i, f_ji^j}
8:   else
9:     implement f_ij = 0
10:  end if
11: end for

The procedure given by Algorithms 1-2 clearly terminates in finite time. The requirement is, however, that the decisions made by the agents are synchronized: the actual flow cannot be implemented before all agents complete their computations and exchange the results. It should be clear that the selected rule, prescribing that the larger-indexed of the nodes i and j implements f_ij, is arbitrary and does not affect the resulting dynamic behavior of the network. The next step is to demonstrate that the procedure in Algorithms 1-2 can actually be used to solve Problem 1.
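Before turning to the convergence analysis, the following sketch illustrates how the min rule of Algorithm 2 can be combined with the candidate flows of Algorithm 1 (using the hypothetical candidate_flows sketched above); again, the names and data layout are illustrative assumptions.

def implement_flows(x, adjacency, c):
    # One synchronized time step: every agent runs Algorithm 1, the min rule of
    # Algorithm 2 resolves disagreements, and the implemented flow updates the states.
    n = len(x)
    cand = {i: candidate_flows(i, x, adjacency[i], c) for i in range(n)}
    phi = [0.0] * n
    for i in range(n):
        f_in_i, f_out_i, A_plus_i, A_minus_i = cand[i]
        for j in adjacency[i]:
            if j >= i:                                  # the larger-indexed node decides (step 3)
                continue
            f_in_j, f_out_j, _, _ = cand[j]
            if j in A_minus_i:                          # i supplies j: f_ij = min{f_ij^i, f_ij^j}
                f_ij = min(f_out_i.get(j, 0.0), f_in_j.get(i, 0.0))
            elif j in A_plus_i:                         # j supplies i: f_ji = min{f_ji^i, f_ji^j}
                f_ij = -min(f_in_i.get(j, 0.0), f_out_j.get(i, 0.0))
            else:
                f_ij = 0.0                              # equal states: no transfer
            phi[i] -= f_ij                              # phi_i accumulates f_ji = -f_ij
            phi[j] += f_ij
    return [xi + p for xi, p in zip(x, phi)]            # state update x^+ = x + phi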

3.2 Convergence

We start with some auxiliary observations highlighting relevant features of Algorithms 1-2. In particular, we show that the flow value f_ij is non-zero if and only if the difference x_i − x_j is non-zero, and use this to prove a local contractive property of the generated flows, namely that the successor state x_i^+ is in the relative interior (if one exists) of the interval [x_min^i, x_max^i]. Finally, we show that the generated flow values change continuously with the state vector x. All these properties are then used to prove the main result, stated in Theorem 3.1.

Lemma 3.1 (a) Let Δ_ij := x_i − x_j, for i ∈ N and j ∈ A_i. The value f_ij^i computed by agent i using Algorithm 1 equals 0 if and only if Δ_ij = 0. In particular, f_ij^i > 0 if and only if Δ_ij > 0, and f_ji^i > 0 if and only if Δ_ij < 0.
(b) Let φ_i be the actual (implemented) net inflow for node i, i.e. given the actual flow f(·,·) computed in Algorithm 2, φ_i = Σ_{j ∈ A_i} f_ji. Then x_i + φ_i ∈ [x_min^i, x_max^i], where x_min^i and x_max^i are specified in Algorithm 1, step 2. Moreover, x_i + φ_i ∈ {x_min^i, x_max^i} if and only if x_i = x_min^i = x_max^i.
(c) Let φ_i: X^n → R be the mapping from x ∈ X^n to the total net inflow of node i computed by Algorithms 1-2. The mapping φ_i(·) is continuous.

The proof of Lemma 3.1 is given in Appendix A. We can now state the claim that the distributed algorithm solves Problem 1.

Theorem 3.1 Let the graph G = (N, E) be connected. Then the distributed algorithm given by Algorithms 1-2 solves Problem 1. In particular, given initial states x_i(0), the sequences {x_i(k)}_{k=0}^∞ for i ∈ N converge to the mean value (1/|N|) Σ_{i ∈ N} x_i(0).

We remark that the properties of our algorithm stated in Lemma 3.1 satisfy Assumption 1 in [9], which considers a much more general framework including time-varying network topologies. In that respect, we can simply refer to the result in [9] and claim convergence for the general case of time-varying graphs. For the particular case of time-invariant graphs the proof of Theorem 3.1 is provided in Appendix B.
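As a quick numerical illustration of Theorem 3.1, the hypothetical helpers sketched above can be iterated in closed loop and checked against the average of the initial states; the 4-node cycle below is an arbitrary example, not taken from the paper.

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                   # a connected 4-node cycle
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
c = {(a, b): 1.0 for (i, j) in edges for (a, b) in [(i, j), (j, i)]}
x = list(np.random.uniform(0.0, 10.0, size=4))

target = sum(x) / len(x)                                   # average of the initial states
for k in range(50):
    x = implement_flows(x, adjacency, c)                   # one synchronized step of Algorithms 1-2
print(max(abs(xi - target) for xi in x))                   # should approach 0 (Theorem 3.1)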

4 Numerical Example

In this section we compare the proposed distributed averaging algorithm to the fastest distributed linear averaging (FDLA) algorithm of [2]. In [2] the authors consider flows, for a given x, generated by linear feedback laws:

    f_ij(x) = k_ij (x_i − x_j),    (i, j) ∈ E.

The gains k_ij which ensure the optimal convergence rate (with respect to the considered asymptotic convergence factor, cf. [2]) can be obtained by solving the following optimization problem:

    min_{k_ij}  ρ(W − (1/n) 1 1^T),    (4a)
    subj. to    W = I − B diag(k_ij) B^T,    (4b)

where ρ(·) denotes the spectral radius of the argument matrix, B is the incidence matrix of the considered network for an arbitrary graph orientation, diag(k_ij) is the diagonal matrix with the gains k_ij on the diagonal, 1 is the vector whose entries are all equal to 1, and I is the identity matrix of appropriate dimension. The optimization problem (4) must be solved for the given network as a whole. As shown in [2], for symmetric weights, i.e. for k_ij = k_ji, problem (4) can be solved by convex optimization. In order to satisfy the capacity constraints imposed by Problem 1, we augment the constraints in (4) with the set of linear inequalities:

    k_ij (x_max − x_min) ≤ min{c(i, j), c(j, i)},    (i, j) ∈ E,    (5)

where x_min and x_max correspond to the minimal and maximal possible values of the integrator state.

We consider the network used as an example in [2], consisting of 8 nodes and 17 edges, whose graph is depicted in Figure 1 (Figure 1: Graph of the network used in the numerical example). The performance of our distributed algorithm is compared to the FDLA algorithm of [2] with and without the constraints (5) on the linear control laws. We assume that all capacities in the network are equal to 1, i.e. c(i, j) = c(j, i) = 1 for all (i, j) ∈ E.
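For symmetric weights the matrix W − (1/n) 1 1^T is symmetric, so its spectral radius equals its spectral norm, which is convex in the gains. A CVXPY sketch of computing the FDLA gains with the optional capacity constraints (5) is given below; the incidence-matrix construction and the default bounds are illustrative assumptions, and the report itself used YALMIP [16].

import numpy as np
import cvxpy as cp

def fdla_weights(n, edges, cap, x_min=0.0, x_max=10.0, constrained=True):
    # Sketch of (4)-(5): edge gains k_ij minimizing the spectral radius of W - (1/n)*1*1^T.
    m = len(edges)
    B = np.zeros((n, m))                              # incidence matrix, arbitrary orientation
    for e, (i, j) in enumerate(edges):
        B[i, e], B[j, e] = 1.0, -1.0
    k = cp.Variable(m)
    W = np.eye(n) - B @ cp.diag(k) @ B.T              # (4b), symmetric by construction
    J = np.ones((n, n)) / n
    cons = []
    if constrained:
        ub = np.array([min(cap[(i, j)], cap[(j, i)]) for (i, j) in edges])
        cons = [(x_max - x_min) * k <= ub]            # capacity constraints (5)
    prob = cp.Problem(cp.Minimize(cp.sigma_max(W - J)), cons)
    prob.solve()
    return k.value, prob.value                        # gains and achieved spectral radius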

(Figure 2: Time evolution of states x(k) and flows f_ij(k) for Algorithms 1-2; the generated flows respect the capacity bounds [−1, 1]. Figure 3: Time evolution of states and flows for the FDLA algorithm as originally proposed in [2], without capacity constraints; the capacity bounds [−1, 1] are violated.)

The initial amount of resource in each node (the state) is selected randomly in the interval [0, 10]. The time evolution of the states and of the flow values generated by the proposed Algorithms 1-2 is depicted in Figure 2. We computed the gains k_ij for the FDLA algorithm (with the help of YALMIP [16]) without imposing the constraints (5). The resulting spectral radius ρ(W − (1/n) 1 1^T) is 0.6, which is the value reported for the same example in [2]. The performance of the unconstrained linear averaging (FDLA) algorithm is shown in Figure 3. Finally, we computed the gains for the FDLA algorithm with the capacity constraints (5), resulting in a spectral radius ρ(W − (1/n) 1 1^T) equal to 0.84774. The performance of the constrained FDLA algorithm is shown in Figure 4.

(Figure 4: Time evolution of states and flows for the FDLA algorithm of [2] with the imposed capacity constraints.)

As can be seen clearly from Figures 2 and 3, our distributed algorithm, which in general induces nonlinear averaging control laws, provides practically the same performance as the fastest linear distributed averaging algorithm, while generating flows which satisfy the capacity constraints. On the other hand, the original FDLA algorithm as presented in [2] generates flows which do not conform to the existing capacity constraints c(i, j) ≤ 1, as can be seen in Figure 3(b). If we introduce the capacity constraints (5) in the computation of the FDLA gains, the performance of the linear algorithm expectedly deteriorates, as shown in Figure 4.

5 Conclusion

We propose an algorithm for distributed averaging (consensus) in networks of storage elements subject to constraints on the states and the control inputs. Unlike standard consensus problems in the literature, where each agent is actuated independently, in our problem we include constraints on the flows between the individual storage elements. In a distributed setup this type of constraint requires an agreement between the agents on the flows along the common edges, and this agreement must be reached in finitely many iterations in order to generate the appropriate flow between the storage elements. We compare our algorithm to the distributed averaging scheme reported in [2], which is based on linear policies. As shown in the simple example, our algorithm provides fast convergence while satisfying all constraints, which is not an inherent feature of the linear distributed algorithm. As a future research direction, one may consider the extension in which certain nodes can select the set of their neighbors with the aim of improving the convergence rate. Another research topic of immediate interest is the

influence of time delays, measurement and communication noise, and dynamic disturbances.

References

[1] J. Tsitsiklis, D. Bertsekas, and M. Athans, Distributed asynchronous deterministic and stochastic gradient optimization algorithms, IEEE Trans. Automat. Contr., vol. 31, no. 9, pp. 803-812, 1986.
[2] L. Xiao and S. Boyd, Fast linear iterations for distributed averaging, Syst. Control Lett., vol. 53, pp. 65-78, 2004.
[3] J. Cortés, Distributed algorithms for reaching consensus on general functions, Automatica, vol. 44, pp. 726-737, 2008.
[4] D. Bauso, L. Giarré, and R. Pesenti, Non-linear protocols for optimal distributed consensus in networks of dynamic agents, Syst. Control Lett., vol. 55, pp. 918-928, 2006.
[5] A. Jadbabaie, J. Lin, and A. Morse, Coordination of groups of mobile autonomous agents using nearest neighbor rules, IEEE Trans. Automat. Contr., vol. 48, no. 6, pp. 988-1001, 2003.
[6] R. Olfati-Saber and J. Shamma, Consensus filters for sensor networks and distributed sensor fusion, in Proc. 44th IEEE Conf. on Decision and Control, Sevilla, Spain, 2005, pp. 6698-6703.
[7] R. Olfati-Saber, J. Fax, and R. Murray, Consensus and cooperation in networked multi-agent systems, Proc. IEEE, vol. 95, no. 1, pp. 215-233, 2007.
[8] R. Olfati-Saber and R. Murray, Consensus problems in networks of agents with switching topology and time-delays, IEEE Trans. Automat. Contr., vol. 49, no. 9, pp. 1520-1533, 2004.
[9] L. Moreau, Stability of multiagent systems with time-dependent communication links, IEEE Trans. Automat. Contr., vol. 50, no. 2, pp. 169-182, 2005.
[10] J. Lorenz and D. Lorenz, On conditions for convergence to consensus, IEEE Trans. Automat. Contr., vol. 55, no. 7, pp. 1651-1656, 2010.
[11] K. Moore and D. Lucarelli, Forced and constrained consensus among cooperating agents, in Proc. IEEE Intl. Conf. on Networking, Sensing and Control, 2005, pp. 449-454.
[12] M. Franceschelli, M. Egerstedt, A. Giua, and C. Mahulea, Constrained invariant motions for networked multi-agent systems, in Proc. American Control Conf., St. Louis, MO, USA, Jun. 2009, pp. 5749-5754.
[13] A. Nedić and A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Trans. Automat. Contr., vol. 54, no. 1, pp. 48-61, 2009.
[14] B. Johansson, A. Speranzon, M. Johansson, and K. Johansson, On decentralized negotiation of optimal consensus, Automatica, vol. 44, pp. 1175-1179, 2008.
[15] G. Ferrari-Trecate, L. Galbusera, M. Marciandi, and R. Scattolini, Model predictive control schemes for consensus in multi-agent systems with single and double integrator dynamics, IEEE Trans. Automat. Contr., vol. 54, no. 11, pp. 2560-2572, 2009.
[16] J. Löfberg, YALMIP: A toolbox for modeling and optimization in MATLAB, in Proc. CACSD Conference, Taipei, Taiwan, 2004. Available from http://control.ee.ethz.ch/~joloef/yalmip.php.
[17] B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tammer, Non-Linear Parametric Optimization. Birkhäuser, 1983.

A Proof of Lemma 3.1

(a) The relation {Δ_ij = 0 ⇒ f_ij^i = 0} is clear from the selection of the sets A_i^+ and A_i^- in step 3 of Algorithm 1 and from step 9 of Algorithm 2. Suppose that Δ_ij = x_i − x_j > 0. If x_min^i < x_i < x_max^i, from step 5 of Algorithm 1 it follows that the lower bound for φ_i^+ and φ_i^- is φ_low^i > 0. The way the values of f_ij^i and f_ji^i are computed in steps 7-8, i.e. by minimization of the 2-norm of the vector comprising all f_ij^i and f_ji^i, implies that the corresponding f_ij^i > 0. In case x_i = x_max^i, we have φ_low^i = 0 and φ̂_i^+ = 0 (since c_i^+ = 0, as A_i^+ = ∅). Since x_i − x_j > 0, the average x̄_i < x_max^i = x_i. In step 6 of Algorithm 1 the cost then consists of only one term (since φ_i^+ = 0), i.e. the distance of the state x_i from the average

[11] K. Moore and D. Lucarelli, Forced and constrained consensus among cooperating agents, in Proc. of IEEE Intl. Conf. on Networking, Sensing and Control, 2005, pp. 449 454. [12] M. Franceschelli, M. Egerstedt, A. Giua, and C. Mahulea, Constrained invariant motions for networked multi-agent systems, in Proc. American Control Conf., St. Louis, MO, USA, jun 2009, pp. 5749 5754. [13] A. Nedić and A. Ozdaglar, Distributed Subgradient Methods for Multi-Agent Optimization, IEEE Trans. Automat. Contr., vol. 54, no. 1, pp. 48 61, 2009. [14] B. Johansson, A. Speranzon, M. Johansson, and K. Johansson, On decentralized negotiation of optimal consensus, Automatica, vol. 44, pp. 1175 1179, 2008. [15] G. Ferrari-Trecate, L. Galbusera, M. Marciandi, and R. Scattolini, Model Predictive Control Schemes for Consensus in Multi Agent Systems with Single and Double Integrator Dynamics, IEEE Trans. Automat. Contr., vol. 54, no. 11, pp. 2560 2572, 2009. [16] J. Löfberg, YALMIP : A toolbox for modeling and optimization in MATLAB, in Proceedings of the CACSD Conference, Taipei, Taiwan, 2004, available from http://control.ee.ethz.ch/~joloef/yalmip.php. [17] B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tammer, Non- Linear Parametric Optimization. Birkhäuser, 1983. A Proof of Lemma 3.1 } (a) The relation { ij = 0 fij i = 0 is clear from the selection of the sets A + i anda i instep3ofthealgorithm1andfromthestep9inalgorithm 2. Suppose that ij = x i x j > 0. If x i min < x i < x i max, from step 5 of Algorithm 1 it follows that the lower bound for φ + i and φ i is φ i low > 0. The way the values of fij i and fi ji are computed in steps 7 8, i.e. by minimization of the 2 norm of the vector comprising all fij i and fi ji, implies that the corresponding fij i > 0. In case x i = x max, φ low = 0 and φ + i = 0 (since c + i = 0 as A + i = ). Since x i x j > 0, the average x i < x i max = x i. In step 6 of Algorithm 1 the cost consists only of one term (since φ + i = 0), i.e. the distance of the state x i from the average

x i < x i. It is clear that this cost can be reduced by selecting φ i > 0 and consequently fij i > 0. In a similar way we can demonstrate that } { ij < 0 fji i > 0. (b) We denote the values implemented instead of the computed φ + i and φ i as (φ + i )impl and (φ i )impl, respectively. Therefore, φ i = (φ + i )impl (φ i )impl. Note that: x i +(φ + i )impl x i +x i min x i +φ i, and x i +φ i x i +x i max x i (φ i )impl. This is due to the fact that φ + i x i max x i, φ i x i x i min (Algorithm 1, step 5) and due to inequalities (φ + i )impl φ + i and (φ i )impl φ i, which follow from the min rule used in steps 5 and 7 of the Algorithm 2. Therefore, we have: (φ + i )impl +x i min x i +φ i x i max (φ i )impl. (A.6) Suppose that x min < x i < x max. This implies that A + i and A i. From part (a) of the Lemma it follows { that } for some j A i the value of f j ij > 0 and therefore f ij = min fij i,fj ij > 0. This further implies that (φ i )impl > 0 and, by the same reasoning, that (φ + i )impl > 0. Together with(a.6)thisgivestherelation: { x min < x i < x max x i +φ ( x i min max)},xi. Suppose now that x min < x i = x max. In that case φ + i = (φ + i )impl = 0 and we have: x i min x i +φ i = x i max (φ i )impl. (A.7) In step 6 of Algorithm 1 we minimize only the difference of x i φ i from the averge x i < x i = x i max. Note that for the optimal φ i we have that x i φ i x i > x min. Since (φ i )impl φ i, and by the same arguments used in case when x min < x i < x max we have that (φ i )impl > 0, it follows that x min < x i +φ i < x max. We can apply a similar procedure to show the relation: {x min = x i < x max x i +φ i (x min,x max )}. Finally, the relation {x min = x i = x max x i +φ i = x min = x max } follows directly since in that case φ i = 0 (from part (a)). (c) Consider mappings c + i : Xn R + and c i : Xn R + which assign values of variables c + i and c i to the vector x X n as specified in step 4 of the Algorithm 1. Note that c + i ( ) and c i ( ) are continuous in x. Namely, when the sets A + i and A i are fixed, the functions c + i ( ) and

c i ( ) are given as sums of continuous functions in x (minimum between constant capacity values and affine functions). At points where the sets A + i and A i change for some small perturbation of x, i.e. where x i = x j for some j A i, the functions c + i ( ) and c i ( ) behave locally as x j x i and x i x j, respectively, i.e. as affine functions in x. The optimization problem in 6 of the Algorithm 1 can be seen as a parametric quadratic optimization, with the parameter vector consisting of: φ i low, x i, x i, c + i, c i, i max and i min. Note that the resulting parametric program is strictly convex with polyhedral (convex) constraints on the parameters and that the solution set is non empty. From the standard result on parametric programming for this class of problems (cf. [17]), there exists a continuous function which maps the parameter vector into optimal values φ + i and φ i. In particular, since our problem is strictly convex, this function is a unique continuous function of the parameter vector. Since each component of the parameter vector can be evaluated by a continuous function of the vector x, and since composition of continuous functions is continuous, it follows that the mappings from the vector x to the optimal solutions φ + i and φ i are also continuous. By the same argument we conclude that the mappings from x to the optimal fij i and fji i computed in steps 7 and 8 for j A i are continuous. Finally, the implementedflowovertheedge(i,j)istheminimumbetweenf i ij andfj ij or equal to 0, as stated in Algorithm 2. As the minimum of continuous functions defined over the same domain (a subset of X n ) is continuous, the implemented flow value f ij is given by a continuous mapping of x. Consequently the actual net inflow φ i depends continuously on x. B Proof of Theorem 3.1 First, we introduce some shorthand notation. For x R n, let max(x) := max i N {x i} and min(x) := min i N {x i} The dynamics of the integrator network in which the flow is computed using Algorithms 1 and 2 can be compactly described by the equation: x(k +1) = f cl (x(k)), where f cl (x) = x + φ(x) and φ(x) := [φ 1 (x),...,φ n (x)]. From Lemma (A.6)(c), the function f cl ( ) is a continuous function.

We divide the proof into three parts. In the first part we show that the limits lim_{k→∞} max(x(k)) and lim_{k→∞} min(x(k)) exist. In the second part we show that these two limits are the same, and in the third part that all states converge to that value, which is the average of the initial conditions.

Consider an initial x(0) such that there exists at least one i ∈ N with x_i(0) < max(x(0)). From Lemma 3.1(b) we have that the sequences {max(x(k))}_{k∈N_0} and {min(x(k))}_{k∈N_0} are, respectively, monotonically decreasing (i.e. max(x(k+1)) ≤ max(x(k))) and monotonically increasing. Since they are both bounded, the limits x_max^* = lim_{k→∞} max(x(k)) and x_min^* = lim_{k→∞} min(x(k)) exist and, clearly, x_min^* ≤ x_max^*.

Next we demonstrate that x_min^* = x_max^*. Let S_k = [min(x(k)), max(x(k))]. Consider a sequence {x_i(k)}_{k∈N_0} for some i ∈ N. Clearly, x_i(k+j) ∈ S_k for all j ≥ 0, since S_{k+1} ⊆ S_k. Since all the sets S_k are compact, by compactness there exists a subsequence {x_i(p_j)}_{j∈N} such that, for x_i(p_0) ∈ S_k, the sequence {x_i(p_j)}_{j∈N} converges to a point in S_k = ∩_{l=1}^{k} S_l. In this way we can construct a subsequence {x_i(s_k)}_{k∈N} which converges to a point in S_∞ = ∩_{j=1}^{∞} S_j = [x_min^*, x_max^*]. The second equality comes from the definition of the limits, the fact that x_max^* is the greatest lower bound for max(x(k)) and, analogously, that x_min^* is the least upper bound for min(x(k)), and the fact that both x_max^* and x_min^* belong to S_k for all k ∈ N_0. We have therefore established, for the considered sequence {x(k)}_{k∈N_0}, the existence of a subsequence {x(p_k)}_{k∈N} which converges to some x^∞ ∈ [x_min^*, x_max^*]^n. The following holds:

    x_max^* = lim_{k→∞} max(x(p_k))                              (A.8a)
            = lim_{k→∞} max(x(p_{k+m}))                          (A.8b)
            = lim_{k→∞} max(f_cl^{p_{k+m} − p_k}(x(p_k)))        (A.8c)
            = max(f_cl^{p_{k+m} − p_k}(lim_{k→∞} x(p_k))),       (A.8d)

where the integer m ≥ 1 in (A.8b) is arbitrary. In step (A.8d) we use the fact that both max(·) and f_cl(·) are continuous, and so is their composition. From (A.8) it follows that for an arbitrary number m ≥ 1 of dynamic iterations the maximal element of the vector x(p_{k+m}) remains equal to x_max^*. In view of Lemma 3.1(b) this is possible if and only if, for each i ∈ N, the state x_i(p_k) is equal to the state of each of its neighbors x_j(p_k), j ∈ A_i. Since the graph G is connected by hypothesis, we conclude that x_max^* = x_1^∞ = ··· = x_n^∞ = x_min^*.

Finally, since min(x(k)) ≤ x_i(k) ≤ max(x(k)) for all k, and the sequences

{max(x(k))}_{k∈N_0} and {min(x(k))}_{k∈N_0} converge to the same value x_max^* = x_min^*, we conclude that all sequences {x_i(k)}_{k∈N_0}, for i ∈ N, converge to the same value. Since the flow is antisymmetric, every exchange conserves the total amount of resource, so the sum of all x_i(k) must equal the sum of the x_i(0); the common limit is therefore the average of the initial values x_i(0).