Distributed Algorithms for Optimization and Nash-Games on Graphs


1 Distributed Algorithms for Optimization and Nash-Games on Graphs
Angelia Nedić, ECEE Department, Arizona State University, Tempe, AZ

2 Large Networked Systems
The recent advances in wired and wireless technology have led to the emergence of large-scale networks:
- Internet
- Mobile ad-hoc networks
- Wireless sensor networks
The advances gave rise to new network applications, including:
- Decentralized network operations, including resource allocation, coordination, learning, and estimation
- Data-base networks
- Social and economic networks
As a result, there is a necessity to develop new models and tools for the design and performance analysis of such large, complex dynamic systems

3 New Applications - New Challenges
- Lack of central authority
  - The centralized network architecture is not possible (size of the network / proprietary issues)
  - Sometimes the centralized architecture is not desirable (security issues / robustness to failures)
- Network dynamics
  - Mobility of the network: the agents' spatio-temporal dynamics; the network connectivity structure is varying in time
- Time-varying network
  - The network itself is evolving in time
The challenge is to control, coordinate, design protocols, and analyze operations/performance over such networks

4 Goals
Control-optimization algorithms deployed in such networks should be:
- Completely distributed, relying on local information and observations
- Robust against changes in the network topology
- Easily implementable

5 Many Examples
- Bio-inspired systems
- Self-organized systems
- Social networks
- Opinion dynamics
- Averaging dynamics
- Multi-agent systems

6 Example: Support Vector Machine (SVM) - Centralized Case
Given a data set {z_j, y_j}_{j=1}^p, where z_j ∈ R^d and y_j ∈ {+1, −1}, find a maximum-margin separating hyperplane x*.
Centralized (not distributed) formulation:
min_{x∈R^d, ξ∈R^p} F(x) ≜ (ρ/2)||x||² + Σ_{j=1}^p max{ξ_j, 1 − y_j⟨x, z_j⟩}
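
As a quick illustration, here is a minimal Python sketch of the objective above, reading the slack terms as the standard hinge loss max{0, 1 − y_j⟨x, z_j⟩}; that reading, the toy data, and the plain subgradient loop are assumptions of this sketch, not taken from the slides.

```python
# Centralized SVM objective and a plain subgradient loop on toy data (illustrative only).
import numpy as np

def svm_objective(x, Z, y, rho):
    """F(x) = (rho/2)||x||^2 + sum_j max{0, 1 - y_j <x, z_j>} (hinge-loss reading)."""
    margins = 1.0 - y * (Z @ x)
    return 0.5 * rho * x @ x + np.maximum(0.0, margins).sum()

def svm_subgradient(x, Z, y, rho):
    active = (1.0 - y * (Z @ x)) > 0.0          # data points violating the margin
    return rho * x - Z[active].T @ y[active]

rng = np.random.default_rng(0)
Z = rng.normal(size=(40, 2))
y = np.sign(Z[:, 0] + Z[:, 1])                   # toy (roughly separable) labels
x = np.zeros(2)
for t in range(1000):
    x -= (0.5 / (t + 1)) * svm_subgradient(x, Z, y, rho=0.1)
print(svm_objective(x, Z, y, rho=0.1))           # final objective value on the toy data
```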

7 Support Vector Machine (SVM) - Decentralized Case
Given n locations, each location i with its data set {z_j, y_j}_{j∈J_i}, where z_j ∈ R^d and y_j ∈ {+1, −1}, find a maximum-margin separating hyperplane x*, without disclosing the data sets:
min_{x∈R^d, ξ∈R^p} Σ_{i=1}^n [ (ρ/2n)||x||² + Σ_{j∈J_i} max{ξ_j, 1 − y_j⟨x, z_j⟩} ]
i.e., min_{x∈R^d} F(x) = Σ_{i=1}^n f_i(x), where
f_i(x) = (ρ/2n)||x||² + min_{ξ_j∈R, j∈J_i} Σ_{j∈J_i} max{ξ_j, 1 − y_j⟨x, z_j⟩}

8 General Problem of Interest
minimize F(x) = Σ_{i=1}^m f_i(x) subject to x ∈ X   (1)
where each function f_i is convex over R^n.
- Each f_i is a local private objective of an agent (node) i
- Agents are embedded in a communication graph and can locally exchange their estimates (iterates)
- The set X is closed, convex, and known to every agent
[Figure: a network of agents with local objectives f_1(x_1,...,x_n), f_2(x_1,...,x_n), ..., f_m(x_1,...,x_n)]
The goal is to solve the overall network problem subject to the distributed knowledge of the objective component functions and subject to local agent interactions

9 Consensus Problem
Consider a connected network of m agents, each knowing its own scalar value x_i(0) at time t = 0. The problem is to design a distributed and local algorithm ensuring that the agents agree on the same value x̄, i.e.,
lim_{t→∞} x_i(t) = x̄ for all i.
Algorithm for a Static Undirected Network
At every time t, each agent i maintains a variable x_i(t), shares it with its neighbors, and performs the update
x_i(t+1) = Σ_{j∈N_i} a_ij x_j(t) for all i,
where N_i is the set of neighbors of agent i in the network (including itself), and a_ij > 0 with Σ_{j∈N_i} a_ij = 1.
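
A minimal sketch of this consensus iteration; the 4-node graph and the uniform weights a_ij = 1/|N_i| are illustrative assumptions, not taken from the slides.

```python
# Consensus iteration x_i(t+1) = sum_{j in N_i} a_ij x_j(t) on a small static graph.
import numpy as np

# Neighbor lists include the agent itself (self-loop), as required above.
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
m = len(neighbors)

# Row-stochastic weights: a_ij = 1/|N_i| for j in N_i, 0 otherwise.
A = np.zeros((m, m))
for i, N_i in neighbors.items():
    A[i, N_i] = 1.0 / len(N_i)

x = np.array([1.0, 4.0, 2.0, -3.0])   # initial values x_i(0)
for t in range(200):
    x = A @ x                         # one consensus step for all agents

print(x)  # all entries are (numerically) equal: the agents have reached consensus
```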

10 Graph Connectivity and Mixing (Weight) Matrix
The connectivity graph G = ([m], E) is the (undirected) graph with node set [m] = {1, ..., m} and edge set
E = { {i, j} : node i receives information from & sends information to node j }.
Weight Matrices
Introduce a weight matrix A which is compliant with the connectivity graph G = ([m], E) and adds self-loops:
A_ij = a_ij > 0 if {i, j} ∈ E or j = i, and A_ij = 0 otherwise.
Assumption 1 [Graph and Weights]: The graph G = ([m], E) is connected. The matrix A is row-stochastic (nonnegative entries that sum to 1 in each row).

11 Basic Result
Proposition: Under Assumption 1, the agent values converge to a consensus with a geometric rate. In particular,
lim_{k→∞} x_i(k) = α for all i,
where α is some convex combination of the initial values x_1(0), ..., x_m(0); i.e., α = Σ_{j=1}^m π_j x_j(0) with π_j > 0 for all j, and Σ_{j=1}^m π_j = 1. Furthermore,
max_i x_i(k) − min_j x_j(k) ≤ ( max_i x_i(0) − min_j x_j(0) ) β^{k/(m−1)} for all k,
where β = 1 − m η^{m−1}.
- The convergence rate is geometric
- Simplified from the case of time-varying graphs in Tsitsiklis 1984, AN and Ozdaglar
- π is the Perron-Frobenius left-eigenvector of A

12 Proof Outline
Write the system compactly as x(k+1) = A x(k) and consider the function
V(k) = max_i x_i(k) − min_j x_j(k)
as a measure of progress toward the consensus (it can be viewed as the size of the disagreement among the agents at time k).
- Under the stochasticity of the weights, the function V(k) is nonincreasing in time.
- Further, consider the behavior of the system over a window of time: x(k+1) = A^{k−s+1} x(s).
- Under the connectivity and weight conditions of Assumption 1, the matrices Φ((k+1)B − 1, kB) are primitive for some positive integer B that depends on m, where Φ(k, s) denotes the product of the weight matrices from time s through time k (here simply a power of A). (A is primitive when a_ij > 0 for all entries.)

13 This implies a strict (significant) decrease from kB to (k+1)B for the function V(k) = max_i x_i(k) − min_j x_j(k): the difference V(kB) − V((k+1)B) can be bounded by a constant term independent of k, but dependent on B. This, and the nonincreasing property of V(k), yield the convergence and the convergence rate estimate.

14 Average Consensus
We want agents to agree on the average of their initial values, i.e., we want
lim_{k→∞} x_i(k) = (1/m) Σ_{j=1}^m x_j(0) for all i.
This will happen when the weight matrix A is doubly stochastic (nonnegative entries with each row and each column sum equal to 1).
Proposition 3 [Nedić, Olshevsky, Ozdaglar, Tsitsiklis 09]: Under Assumption 1 and the double stochasticity of the weights, the agent values converge to the average consensus with a geometric rate. Specifically,
lim_{k→∞} x_i(k) = (1/m) Σ_{j=1}^m x_j(0) for all i, and
| x_i(k) − (1/m) Σ_{j=1}^m x_j(0) | ≤ c ( 1 − η/(4m²) )^k for all i and k,
where the constant c > 0 depends on η and m, and η = min_{i,j∈[m]: a_ij>0} a_ij.
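
A small sketch of average consensus; the Metropolis weight rule used below is one common way to obtain symmetric, doubly stochastic weights on an undirected graph, and is an assumption of this example rather than the construction from the slides.

```python
# Average consensus with doubly stochastic (Metropolis) weights on a path graph.
import numpy as np

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # undirected path graph (no self-loops listed)
m = len(neighbors)
deg = {i: len(N) for i, N in neighbors.items()}

A = np.zeros((m, m))
for i, N in neighbors.items():
    for j in N:
        A[i, j] = 1.0 / (1 + max(deg[i], deg[j]))    # Metropolis weight for edge {i, j}
    A[i, i] = 1.0 - A[i].sum()                       # self-weight makes each row sum to 1

# A is symmetric and row-stochastic, hence doubly stochastic.
assert np.allclose(A, A.T) and np.allclose(A.sum(axis=1), 1.0)

x0 = np.array([1.0, 4.0, 2.0, -3.0])
x = x0.copy()
for _ in range(500):
    x = A @ x
print(x, x0.mean())   # every x_i(k) approaches the average of the initial values
```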

15 Further Results
- The consensus and the average consensus results hold when the connectivity assumption at each time instant k is replaced by connectivity over a bounded time interval: Nedić, Olshevsky, Ozdaglar and Tsitsiklis 09 (NOOT09)
- Consensus in the presence of delays (Nedić and Ozdaglar 08; Bliman, Nedić and Ozdaglar)
- Quantization effects (Kashyap, Srikant and Başar 06; Carli et al. 07; Kar and Moura 07; NOOT09)
- Consensus over random networks (Tahbaz-Salehi and Jadbabaie 08; Hatano and Mesbahi 05; Touri 11)
- Consensus over a network with noisy links (Kar and Moura 07; Touri and Nedić 09)
- Consensus with various other aspects: A.S. Morse, J. Lin, B.D.O. Anderson, M. Cao, B. Touri, W. Ren, D. Shah, A. Scaglione, M. Rabbat, M. Johansson, K. Johansson, J. Lorenz, J. Cortes, L. Pavel, K. Kvaternik, M. Hong, B. Gharesifard, A. Dominguez-Garcia, C. Hadjicostis, S. Sundaram, N. Vaidya, ...

16 Consensus as an Optimization Problem: Static Network
Given a connected graph and a stochastic weight matrix A (positive diagonal):
- Consensus problem: determine an x such that Ax = x
- This is a fixed-point problem: under certain contractive properties of A, the fixed-point iterations x_{k+1} = A x_k converge to a fixed point.
- Merit-function approach: min_{x∈R^m} ||Ax − x||²
Consensus algorithms can be viewed as distributed algorithms for the minimization of some merit function associated with the underlying fixed-point problem

17 Consensus as an Optimization Problem: Laplacians
Given a graph and a weight matrix A, let L_A be the Laplacian associated with (G, A).
- Consensus problem: determine an x such that L_A x = 0
- This is a zero-point problem that is moved to a fixed-point problem of the form
(I − α L_A) x = x for any α,
where I is the identity.
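
A minimal sketch of the resulting fixed-point iteration x_{k+1} = (I − α L_A) x_k; the path graph and the choice α = 1/(max degree + 1), which makes I − α L_A a stochastic matrix, are assumptions of this example.

```python
# Consensus via the graph Laplacian: iterate x <- (I - alpha*L_A) x.
import numpy as np

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a path graph
L = np.diag(W.sum(axis=1)) - W              # Laplacian L_A = D - W
alpha = 1.0 / (W.sum(axis=1).max() + 1.0)   # small enough that I - alpha*L_A is stochastic

x = np.array([1.0, 4.0, 2.0, -3.0])
for _ in range(500):
    x = x - alpha * (L @ x)                 # x_{k+1} = (I - alpha*L_A) x_k

print(x)  # at the limit L_A x = 0, i.e., all agents hold the same value
```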

18 Computational Model for Optimization over a Network
- Optimization problem: classic
- Problem data distributed: new

19 General Problem of Interest
minimize F(x) = Σ_{i=1}^m f_i(x) subject to x ∈ X   (2)
where each function f_i is convex over R^n.
- Each f_i is a local private objective of an agent (node) i
- Agents are embedded in a communication graph and can locally exchange their estimates (iterates)
- The set X is closed, convex, and known to every agent
[Figure: a network of agents with local objectives f_1(x_1,...,x_n), ..., f_m(x_1,...,x_n)]

20 How Do Agents Manage to Optimize the Global Network Problem?
minimize F(x) = Σ_{i=1}^m f_i(x) subject to x ∈ X ⊆ R^n
- Each agent i will generate its own estimate x_i(t) of an optimal solution to the problem
- Each agent will update its estimate x_i(t) by performing two steps:
  - Consensus-like step (a mechanism to align the agents' estimates toward a common point)
  - Local gradient-based step (to minimize its own objective function)
- C. Lopes and A. H. Sayed, Distributed processing over adaptive networks, Proc. Adaptive Sensor Array Processing Workshop, MIT Lincoln Laboratory, MA, June
- A. H. Sayed and C. G. Lopes, Adaptive processing over distributed networks, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E90-A, no. 8
- A. Nedić and A. Ozdaglar, On the Rate of Convergence of Distributed Asynchronous Subgradient Methods for Multi-agent Optimization, Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, USA, 2007
- A. Nedić and A. Ozdaglar, Distributed Subgradient Methods for Multi-agent Optimization, IEEE Transactions on Automatic Control 54 (1) 48-61

21 Distributed Optimization Algorithm
minimize F(x) = Σ_{i=1}^m f_i(x) subject to x ∈ X ⊆ R^n
- At time t, each agent i has its own estimate x_i(t) of an optimal solution to the problem
- At time t+1, agents communicate their estimates to their neighbors and update by performing two steps:
Consensus-like step to mix their own estimate with those received from neighbors:
w_i(t+1) = Σ_{j=1}^m a_ij x_j(t)   (a_ij = 0 when j ∉ N_i)
Followed by a local gradient-based step:
x_i(t+1) = Π_X[ w_i(t+1) − α(t) ∇f_i(w_i(t+1)) ],
where Π_X[y] is the Euclidean projection of y on X, f_i is the local objective of agent i, and α(t) > 0 is a stepsize
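
A minimal sketch of the two-step update (consensus mixing followed by a local projected gradient step); the doubly stochastic path-graph weights, the quadratic local objectives f_i(x) = (1/2)(x − b_i)², and the interval X = [−10, 10] are illustrative assumptions.

```python
# Distributed projected gradient: mix with neighbors, then step along the local gradient.
import numpy as np

A = np.array([[2/3, 1/3, 0.0, 0.0],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [0.0, 0.0, 1/3, 2/3]])      # doubly stochastic weights on a path graph
b = np.array([1.0, 4.0, 2.0, -3.0])       # local data: f_i(x) = 0.5*(x - b_i)^2
m = len(b)

def grad_f(i, x):
    return x - b[i]                        # gradient of the local objective f_i

def project_X(y):
    return np.clip(y, -10.0, 10.0)         # Euclidean projection onto X = [-10, 10]

x = np.zeros(m)                            # x_i(0): one scalar estimate per agent
for t in range(5000):
    alpha = 1.0 / (t + 1)                  # stepsize: sum = infinity, sum of squares finite
    w = A @ x                              # w_i(t+1) = sum_j a_ij x_j(t)
    x = np.array([project_X(w[i] - alpha * grad_f(i, w[i])) for i in range(m)])

print(x)  # all estimates approach argmin_x sum_i f_i(x) = mean(b) = 1.0
```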

22 Intuition Behind the Algorithm
It can be viewed as a consensus process steered by a "force":
x_i(t+1) = w_i(t+1) + ( Π_X[ w_i(t+1) − α(t)∇f_i(w_i(t+1)) ] − w_i(t+1) )
         = w_i(t+1) + ( Π_X[ w_i(t+1) − α(t)∇f_i(w_i(t+1)) ] − Π_X[ w_i(t+1) ] ),
where the bracketed difference is small for a small stepsize α(t). Moreover,
w_i(t+1) − α(t)∇f_i(w_i(t+1)) = Σ_{j=1}^m a_ij x_j(t) − α(t) ∇f_i( Σ_{j=1}^m a_ij x_j(t) ).
Matrices A that lead to consensus also yield convergence of the optimization algorithm

23 Convergence Result for a Static Network
Convex Problem: Let X be closed and convex, and let each f_i: R^n → R be convex with bounded (sub)gradients over X. Assume the problem min_{x∈X} Σ_{i=1}^m f_i(x) has a solution.
Stepsize Rule: Let the stepsize α(t) be such that Σ_{t=0}^∞ α(t) = ∞ and Σ_{t=0}^∞ α²(t) < ∞.
Network: Let the graph ([m], E) be directed and strongly connected, and let the matrix A = [a_ij] of the agents' weights be doubly stochastic.
Then lim_{t→∞} x_i(t) = x* for all i, where x* is a solution of the problem.

24 Proof Outline
Use Σ_{i=1}^m ||x_i(t) − x*||² as a Lyapunov function, where x* is a solution to the problem.
Due to convexity and (sub)gradient boundedness, we have
Σ_{i=1}^m ||x_i(t+1) − x*||² ≤ Σ_{i=1}^m ||w_i(t+1) − x*||² − 2α(t) Σ_{i=1}^m ( f_i(w_i(t+1)) − f_i(x*) ) + α²(t) C².
By w_i(t+1) = Σ_{j=1}^m a_ij x_j(t) and the double stochasticity of A, we have
Σ_{i=1}^m ||x_i(t+1) − x*||² ≤ Σ_{j=1}^m ||x_j(t) − x*||² − 2α(t) Σ_{i=1}^m ( f_i(w_i(t+1)) − f_i(x*) ) + α²(t) C².
Thus, letting s(t+1) = (1/m) Σ_{i=1}^m x_i(t+1), we see
Σ_{i=1}^m ||x_i(t+1) − x*||² ≤ Σ_{j=1}^m ||x_j(t) − x*||² − 2α(t) Σ_{i=1}^m ( f_i(s(t+1)) − f_i(x*) ) + 2α(t) Σ_{i=1}^m ( f_i(s(t+1)) − f_i(w_i(t+1)) ) + α²(t) C².

25 Letting F(x) = Σ_{i=1}^m f_i(x) and using (sub)gradient boundedness, we find
V(t+1) ≤ V(t) − 2α(t) ( F(s(t+1)) − F(x*) ) + 2α(t) C Σ_{i=1}^m ||s(t+1) − w_i(t+1)|| + α²(t) C²,
where V(t) = Σ_{i=1}^m ||x_i(t) − x*||² and F(s(t+1)) − F(x*) ≥ 0.
For convergence it suffices that two relations hold: Σ_{t=0}^∞ α²(t) < ∞ (guaranteed by the stepsize rule) and
Σ_{t=0}^∞ α(t) C Σ_{i=1}^m ||s(t+1) − w_i(t+1)|| < ∞,
which allows us to conclude that F(s(t+1)) − F(x*) → 0 along a subsequence.
But this is not enough; we also want to show that the x_i(t) converge to the same optimal solution for all agents i. This will hold if we can show ||s(t+1) − x_i(t+1)|| → 0 as t → ∞ for all i.

26 The trouble is in showing ||s(t+1) − x_i(t+1)|| → 0 as t → ∞ for all i, which is exactly where the network impact is: it is important to know that the network is diffusing (mixing) the information fast enough. Formally speaking, the rate of convergence of A^t to its limit is critical.
- When the network is connected, the matrices A^t converge to the matrix (1/m) 1 1^T as t → ∞.
- The convergence rate is geometric: | [A^t]_ij − 1/m | ≤ q^t, where q ∈ (0, 1).
We have, for arbitrary 0 ≤ τ < t,
x_i(t+1) = w_i(t+1) + ( Π_X[ w_i(t+1) − α(t)∇f_i(w_i(t+1)) ] − w_i(t+1) ), where the second term is denoted e_i(t),
         = Σ_{j=1}^m a_ij x_j(t) + e_i(t)
         = ...
         = Σ_{j=1}^m [A^{t+1−τ}]_ij x_j(τ) + Σ_{k=τ+1}^t Σ_{j=1}^m [A^k]_ij e_j(t−k) + e_i(t).

27 Similarly, for s(t+1) = (1/m) Σ_{i=1}^m x_i(t+1) we have
s(t+1) = s(t) + (1/m) Σ_{j=1}^m e_j(t) = ... = (1/m) Σ_{j=1}^m x_j(τ) + Σ_{k=τ+1}^t (1/m) Σ_{j=1}^m e_j(t−k) + (1/m) Σ_{j=1}^m e_j(t).
Thus,
||x_i(t+1) − s(t+1)|| ≤ q^{t+1−τ} Σ_{j=1}^m ||x_j(τ)|| + Σ_{k=τ+1}^t q^k Σ_{j=1}^m ||e_j(t−k)|| + ||e_i(t)|| + (1/m) Σ_{j=1}^m ||e_j(t)||.
By choosing τ such that ||e(t)|| ≤ ε for all t ≥ τ, and then using some properties of the sequences involved in the above relation, we show ||x_i(t+1) − s(t+1)|| → 0.

28 Related Papers (time-varying graphs)
- AN and A. Ozdaglar, Distributed Subgradient Methods for Multi-agent Optimization, IEEE Transactions on Automatic Control 54 (1) 48-61. The paper looks at a basic (sub)gradient method with a constant stepsize.
- S.S. Ram, AN, and V.V. Veeravalli, Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization, Journal of Optimization Theory and Applications 147 (3). The paper looks at a stochastic (sub)gradient method with diminishing as well as constant stepsizes.
- S.S. Ram, AN, and V.V. Veeravalli, A New Class of Distributed Optimization Algorithms: Application to Regression of Distributed Data, Optimization Methods and Software 27 (1) 71-88. The paper looks at an extension of the method to other types of network objective functions.

29 Other Extensions
w_i(t+1) = Σ_{j=1}^m a_ij(t) x_j(t)   (a_ij(t) = 0 when j ∉ N_i(t))
x_i(t+1) = Π_X[ w_i(t+1) − α(t) ∇f_i(w_i(t+1)) ]
Extensions include:
- The gradient directions ∇f_i(w_i(t+1)) can be erroneous:
x_i(t+1) = Π_X[ w_i(t+1) − α(t)( ∇f_i(w_i(t+1)) + ϕ_i(t+1) ) ]
[Ram, Nedić, Veeravalli 2009, 2010; Srivastava and Nedić 2011]
- The links can be noisy, i.e., x_j(t) is sent to agent i, but the agent receives x_j(t) + ε_ij(t) [Srivastava and Nedić 2011]
- The updates can be asynchronous; the edge set E(t) is random [Ram, Nedić, and Veeravalli - gossip; Nedić 2011]

30 The set X can be X = ∩_{i=1}^m X_i, where each X_i is private information of agent i:
x_i(t+1) = Π_{X_i}[ w_i(t+1) − α(t) ∇f_i(w_i(t+1)) ]
[Nedić, Ozdaglar, and Parrilo 2010; Srivastava and Nedić 2011; Lee and AN 2013]
- Different sum-based functional structures [Ram, Nedić, and Veeravalli 2012]
- S. S. Ram, AN, and V.V. Veeravalli, Asynchronous Gossip Algorithms for Stochastic Optimization: Constant Stepsize Analysis, in Recent Advances in Optimization and its Applications in Engineering, the 14th Belgian-French-German Conference on Optimization (BFG), M. Diehl, F. Glineur, E. Jarlebring and W. Michiels (Eds.), 2010
- A. Nedić, Asynchronous Broadcast-Based Convex Optimization over a Network, IEEE Transactions on Automatic Control 56 (6)
- S. Lee and A. Nedić, Distributed Random Projection Algorithm for Convex Optimization, IEEE Journal of Selected Topics in Signal Processing, a special issue on Adaptation and Learning over Complex Networks, 7, 2013
- K. Srivastava and A. Nedić, Distributed Asynchronous Constrained Stochastic Optimization, IEEE Journal of Selected Topics in Signal Processing 5 (4). Uses different weights.

31 Advantages/Disadvantages - Other Possibilities
- The network can be used to diffuse information that is not globally available to all the nodes
- The speed of the information spread depends on the network's connectivity as well as on the communication protocols that are employed
- Mixing can be slow, but it is stable
- Error/rate estimates are available and scale as m^{3/2} at best in the size m of the network
- Problems with special structure may have better rates: Jakovetić, Xavier, Moura
Drawback: doubly stochastic weights are required:
- Can be accomplished with some additional weight exchange in bi-directional graphs
- Difficult to ensure in directed graphs
- D. Jakovetić, J. Xavier, J. Moura, Fast Distributed Gradient Methods, IEEE TAC 56 (5), 2014
- B. Gharesifard and J. Cortes, Distributed strategies for generating weight-balanced and doubly stochastic digraphs, European Journal of Control, 18 (6)

32 Push-Sum Based Computational Model
The method is based on a protocol proposed by
- D. Kempe, A. Dobra, and J. Gehrke, Gossip-based computation of aggregate information, Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003
and studied further in
- F. Benezit, V. Blondel, P. Thiran, J. Tsitsiklis, and M. Vetterli, Weighted gossip: distributed averaging using non-doubly stochastic matrices, in Proceedings of the 2010 IEEE International Symposium on Information Theory, June 2010
Related Work: Static Network
- A.D. Dominguez-Garcia and C. Hadjicostis, Distributed strategies for average consensus in directed graphs, in Proceedings of the IEEE Conference on Decision and Control, Dec.
- C. N. Hadjicostis, A.D. Dominguez-Garcia, and N.H. Vaidya, Resilient Average Consensus in the Presence of Heterogeneous Packet Dropping Links, CDC
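
For reference, here is a minimal sketch of push-sum averaging in the spirit of the Kempe-Dobra-Gehrke protocol cited above: each node keeps a sum variable and a weight variable, mixes both through a column-stochastic matrix built from its out-neighbors, and uses their ratio as its estimate. The 4-node directed ring and the equal splitting rule are assumptions of this sketch.

```python
# Push-sum averaging on a directed, strongly connected graph (no doubly stochastic weights needed).
import numpy as np

out_neighbors = {0: [1], 1: [2], 2: [3], 3: [0]}   # directed ring; each node also keeps a share
m = len(out_neighbors)

# Column-stochastic mixing matrix: node i splits its mass equally among itself
# and its out-neighbors, so column i of C sums to 1.
C = np.zeros((m, m))
for i, outs in out_neighbors.items():
    share = 1.0 / (len(outs) + 1)
    C[i, i] = share
    for j in outs:
        C[j, i] = share

x0 = np.array([1.0, 4.0, 2.0, -3.0])
s = x0.copy()          # push-sum "sum" variable
w = np.ones(m)         # push-sum "weight" variable
for _ in range(200):
    s = C @ s
    w = C @ w

print(s / w, x0.mean())   # the ratios s_i/w_i all approach the average of x(0)
```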

33 
- K.I. Tsianos, The Role of the Network in Distributed Optimization Algorithms: Convergence Rates, Scalability, Communication/Computation Tradeoffs and Communication Delays, PhD thesis, McGill University, Dept. of Electrical and Computer Engineering
- K.I. Tsianos, S. Lawlor, and M.G. Rabbat, Consensus-based distributed optimization: Practical issues and applications in large-scale machine learning, in Proceedings of the 50th Allerton Conference on Communication, Control, and Computing
- K.I. Tsianos, S. Lawlor, and M.G. Rabbat, Push-sum distributed dual averaging for convex optimization, in Proceedings of the IEEE Conference on Decision and Control
- K.I. Tsianos and M.G. Rabbat, Distributed consensus and optimization under communication delays, in Proc. of the Allerton Conference on Communication, Control, and Computing

34 Time-Varying Graphs
- AN and Alex Olshevsky, Distributed optimization over time-varying directed graphs, IEEE TAC 60 (3), 2015
and its rate analysis is in
- AN and Alex Olshevsky, Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs (accepted in IEEE TAC)

35 Random Gossip-Based Algorithm
- Consider a network with node (agent) set [m] = {1, ..., m}
- The underlying communication graph G = ([m], E) is undirected and connected
- We want to solve the following convex problem through asynchronous updates:
min_{x∈X} Σ_{i=1}^m f_i(x)
- Each agent i knows its objective f_i and can compute the (sub)gradients ∇f_i(x) with some stochastic errors
- Agents can communicate only with their neighbors
NOTE: A special case of this problem is when f_i(x) = E[F_i(x, ξ(x))]

36 Gossip Optimization Algorithm
Uses a gossip algorithm for iterate mixing; the gossip model can be found in Boyd et al. 05, Dimakis et al. 07, Aysal et al. 09.
- Each node has a local clock ticking at a Poisson rate of 1
- At a tick of its clock, a node wakes up and contacts its neighbor j with probability P_ij
- Equivalent: a single virtual clock ticking at rate m
- Z_k: the time of the k-th tick of the virtual clock
- Time is discretized according to the intervals [Z_{k−1}, Z_k): time k
- I_k: the index of the agent waking up at time k
- J_k: the index of the neighbor contacted by agent I_k
- At time k, agents I_k and J_k exchange information and update; the other agents do nothing

37 
- Agent i's information state at time k: x_k^i ∈ R^n
- Agents that do not update: x_{k+1}^i = x_k^i for i ∉ {I_k, J_k}
- Update at time k: for i ∈ {I_k, J_k},
v_k^i = (1/2)( x_k^{I_k} + x_k^{J_k} ),
x_{k+1}^i = Π_X[ v_k^i − α_{i,k}( ∇f_i(v_k^i) + ε_k^i ) ]
- Agents I_k and J_k average their values, and each locally minimizes its own objective f_i at x = v_k^i
- ε_k^i is the subgradient error of f_i at x = v_k^i
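
A minimal sketch of the random gossip updates above; the path graph, the uniform choice of the contacted neighbor, the quadratic objectives f_i(x) = (1/2)(x − b_i)², and the interval X = [−10, 10] are illustrative assumptions.

```python
# Random gossip optimization: two neighbors average their iterates, then each takes
# a local projected (sub)gradient step with stepsize 1/Gamma_i(k).
import numpy as np

rng = np.random.default_rng(0)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # undirected path graph
b = np.array([1.0, 4.0, 2.0, -3.0])                   # f_i(x) = 0.5*(x - b_i)^2
m = len(b)

x = np.zeros(m)                 # local iterates x_k^i
counts = np.zeros(m)            # Gamma_i(k): number of updates of agent i so far

for k in range(20000):
    I = rng.integers(m)                               # agent whose clock ticks
    J = rng.choice(neighbors[I])                      # contacted neighbor
    v = 0.5 * (x[I] + x[J])                           # averaging step
    for i in (I, J):
        counts[i] += 1
        alpha = 1.0 / counts[i]                       # random stepsize 1/Gamma_i(k)
        grad = v - b[i]                               # (sub)gradient of f_i at v
        x[i] = np.clip(v - alpha * grad, -10.0, 10.0) # projection onto X

print(x)  # all iterates approach argmin_x sum_i f_i(x) = mean(b) = 1.0
```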

38 Alternative Representation
Compact representation of the agents' local-update relations: for all i,
v_k^i = Σ_{j=1}^m [W_k]_ij x_k^j,
x_{k+1}^i = Π_X[ v_k^i − α_{i,k}( ∇f_i(v_k^i) + ε_k^i ) χ_{{i∈{I_k,J_k}}} ],
where
W_k = I − (1/2)( e_{I_k} − e_{J_k} )( e_{I_k} − e_{J_k} )^T,
and e_j is the j-th unit vector of the standard Euclidean basis for R^m.
The matrix W_k has some properties that are key in the analysis:
- W_k is symmetric and stochastic, hence doubly stochastic
- W_k is a projection matrix: W_k² = W_k
- E[W_k] is positive semidefinite, symmetric, and doubly stochastic
- The second largest eigenvalue of E[W_k] is λ < 1 when the graph G = ([m], E) is connected

39 Assumptions
- The set X is compact and convex.
- The functions f_i are convex over an open set containing X. As a consequence, the subgradients of each f_i are uniformly bounded: ||∇f_i(x)|| ≤ C for all x ∈ X and all i.
- The errors ε_k^i have zero mean and bounded second moment: for all i and k,
E[ ε_k^i | F_{k−1}, I_k, J_k ] = 0,   E[ ||ε_k^i||² | F_{k−1}, I_k, J_k ] ≤ ν for some ν > 0,
where F_k is the history of the method up to time k:
F_k = { x_0^i, i ∈ [m] } ∪ { I_l, J_l, ε_l^{I_l}, ε_l^{J_l}, l = 0, ..., k−1 }.

40 Key Approximation Result
To study the performance of the algorithm, we first introduce a measure of disagreement among the agents' estimates, E[ ||x_k^i − ȳ_k|| ], with ȳ_k = (1/m) Σ_{j=1}^m x_k^j.
Lemma: Under the assumption on the errors ε_k^i, if the stepsize is defined through the relative frequency α_{i,k} = 1/Γ_i(k), then with probability 1 we have
Σ_k (1/k) Σ_{i=1}^m ||x_{k−1}^i − ȳ_{k−1}|| < ∞   and   lim_{k→∞} ||x_k^i − ȳ_k|| = 0.
Proposition: Consider the random stepsize given by α_{i,k} = 1/Γ_i(k), where Γ_i(k) is the number of times agent i updates up to time k, inclusively, and let the assumptions on X, the f_i's, and the errors ε_k^i hold. Then, the sequences {x_k^i}, i = 1, ..., m, converge to the same optimal solution with probability 1.
Proof base: the law of the iterated logarithm (for an i.i.d. process): with probability 1, for any q > 0,
lim_{k→∞} ( Γ_i(k) − k γ_i ) / k^{1/2+q} = 0,
where γ_i is the probability that agent i updates.

41 Aggregative Games on Graphs
We have a system with N agents, where each agent i solves the following problem:
minimize f_i(x_i, s(x)), with s(x) = Σ_{j=1}^N x_j, subject to x_i ∈ K_i ⊆ R^n.   (AggGame)
- f_i(·,·) and K_i are known only to agent i
- Agent i decides on x_i and does not have access to the decisions x_j of the other agents
- Unlike [Jensen06, MartimortStole2010], the emphasis here is on the computation of an equilibrium in the absence of instantaneous access to the aggregate value s(x)
- This can be achieved when the agents are able to communicate locally over a connected network

42 Standard Algorithm
minimize f_i(x_i, s(x)), with s(x) = Σ_{j=1}^N x_j, subject to x_i ∈ K_i ⊆ R^n.   (AggGame)
Under ideal conditions and suitable assumptions, each player updates
x_i^{k+1} = Π_{K_i}[ x_i^k − α ∇_{x_i} f_i(x_i^k, s(x^k)) ].
Projection-based [Facchinei-Pang03, Alpcan03] and their regularized variants [Kannan-Shanbhag10].
Are there distributed algorithms? What is the challenge then?
- No agent knows s(x^k) = Σ_{i=1}^N x_i^k
- No central entity exists to provide it.

43 Assumptions
Basic Assumptions: For each i = 1, ..., N,
- The set K_i ⊆ R^n is compact and convex
- The function f_i(x_i, y) is continuously differentiable in (x_i, y) over some open set containing the set K_i × K, where K = K_1 + ··· + K_N
- The function x_i ↦ f_i(x_i, s(x)), with s(x) = Σ_{j=1}^N x_j, is convex over the set K_i
These assumptions are sufficient for the existence of an equilibrium.
Define the mapping Φ(x) by
Φ(x) ≜ ( F_1(x_1, s(x)), ..., F_N(x_N, s(x)) ),   F_i(x_i, s(x)) = ∇_{x_i} f_i(x_i, s(x)),   s(x) = Σ_{j=1}^N x_j

44 Strict Monotonicity
The mapping Φ(x) is strictly monotone over K_1 × ··· × K_N, i.e.,
( Φ(x) − Φ(x') )^T ( x − x' ) > 0 for all x, x' ∈ K_1 × ··· × K_N with x ≠ x'.
Unique Equilibrium
Consider the aggregative Nash game defined in (AggGame) and suppose the Basic Assumptions hold. Then, the game admits a unique Nash equilibrium.
Computational
The mapping F_i(x_i, u) is uniformly Lipschitz continuous in u over K = K_1 + ··· + K_N, for every fixed x_i ∈ K_i, i.e., for some L_i > 0,
|| F_i(x_i, u) − F_i(x_i, z) || ≤ L_i || u − z ||.

45 Distributed Synchronous Algorithm
There is an underlying connected undirected communication graph G = ([N], E):
- [N] = {1, ..., N} denotes the set of N agents
- E is the set of edges
- N_i: the immediate neighbors of agent i
Variational Inequality Formulation
We want to determine a point x* = (x_1*, ..., x_N*) ∈ K_1 × ··· × K_N such that
⟨ Φ(x*), x − x* ⟩ ≥ 0 for all x ∈ K_1 × ··· × K_N,
where Φ(x) ≜ ( F_1(x_1, s(x)), ..., F_N(x_N, s(x)) )

46 
F_i(x_i, s(x)) = ∇_{x_i} f_i(x_i, s(x)),   s(x) = Σ_{j=1}^N x_j
We use a suitable analog of
x_i^{k+1} = Π_{K_i}[ x_i^k − α ∇_{x_i} f_i(x_i^k, s(x^k)) ],
where we distribute the learning of s(x^k) in the network. The standard algorithm assumes that s(x^k) is accessible to all agents: projection-based [Facchinei-Pang03, Alpcan03] and their regularized variants [Kannan-Shanbhag10].
Are there distributed algorithms? The challenge:
- No agent knows s(x^k) = Σ_{i=1}^N x_i^k
- No central entity exists to provide it.

47 Outline of the Synchronous Algorithm: the Three Steps
1. Learn the aggregate: Each player maintains and updates an estimate v_i^k of s(x^k) = Σ_{i=1}^N x_i^k (tricky since the x_j^k are the players' private variables and are changing in time). Every player communicates its estimate v_i^k to its neighbors.
2. Update the decision: Using the aligned estimate, agents update their decisions x_i^k.
3. Update the estimate of the aggregate: Agents update their estimates to reflect their current decisions.
Inspiration: consensus-based algorithms, Tsitsiklis84, RNV 2012

48 Algorithm
1. Learn the aggregate:
v̂_i^k = Σ_{j∈N_i} w_ij v_j^k,   v_i^0 = x_i^0,
where w_ij(k) is agent i's weight on agent j's information at step k.
2. Update the decision:
x_i^{k+1} = Π_{K_i}[ x_i^k − α_k F_i(x_i^k, N v̂_i^k) ],
where agent i uses its estimate N v̂_i^k instead of Σ_{j=1}^N x_j^k.
3. Update the estimate of the aggregate (a step suggested by Ram in 2012):
v_i^{k+1} = v̂_i^k + x_i^{k+1} − x_i^k
- J. Koshal, A. Nedić, and U. V. Shanbhag, Distributed Algorithms for Aggregative Games on Graphs
- S.S. Ram, AN, and V.V. Veeravalli, A New Class of Distributed Optimization Algorithms: Application to Regression of Distributed Data, Optimization Methods and Software 27 (1) 71-88
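
A minimal sketch of the three steps on a toy aggregative game with f_i(x_i, s) = c_i x_i + x_i s − P x_i and K_i = [0, 10], so that the gradient map is F_i(x_i, u) = c_i + u − P + x_i; the game data, the path-graph weights, and the stepsize are all assumptions of this sketch, chosen only to make the iteration concrete.

```python
# Distributed aggregative-game iteration: learn aggregate, update decision, update estimate.
import numpy as np

W = np.array([[2/3, 1/3, 0.0, 0.0],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [0.0, 0.0, 1/3, 2/3]])      # doubly stochastic weights w_ij on a path graph
N = 4
P, c = 10.0, np.array([1.0, 2.0, 3.0, 4.0])   # toy game data (assumed for illustration)

x = np.zeros(N)        # decisions x_i^0
v = x.copy()           # aggregate estimates v_i^0 = x_i^0

for k in range(5000):
    alpha = 1.0 / (k + 1)                           # non-increasing, sum = inf, sum of squares < inf
    v_hat = W @ v                                   # 1. learn aggregate: mix neighbors' estimates
    grad = c + N * v_hat - P + x                    # F_i(x_i^k, N*v_hat_i^k) for this toy game
    x_new = np.clip(x - alpha * grad, 0.0, 10.0)    # 2. update decision (projection onto K_i)
    v = v_hat + x_new - x                           # 3. update estimate of the aggregate
    x = x_new

print(x)  # approaches the Nash equilibrium x* = (3, 2, 1, 0) of this toy example
```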

49 Convergence of the Synchronous Algorithm
Proposition [Koshal-AN-Shanbhag 2016]: Let our assumptions on the game hold. Further, let the stepsize α_k be chosen such that the following hold:
1. The sequence {α_k} is monotonically non-increasing, i.e., α_{k+1} ≤ α_k for all k;
2. Σ_{k=0}^∞ α_k = ∞;
3. Σ_{k=0}^∞ α_k² < ∞.
Then, the sequence {x^k} generated by the algorithm converges to the (unique) solution x* of the game.
- To fully decentralize the algorithm, we can use a random gossip protocol
- The results are analogous to those of the gossip algorithm for optimization

50 Remarks
- Consensus protocols are useful for tracking network-wide aggregate quantities in a distributed fashion
Recent work in our group:
- Bayesian learning for hypothesis testing in graphs with a consensus mechanism for alignment
- Distributed gradient-tracking algorithms with a linear convergence rate for time-varying graphs
