Distributed Computation of Quantiles via ADMM


Franck Iutzeler

Abstract: In this paper, we derive distributed synchronous and asynchronous algorithms for computing quantiles of the agents' local values. These algorithms are based on the formulation of a suitable problem, explicitly solvable by the Alternating Direction Method of Multipliers (ADMM), and on recent randomized optimization methods.

Index Terms: Quantile, Median, Gossiping, ADMM, Distributed Algorithms.

F. Iutzeler is with Laboratoire Jean Kuntzmann, Université Grenoble Alpes, Grenoble, France.

I. INTRODUCTION

Consider a connected network of N agents, each with a scalar value (a_i)_{i=1,..,N}. The problem of distributively computing a function of these values by exchanging information locally over the network has received a lot of interest [1], [2], [3]. In particular, solving this problem in an asynchronous way has attracted the attention of the community, notably with applications to Wireless Sensor Networks [4]. The most famous gossiping problems include average computation [4], [5], [6], [7], [8], greatest value spreading [9], [10], [11], and some more involved means (see [12] and references therein). However, to the best of our knowledge, no algorithm is available for quantile or median computation, even though they have great potential applications, notably in the field of distributed statistical signal processing. Indeed, the median, for instance, is a more robust estimator of the central tendency than the average, and quantiles are generally more robust than the variance when looking at dispersion.

Recently, distributed optimization algorithms were developed [13], [14], along with randomized, asynchronous versions [15], [16]. These methods are based on formulating proper distributed optimization problems and solving them using splitting-based optimization methods such as the popular Alternating Direction Method of Multipliers (ADMM) [17], [18], [19] or Primal-Dual algorithms [20], [21]; then, randomized versions are derived using suited coordinate descent [15], [22].

In this note, we first design a convex optimization problem whose solutions meet the sought quantile. Then, in Section III, we use a distributed formulation of ADMM to derive a distributed algorithm for quantile computation. In Section IV, an asynchronous version whose communication scheme mimics Random Gossip [4] is proposed. Finally, we illustrate the relevance of our algorithms in Section V.

II. QUANTILE FINDING AS AN OPTIMIZATION PROBLEM

Let a = (a_i)_{i=1,..,N} be a size-N real vector and let s be a permutation that sorts a in increasing order. For a given q ∈ [0, 1], our goal is to find a q-quantile estimator of the vector a, in the sense that we wish to find a real value a_q such that

    a_q ∈ [ a_{s(⌈qN⌉)} ; a_{s(⌈qN⌉+1)} ]   if q ∈ [1/N, 1[,
    a_q ≤ min_i a_i                          if q < 1/N,
    a_q ≥ max_i a_i                          if q = 1.        (1)

Note that the degenerate cases where qN < 1 and qN = N can also boil down to finding the minimum and the maximum of the agents' values, respectively (see e.g. [11]). In this section, we formulate an optimization problem, based on proposed quantile objective functions, whose solutions verify the sought conditions of Eq. (1).

A. Objective functions

Let us define the quantile objective functions (f_a^ν)_{a,ν} as

    f_a^ν : R → R,   f_a^ν(x) = ν (a − x)   if x < a,
                                 x − a       if x ≥ a,         (2)

which rely on two parameters: a ∈ R, a point of the set; and ν > 0, a scale parameter. It is easy to see that, for any a ∈ R and any ν > 0, f_a^ν is convex and continuous.
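To make the shape of these objectives concrete, here is a minimal Python sketch (ours, for illustration; the names are not from the paper). It also checks numerically a simple identity: for ν = q/(1−q), f_a^ν is the pinball (check) loss of quantile regression rescaled by the constant 1/(1−q).

```python
import numpy as np

def f(x, a, nu):
    # Quantile objective f_a^nu: slope -nu to the left of a, slope +1 to its right.
    return nu * (a - x) if x < a else x - a

# For nu = q/(1-q), f equals the pinball (check) loss rescaled by 1/(1-q).
q, a, x = 0.8, 3.0, 1.7
nu = q / (1 - q)
pinball = q * (a - x) if x < a else (1 - q) * (x - a)
assert np.isclose(f(x, a, nu), pinball / (1 - q))
```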
In addition, the proximity operator of f_a^ν can be explicitly computed for any γ > 0 and any z ∈ R as (see Appendix A)

    prox_{γ f_a^ν}(z) := argmin_{w∈R} { f_a^ν(w) + (1/2γ) (w − z)² }
                       = z + νγ   if z < a − νγ,
                         z − γ    if z > a + γ,
                         a        if z ∈ [a − νγ ; a + γ].      (3)

The fact that f_a^ν is not differentiable but has an explicit proximal operator naturally leads us to consider proximal minimization algorithms like ADMM or primal-dual algorithms for problems involving such functions (see [23, Chap. 27]).

B. Equivalent Problem

Consider the optimization problem:

    min_{x∈R}  f(x) := Σ_{i=1}^N f_{a_i}^ν(x).      (4)

Lemma 1: Let ν = q/(1−q). Then, the minimizers of Prob. (4) verify Eq. (1).

The proof is reported in Appendix B. Notice that, although other choices for ν are possible, we emphasize that the present one does not necessitate any knowledge of the total number of agents N, which is a very attractive property in practical networked systems.
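A short sketch of the proximity operator (3), together with an empirical check of Lemma 1 by brute-force minimization of Prob. (4) on a grid; this is our own illustration under the reconstruction above, not code from the paper.

```python
import numpy as np

def prox_f(z, a, nu, gamma):
    # Proximity operator of gamma * f_a^nu, Eq. (3): a soft-thresholding-like map.
    if z < a - nu * gamma:
        return z + nu * gamma
    if z > a + gamma:
        return z - gamma
    return a

# Brute-force check of Lemma 1 on random data.
rng = np.random.default_rng(0)
a_vals = rng.integers(0, 100, size=15).astype(float)
q = 0.8
nu = q / (1 - q)
grid = np.linspace(a_vals.min(), a_vals.max(), 100001)
obj = sum(np.where(grid < a, nu * (a - grid), grid - a) for a in a_vals)
x_star = grid[np.argmin(obj)]
s = np.sort(a_vals)
k = int(np.ceil(q * len(a_vals)))                    # ceil(qN), here 12
assert s[k - 1] - 1e-6 <= x_star <= s[k] + 1e-6      # x* in [a_s(k), a_s(k+1)]
```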

III. DISTRIBUTED QUANTILE COMPUTATION

Consider a network of N agents linked through an undirected connected graph G = ({1,..,N}, E) where E is the set of edges: E = {e = {i,j} : i and j are connected}. N_i is the neighborhood of i, that is, the set of agents j such that {i,j} ∈ E (i ∉ N_i), and d_i = |N_i| is the degree of i. Let each agent i have a (local) point a_i ∈ R; the objective of this section is to design an algorithm that distributively computes a q-quantile estimator of the vector (a_i)_{i=1,..,N}. To do so, we proceed as follows: i) we reformulate Problem (4) so that it is distributed-ready; and ii) we apply ADMM to it.

Distributed formulation. Each agent i has a local point a_i and thus can maintain the local function f_{a_i}^ν. To lead to distributed algorithms, the new problem must feature a local variable x(i) ∈ R at each agent/function i; thus, in order to recover the solutions of Prob. (4), one must add a consensus constraint x(1) = x(2) = .. = x(N). There are several ways to formulate consensus constraints over graphs [13], [24], [16]; for brevity and in line with our target algorithms, we proceed as in [24]: for each edge e = {i,j} ∈ E, we create a size-2 additional variable y(e) = [y_1(e), y_2(e)]^T ∈ R² such that y_1(e) = x(i) and y_2(e) = x(j). Then, imposing y_1(e) = y_2(e) for all e ∈ E, by adding to the cost function the convex indicator function ι(y(e)) = 0 if y_1(e) = y_2(e) and +∞ elsewhere, is the same as imposing the consensus as soon as the graph is connected. For clarity, at edge e = {i,j} we write y_1(e) = y(i,e) if i < j and y_2(e) = y(i,e) otherwise. We write x = [x(1),..,x(N)]^T ∈ R^N and y = [y(1),..,y(|E|)]^T ∈ R^{2|E|}; similarly, we will adopt the same notation for any variable of the same size. Then, our distributed-ready problem reads:

    min_{x ∈ R^N, y ∈ R^{2|E|}}  Σ_{i=1}^N f_{a_i}^ν(x(i))  +  Σ_{e∈E} ι(y(e))      (5)
                                     =: F(x)                     =: G(y)
    s.t.  Mx = y,

where M = [M_1 ; .. ; M_{|E|}] with M_e the 2×N matrix such that M_e(1,i) = M_e(2,j) = 1 if e = {i,j} and zeros elsewhere, so that M_e x = [x(i), x(j)]^T = y(e). This makes M of size 2|E| × N and full column-rank.

Algorithm. As F and G have explicit proximal operators, it is natural to use ADMM on Prob. (5), which leads to the following algorithm after simplification of some variables.

Distributed quantile computation
Init.: x_0, x̄_0, z_0, λ_0 ∈ R^N, ρ > 0, ν = q/(1−q).
At each iteration k:
  1. Each agent i computes
       z_k(i) = (x̄_k(i) + x_k(i))/2 − λ_k(i)
       x_{k+1}(i) = prox_{f_{a_i}^ν/(ρ d_i)}(z_k(i))
                  = z_k(i) + ν/(ρ d_i)   if z_k(i) < a_i − ν/(ρ d_i)
                    z_k(i) − 1/(ρ d_i)   if z_k(i) > a_i + 1/(ρ d_i)
                    a_i                  elsewhere
  2. All agents send their version of x_{k+1} to their neighbors:
       x̄_{k+1}(i) = (1/d_i) Σ_{j∈N_i} x_{k+1}(j)
  3. Each agent i updates
       λ_{k+1}(i) = λ_k(i) + (x_{k+1}(i) − x̄_{k+1}(i))/2

Theorem 1: Let q ∈ ]0,1[, ν = q/(1−q), and ρ > 0. The Distributed quantile computation algorithm converges to a consensus over a value verifying Eq. (1):

    x_k → (x*, x*, .., x*)  with x* verifying Eq. (1).

Proof: This theorem comes from the succession of three results: i) the Distributed quantile computation algorithm is an instantiation of ADMM on Prob. (5), which is of the form min_{x,y} F(x) + G(y) s.t. Mx = y with F and G convex, so the algorithm converges to a solution of (5) [17], [18]; ii) by construction of Prob. (5), its solutions have the form [x*, x*, .., x*]^T where x* is a solution of Prob. (4); and iii) by Lemma 1, the solutions of (4) verify Eq. (1).

Remark 1: An efficient initialization of the algorithm is to set x_0(i) = x̄_0(i) = a_i. λ_0 can be taken as the null vector, and ρ ≈ 1. This kind of initialization is performed in the numerical experiments, along with discussions over the choice of ρ.
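As an illustration, the following self-contained NumPy simulation runs the synchronous algorithm on a random connected graph. It is our own sketch: the halving factors in the z- and λ-updates follow our reading of the ADMM simplification (they are garbled in the source transcription), and the graph construction and names are illustrative.

```python
import numpy as np

def prox_f(z, a, nu, gamma):
    # Proximity operator of gamma * f_a^nu, Eq. (3).
    if z < a - nu * gamma:
        return z + nu * gamma
    if z > a + gamma:
        return z - gamma
    return a

rng = np.random.default_rng(1)
N, q, rho, n_iter = 15, 0.8, 0.1, 1000
nu = q / (1 - q)
a = rng.integers(0, 100, size=N).astype(float)  # the agents' local values

# Random connected undirected graph: a ring plus extra edges, 70 in total.
edges = {tuple(sorted((i, (i + 1) % N))) for i in range(N)}
while len(edges) < 70:
    edges.add(tuple(sorted(rng.choice(N, size=2, replace=False))))
nbrs = [[j for e in edges for j in e if i in e and j != i] for i in range(N)]
deg = np.array([len(nb) for nb in nbrs])

x = a.copy()     # x_0(i) = a_i, as in Remark 1
xbar = a.copy()  # xbar_0(i) = a_i
lam = np.zeros(N)
for k in range(n_iter):
    z = (xbar + x) / 2 - lam
    x = np.array([prox_f(z[i], a[i], nu, 1 / (rho * deg[i])) for i in range(N)])
    xbar = np.array([x[nb].mean() for nb in nbrs])
    lam += (x - xbar) / 2
print(x)  # all entries should agree on an 80% quantile estimator of a
```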
IV. ASYNCHRONOUS GOSSIPING FOR QUANTILE ESTIMATION

Randomized versions of ADMM have been introduced in [15] in order to produce asynchronous, gossip-like distributed algorithms; this kind of result was later extended to more general algorithms in [22], [25]. In this section, we build upon the distributed problem (5) and a careful application of the Asynchronous Distributed ADMM of [15] to produce a gossip-based algorithm for quantile estimation. As our formulation is edge-based, at each iteration k we select an edge e_k = {i,j} ∈ E following an i.i.d. random process; then, only agents i and j update and exchange during this iteration (see [15] for derivation details). This kind of gossiping (drawing two nodes which average their values) is similar to the Random Gossip averaging algorithm [4].

Asynchronous Gossip for quantile computation
Init.: x_0, z_0 ∈ R^N, x̄_0 ∈ R^{|E|}, λ_0 ∈ R^{2|E|}, ρ > 0, ν = q/(1−q).
At each iteration k, draw an edge e_k = {i,j} ∈ E:
  1. Agents v ∈ {i,j} compute
       z_k(v) = (1/d_v) Σ_{n∈N_v} ( x̄_k({v,n}) − λ_k(v,{v,n}) )
       x_{k+1}(v) = prox_{f_{a_v}^ν/(ρ d_v)}(z_k(v))
                  = z_k(v) + ν/(ρ d_v)   if z_k(v) < a_v − ν/(ρ d_v)
                    z_k(v) − 1/(ρ d_v)   if z_k(v) > a_v + 1/(ρ d_v)
                    a_v                  elsewhere
  2. Agents i and j exchange their copy of x_{k+1}:
       x̄_{k+1}(e_k) = (x_{k+1}(i) + x_{k+1}(j))/2
  3. Agents v ∈ {i,j} update
       λ_{k+1}(v, e_k) = λ_k(v, e_k) + ρ (x_{k+1}(v) − x̄_{k+1}(e_k))
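A matching sketch of the asynchronous variant, under the same assumptions as the synchronous one; we take ρ = 1 (Remark 2 suggests ρ ≈ 1, and this sidesteps the scaled-versus-unscaled dual ambiguity in the transcribed λ-update) and a symmetric edge initialization, which is our choice rather than the paper's.

```python
import numpy as np

def prox_f(z, a, nu, gamma):
    # Proximity operator of gamma * f_a^nu, Eq. (3).
    if z < a - nu * gamma:
        return z + nu * gamma
    if z > a + gamma:
        return z - gamma
    return a

rng = np.random.default_rng(2)
N, q, rho, n_iter = 15, 0.8, 1.0, 30000
nu = q / (1 - q)
a = rng.integers(0, 100, size=N).astype(float)

# Same random connected graph construction as in the synchronous sketch.
edges = {tuple(sorted((i, (i + 1) % N))) for i in range(N)}
while len(edges) < 70:
    edges.add(tuple(sorted(rng.choice(N, size=2, replace=False))))
edges = sorted(edges)
inc = [[e for e, ed in enumerate(edges) if v in ed] for v in range(N)]
deg = np.array([len(ie) for ie in inc])

x = a.copy()
xbar = np.array([(a[i] + a[j]) / 2 for i, j in edges])  # one shared value per edge
lam = np.zeros((N, len(edges)))  # lam[v, e], meaningful for v an endpoint of e
for k in range(n_iter):
    e = rng.integers(len(edges))  # i.i.d. uniform edge selection
    i, j = edges[e]
    for v in (i, j):
        z = np.mean([xbar[d] - lam[v, d] for d in inc[v]])
        x[v] = prox_f(z, a[v], nu, 1 / (rho * deg[v]))
    xbar[e] = (x[i] + x[j]) / 2  # the two drawn agents average their copies
    for v in (i, j):
        lam[v, e] += rho * (x[v] - xbar[e])
print(np.sort(x))  # the agents approach a consensus on an 80% quantile estimator
```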

Theorem 2: Let q ∈ ]0,1[, ν = q/(1−q), and ρ > 0. If (e_k) is an i.i.d. sequence valued in E such that for all e ∈ E, P[e_1 = e] > 0, then the Asynchronous Gossip for quantile computation algorithm converges almost surely to a consensus over a value verifying Eq. (1):

    x_k → (x*, x*, .., x*) a.s.  with x* verifying Eq. (1).

Proof: As the Asynchronous Gossip for quantile computation algorithm is obtained by randomized coordinate descent on the distributed ADMM, [15, Th. 3] gives the almost sure convergence to a solution of Prob. (4); and, as for Th. 1, we get that the solutions of (4) verify Eq. (1) by Lemma 1.

Remark 2: An efficient initialization of the algorithm is to set x̄_0({i,j}) = a_i. λ_0 can be taken null and ρ ≈ 1.

Remark 3: For different gossiping schemes, such as asynchronous one-way communication (e.g., broadcast) over undirected graphs, we refer the reader to [25].

Fig. 1: Distributed quantile computation on different data; sensor values versus the number of iterations. (a) 80% quantile: x* = 70.71 ∈ [69, 72]. (b) 80% quantile: x* = 88.

V. NUMERICAL EXPERIMENTS

In this section, we illustrate the features of both our distributed synchronous and asynchronous algorithms. In all experiments, we consider N = 15 agents linked by a connected undirected graph with 70 edges (out of 105 possible). The values (a_i) of the sensors are taken randomly among the integers between 0 and 100.

In Fig. 1, we represent all the sensors' values (x_k(i)) versus the number of iterations for the Distributed quantile computation with q = 0.8, i.e., a quantile at 80% is sought. We set ν = q/(1−q) as prescribed and ρ = 0.1. In Fig. 1a, the objective in the sense of Eq. (1) is to reach a value in [69, 72], while in Fig. 1b, we manually set a_{s(⌈qN⌉)} = a_{s(⌈qN⌉+1)} in the (a_i) so that the objective is to reach exactly 88. We observe that the agents have the sought behavior; the convergence is slightly slower in the second case, but not seriously so. In both cases, one can notice that two goals collide in order to reach consensus over a quantile: i) the consensus itself, being close to one's neighbors; and ii) minimizing the objective; the relative importance of these two sub-problems is actually tuned by the free parameter ρ.

In Fig. 2, we keep everything as in Fig. 1a but, instead of taking ρ = 0.1, we use ρ = 3 in Fig. 2a and ρ = 0.01 in Fig. 2b. In Fig. 2c, we plot, for both ρ = 3 and ρ = 0.01, at each iteration: i) the consensus error, equal to the distance between the agents' values and the mean of the agents' values, ‖x_k − (1/N) Σ_i x_k(i) 1‖; ii) the objective error, equal to the difference between the mean value and the sought optimum, |(1/N) Σ_i x_k(i) − x*|; and iii) the total error, equal to the distance between the agents' values and the sought optimum, ‖x_k − x* 1‖. One can notice that the lower ρ, the slower the consensus; however, when ρ gets too big, the consensus is very fast but the agents move towards the sought objective more slowly. In conclusion, while any value of ρ brings convergence theoretically and practically, the convergence times can be very different. Heuristics to choose a correct parameter have appeared in the literature [26], but no distributed version exists to the best of our knowledge.
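For reference, the three error measures plotted in Fig. 2c can be computed as follows (our reconstruction of the garbled norms, with 1 denoting the all-ones vector):

```python
import numpy as np

def errors(x, x_star):
    # x: vector of the agents' current values; x_star: a sought optimum of (4).
    m = x.mean()
    consensus_err = np.linalg.norm(x - m)    # || x_k - (1/N) sum_i x_k(i) 1 ||
    objective_err = abs(m - x_star)          # | (1/N) sum_i x_k(i) - x* |
    total_err = np.linalg.norm(x - x_star)   # || x_k - x* 1 ||
    return consensus_err, objective_err, total_err
```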
In Fig. 3, we illustrate the performance of our Asynchronous Gossip for quantile computation under the same configuration as Fig. 1a. In Fig. 3a, we display the agents' values over time; we notice that the convergence has a quite different shape than for the synchronous algorithm. In Fig. 3b, we plot the norm of the error to the mean of the reached value; as this value is an acceptable quantile, this evaluates both the error towards a consensus and towards an acceptable value. We compare our distributed synchronous and asynchronous algorithms through the decrease of this error versus the number of full uses of the communication graph, that is, 1 per iteration for the synchronous algorithm and 1/|E| = 1/70 ≈ 0.014 for the asynchronous one. Although the synchronous algorithm still outperforms its asynchronous counterpart in this setup, both reach machine precision within a few hundred full graph uses.

VI. CONCLUSION

In this note, we proposed a distributed algorithm for quantile computation along with an asynchronous gossip-based one. The derivation and convergence proofs of these algorithms rely on the application of (randomized) ADMM to a well-chosen distributed problem.

APPENDIX A
DERIVATION OF THE PROXIMAL OPERATOR OF f_a^ν

For any γ > 0 and any z ∈ R, the proximity operator of f_a^ν is defined as

    x = prox_{γ f_a^ν}(z) = argmin_{w∈R} g(w),   where g(w) := f_a^ν(w) + (1/2γ) (w − z)².

Fig. 2: Distributed quantile computation with different ρ. (a) ρ = 3. (b) ρ = 0.01. (c) Comparison of the consensus, objective, and total errors for ρ = 3 and ρ = 0.01.

Fig. 3: Asynchronous Gossip for quantile computation. (a) Sensor values versus the number of iterations. (b) Error ‖x_k − x* 1‖ to consensus over an admissible value versus the number of full graph uses, for the Asynchronous Gossip and the Distributed Synchronous algorithms.

The derivation of this operator is similar to that of the soft-thresholding operator as the proximity operator of the ℓ1-norm. Using the fact that x is the (unique) point such that 0 belongs to the subdifferential of g (see [23, Chaps. 16, 26, 27]), we obtain

    0 ∈ ∂f_a^ν(x) + (1/γ)(x − z).

Now let us look at this equation in three cases: x < a, x > a, and x = a.

  • x < a: ∂f_a^ν(x) = {−ν}, thus 0 = −ν + (1/γ)(x − z), so x = z + νγ. This corresponds to the case where x < a, that is z + νγ < a, otherwise said z < a − νγ.
  • x > a: ∂f_a^ν(x) = {1}, thus 0 = 1 + (1/γ)(x − z), so x = z − γ. Similarly, this corresponds to the case z > a + γ.
  • x = a: this final case corresponds to the values of z not covered by the previous cases.

APPENDIX B
MINIMIZERS OF PROBLEM (4)

For any real number x, define the following quantities:

    B(x) := Card {a_i : a_i < x; i = 1,..,N},
    E(x) := Card {a_i : a_i = x; i = 1,..,N},      (6)
    A(x) := Card {a_i : a_i > x; i = 1,..,N}.

Thanks to the convexity of the problem, Fermat's rule tells us that the minimizers of Problem (4) are the zeros of the subdifferential of f (see [23, Chaps. 16, 26]):

    ∂f(x) = Σ_{i: x < a_i} (−ν) + Σ_{i: x > a_i} 1 + Σ_{i: x = a_i} [−ν, 1]
          = −ν A(x) + B(x) + E(x) [−ν, 1]
          = −ν (N − B(x) − E(x)) + B(x) + E(x) [−ν, 1]
          = −ν N + (ν + 1) B(x) + [0, (1 + ν) E(x)].

Now, take ν = q/(1−q) and let us look at the zeros. First, due to the convexity and polyhedral form of f, the zeros of ∂f are necessarily either one of the (a_i) or a segment of the form [a_{s(i)} ; a_{s(i+1)}]. If E(x) = 0, then 0 ∈ ∂f(x) implies that B(x) = ν/(ν+1) N = qN. As B(x) is an integer, it is necessarily equal to qN; as there are exactly qN entries of a below x, we get that x ∈ ]a_{s(qN)} ; a_{s(qN+1)}[ if q ≥ 1/N, and x < a_{s(1)} = min_i a_i otherwise, which verifies the sought condition (1).

Then, if E(x) ≠ 0, define x− (resp. x+) as a real number strictly between x and the next entry of a strictly smaller (resp. greater) than x. Then, E(x−) = 0 and ∂f(x−) = −νN + (ν+1)B(x−); the same holds for x+. A sufficient condition for x to be a point such that E(x) ≠ 0 and 0 ∈ ∂f(x) is that ∂f(x−) < 0 and ∂f(x+) ≥ 0. With the prescribed value of ν, we get that ∂f(x−) < 0 if and only if B(x−) < qN, and ∂f(x+) ≥ 0 if and only if B(x+) ≥ qN. Thus, with our choice of ν, x is a zero of ∂f if there are at least qN values below or equal to x (from the x+ part) and strictly less than qN values strictly below it (from the x− part); thus, the condition is verified.

REFERENCES

[1] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Association, vol. 69, no. 345, pp. 118–121, 1974.
[2] J. N. Tsitsiklis, D. P. Bertsekas, and M. Athans, "Distributed asynchronous deterministic and stochastic gradient optimization algorithms," IEEE Transactions on Automatic Control, vol. 31, no. 9, pp. 803–812, 1986.
[3] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson, S. Shenker, H. Sturgis, D. Swinehart, and D. Terry, "Epidemic algorithms for replicated database maintenance," in ACM Symposium on Principles of Distributed Computing (PODC), 1987, pp. 1–12.
[4] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, "Randomized gossip algorithms," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2508–2530, 2006.
[5] A. D. G. Dimakis, A. D. Sarwate, and M. J. Wainwright, "Geographic gossip: Efficient averaging for sensor networks," IEEE Transactions on Signal Processing, vol. 56, no. 3, pp. 1205–1216, 2008.
[6] T. C. Aysal, M. E. Yildiz, A. D. Sarwate, and A. Scaglione, "Broadcast gossip algorithms for consensus," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2748–2761, 2009.
[7] F. Iutzeler, P. Ciblat, and W. Hachem, "Analysis of sum-weight-like algorithms for averaging in wireless sensor networks," IEEE Transactions on Signal Processing, vol. 61, no. 11, pp. 2802–2814, 2013.
[8] S. Wu and M. G. Rabbat, "Broadcast gossip algorithms for consensus on strongly connected digraphs," IEEE Transactions on Signal Processing, vol. 61, no. 16, pp. 3959–3971, 2013.
[9] R. Karp, C. Schindelhauer, S. Shenker, and B. Vöcking, "Randomized rumor spreading," in IEEE Symposium on Foundations of Computer Science (FOCS), 2000, pp. 565–574.
[10] G. Giakkoupis and T. Sauerwald, "Rumor spreading and vertex expansion," in ACM-SIAM Symposium on Discrete Algorithms (SODA), 2012, pp. 1623–1641.
[11] F. Iutzeler, P. Ciblat, and J. Jakubowicz, "Analysis of max-consensus algorithms in wireless channels," IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 6103–6107, 2012.
[12] J. Cortés, "Distributed algorithms for reaching consensus on general functions," Automatica, vol. 44, no. 3, pp. 726–737, 2008.
[13] I. Schizas, A. Ribeiro, and G. Giannakis, "Consensus in ad hoc WSNs with noisy links - Part I: Distributed estimation of deterministic signals," IEEE Transactions on Signal Processing, vol. 56, no. 1, pp. 350–364, Jan. 2008.
[14] E. Wei and A. Ozdaglar, "Distributed alternating direction method of multipliers," in IEEE Conference on Decision and Control (CDC), 2012, pp. 5445–5450.
[15] F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem, "Asynchronous distributed optimization using a randomized alternating direction method of multipliers," in 52nd IEEE Conference on Decision and Control (CDC), 2013, pp. 3671–3676.
[16] E. Wei and A. Ozdaglar, "On the O(1/k) convergence of asynchronous distributed alternating direction method of multipliers," in IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013, pp. 551–554.
[17] P.-L. Lions and B. Mercier, "Splitting algorithms for the sum of two nonlinear operators," SIAM Journal on Numerical Analysis, vol. 16, no. 6, pp. 964–979, 1979.
[18] J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators," Mathematical Programming, vol. 55, pp. 293–318, 1992.
[19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[20] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.
[21] L. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms," Journal of Optimization Theory and Applications, vol. 158, no. 2, pp. 460–479, 2013.
[22] P. L. Combettes and J.-C. Pesquet, "Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping," SIAM Journal on Optimization, vol. 25, no. 2, pp. 1221–1248, 2015.
[23] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[24] F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem, "Explicit convergence rate of a distributed alternating direction method of multipliers," IEEE Transactions on Automatic Control, vol. 61, no. 4, pp. 892–904, 2016.
[25] P. Bianchi, W. Hachem, and F. Iutzeler, "A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization," IEEE Transactions on Automatic Control, vol. 61, no. 10, pp. 2947–2957, 2016.
[26] Z. Xu, M. A. Figueiredo, and T. Goldstein, "Adaptive ADMM with spectral penalty parameter selection," arXiv preprint arXiv:1605.07246, 2016.