RESISTANCE-BASED PERFORMANCE ANALYSIS OF THE CONSENSUS ALGORITHM OVER GEOMETRIC GRAPHS


ENRICO LOVISARI, FEDERICA GARIN, AND SANDRO ZAMPIERI

Abstract. The performance of the linear consensus algorithm is studied by using a Linear Quadratic (LQ) cost. The objective is to understand how the communication topology influences this algorithm. This is achieved by exploiting an analogy between Markov chains and electrical resistive networks. Indeed, this permits us to uncover the relation between the LQ performance cost and the average effective resistance of a suitable electrical network and, moreover, to show that, if the communication graph fulfills some local properties, then its behavior can be approximated by that of a grid, a graph whose associated LQ cost is well known.

Key words. Multi-agent systems, consensus algorithm, distributed averaging, large-scale graphs

AMS subject classifications. 68R10, 90B10, 94C15, 90B18, 05C50.

1. Introduction. The last two decades have witnessed a great effort by several scientific communities in the development and analysis of multi-agent systems. Large numbers of simple intercommunicating and interacting entities can be profitably used to model rather different applications: to recall just some of them, load balancing [1], coordinated control [2], distributed estimation [3] and distributed calibration for sensor networks [4]. A tool which has been widely proposed for solving these problems is the linear consensus algorithm, a powerful and flexible method for obtaining averages in a distributed way with limited communication effort. This algorithm is used when there is a set of agents, each with a scalar value, and the goal is to drive all agents to a common state, under the constraint that agents can exchange information only locally.
To make these concepts precise, assume that the agents are labeled by the elements of the set $V = \{1, \dots, N\}$ and that the graph $\mathcal{G} = (V, E)$, where $E \subseteq V \times V$, describes which communication links between the agents are allowed. More precisely, we say that $(u, v) \in E$ if and only if agent $v$ can send information to $u$. The graph $\mathcal{G}$ is called the communication graph. In the linear consensus algorithm, at each iteration the agents send their current state to their neighbors, and then update their state as a suitable convex combination of the received messages. More precisely, if $x_u(t)$ denotes the state of agent $u \in V$ at time $t \in \mathbb{N}$, then

$$x_u(t+1) = \sum_{v \in V} P_{uv} x_v(t), \qquad (1.1)$$

where $P_{uv}$ are the entries of a stochastic matrix $P$. (A matrix $P$ is said to be stochastic if $P_{uv} \ge 0$ for all $u, v \in V$ and $\sum_{v \in V} P_{uv} = 1$ for all $u \in V$.)

Footnotes: The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013] under grant agreements n. 257462 HYCON2 Network of Excellence and n. 223866 FeedNetBack. Author addresses: DEI (Department of Information Engineering), Università di Padova, Via Gradenigo 6/b, 35131 Padova, Italy, zampi@dei.unipd.it; INRIA Grenoble Rhône-Alpes, 655 av. de l'Europe, Montbonnot, 38334 Saint-Ismier cedex, France, federica.garin@inria.fr; DEI (Department of Information Engineering), Università di Padova, Via Gradenigo 6/b, 35131 Padova, Italy, zampi@dei.unipd.it.

In a more compact form we can

write

$$x(t+1) = P x(t), \qquad (1.2)$$

where $x(t) \in \mathbb{R}^N$ denotes the vector collecting all agents' states. The constraint imposed by the communication graph $\mathcal{G}$ is enforced by requiring that $\mathcal{G}_P$ is a subgraph of $\mathcal{G}$, where $\mathcal{G}_P = (V, E_P)$ is the graph associated with the matrix $P$, defined by letting $(u, v) \in E_P$ if and only if $P_{uv} \ne 0$. The stochastic matrix $P$ is said to be irreducible if the associated graph is strongly connected, namely, for all $u, v \in V$ there exists a path in $\mathcal{G}_P$ connecting $u$ to $v$, and it is said to be aperiodic if the greatest common divisor of the lengths of all cycles in $\mathcal{G}_P$ is one. Notice that the presence of a self-loop, namely $P_{uu} > 0$ for some $u \in V$, ensures aperiodicity. As is well known from Perron-Frobenius theory [5], if $P$ is irreducible and aperiodic, then $P$ has the eigenvalue $1$ with algebraic multiplicity $1$, and all other eigenvalues have absolute value strictly smaller than $1$, and so we have that

$$P^t \to \mathbf{1}\pi^T \quad \text{as } t \to \infty,$$

where the vector $\pi$ is the invariant measure of the matrix $P$, namely the left eigenvector of $P$ corresponding to the eigenvalue $1$, scaled so that $\sum_u \pi_u = 1$. Consequently, under these hypotheses, the states of the consensus algorithm (1.2) converge to the same value, $x_u(t) \to \alpha$ as $t \to \infty$, where $\alpha = \pi^T x(0)$. The convergence to the consensus value is exponential, with rate given by the second largest eigenvalue modulus of the matrix $P$,

$$\rho(P) = \sup\{ |\lambda| : \lambda \in \sigma(P),\ \lambda \ne 1 \},$$

where $\sigma(P)$ is the spectrum of $P$. For this reason, the value $\rho(P)$ is a classical performance cost for the algorithm: the closer $\rho(P)$ is to zero, the faster the algorithm. However, as recent papers have pointed out [6, 7], this performance index is not the only possible choice for evaluating the performance of the algorithm. Different costs arise from the different specific problems in which the consensus algorithm is used.
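As a concrete illustration, the iteration (1.2), the limit $P^t \to \mathbf{1}\pi^T$, and the rate $\rho(P)$ can be checked numerically. A minimal sketch with an assumed 3-node stochastic matrix (not taken from the paper):

```python
import numpy as np

# Assumed irreducible, aperiodic stochastic matrix on 3 agents.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Invariant measure: left eigenvector of P for eigenvalue 1, summing to 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(abs(w - 1.0))])
pi = pi / pi.sum()

# P^t -> 1 pi^T, so x(t) = P^t x(0) -> alpha 1 with alpha = pi^T x(0).
x0 = np.array([1.0, 2.0, 3.0])
x = x0.copy()
for _ in range(100):
    x = P @ x
print(np.allclose(x, pi @ x0))  # -> True: every agent reaches pi^T x(0)

# Convergence rate: second largest eigenvalue modulus rho(P).
rho = max(abs(l) for l in w if not np.isclose(l, 1.0))
print(round(rho, 6))  # -> 0.5 for this matrix
```

Here the consensus value is $\pi^T x(0)$, not the plain average, because this $P$ is not doubly stochastic.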
Moreover, it can be shown [8, 9] that, by considering different performance indices, it is possible to obtain different optimal graph topologies. In this paper, we propose a Linear Quadratic (LQ) cost, a performance index widely used in the control community. To evaluate how fast $P^t$ converges to its limit value $\mathbf{1}\pi^T$ we propose the index

$$J(P) := \frac{1}{N} \sum_{t \ge 0} \| P^t - \mathbf{1}\pi^T \|_F^2 = \frac{1}{N} \operatorname{trace} \sum_{t \ge 0} (I - \mathbf{1}\pi^T)^T (P^T)^t P^t (I - \mathbf{1}\pi^T),$$

where $\|M\|_F := \sqrt{\operatorname{trace}(M M^T)}$ is the Frobenius norm of a matrix, and where we added the normalizing factor $1/N$ for reasons which will be clarified in the following. This cost appears in two different contexts. Assume first that we want to evaluate the speed of convergence of the consensus algorithm by the $\ell^2$ norm of the transient, namely

$$\frac{1}{N} \sum_{t \ge 0} \| x(t) - x(\infty) \|^2.$$
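Since the series defining $J(P)$ converges geometrically (its terms decay like $\rho(P)^{2t}$), it can be evaluated by truncation. A sketch with an assumed example matrix (not from the paper):

```python
import numpy as np

def lq_cost(P, t_max=1000):
    """Truncated evaluation of J(P) = (1/N) sum_t ||P^t - 1 pi^T||_F^2."""
    N = P.shape[0]
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(abs(w - 1.0))])
    pi = pi / pi.sum()
    limit = np.outer(np.ones(N), pi)
    J, Pt = 0.0, np.eye(N)
    for _ in range(t_max):
        J += np.linalg.norm(Pt - limit, 'fro') ** 2
        Pt = Pt @ P
    return J / N

# Assumed 3-node example matrix (not from the paper).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(lq_cost(P))  # a finite positive number; terms decay like rho(P)^(2t)
```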

Notice that this $\ell^2$ norm depends on the initial condition $x(0)$. For this reason, we assume that the initial condition is a random variable with zero mean and covariance matrix $E[x(0)x(0)^T] = I$. We can then consider the expected value of the $\ell^2$ norm of the transient, which is a function only of the matrix $P$. Indeed, by some simple computations [8] it can be shown that

$$\frac{1}{N} E \sum_{t \ge 0} \| x(t) - x(\infty) \|^2 = J(P).$$

The cost $J(P)$ also appears in the context of noisy consensus [6, 8, 10]. Consider a network of agents implementing the consensus algorithm in which the update is affected by additive noise, so that the actual update of the state is

$$x(t+1) = P x(t) + n(t),$$

where $n(t)$ is a white random process. Assume that $E[n(t)] = 0$ and $E[n(t)n(t)^T] = I$ for all $t \in \mathbb{N}$, and that the initial condition is random and uncorrelated from the noise process. We are interested in the dispersion of $x(t)$. If we measure it by evaluating the displacement of $x(t)$ from the weighted average $\sum_i \pi_i x_i(t)$, namely by introducing the vector

$$e(t) = (I - \mathbf{1}\pi^T)\, x(t),$$

then it can be shown that

$$\lim_{t \to \infty} \frac{1}{N} E\left[\| e(t) \|^2\right] = J(P).$$

Thus, the proposed LQ cost also characterizes the spreading of the asymptotic value of the state vector around its weighted average in a noisy network. One could consider the problem of determining the matrix $P$ satisfying a given constraint and minimizing the index $J(P)$. In this paper we will instead consider a different problem: we will provide estimates of $J(P)$ which permit us to understand how this index depends on the structure of $P$, and more precisely on the topological properties of the graph $\mathcal{G}_P$. We will unveil this dependence by proving that $J(P)$ is related to the effective resistance of a suitable electrical network, a geometric parameter which depends on the topology only, and not on the particular entries of $P$.
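The noisy-consensus interpretation can be checked against the series definition of $J(P)$: with $Q = I - \mathbf{1}\pi^T$ one has $QP^s = P^s - \mathbf{1}\pi^T$, so the steady-state covariance of $e(t)$ is $\sum_{s \ge 0} (QP^s)(QP^s)^T$ and its normalized trace recovers $J(P)$. A sketch with an assumed example matrix (not from the paper):

```python
import numpy as np

# Assumed reversible example matrix (not from the paper).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
N = P.shape[0]
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(abs(w - 1.0))])
pi = pi / pi.sum()
Q = np.eye(N) - np.outer(np.ones(N), pi)   # projection: e(t) = Q x(t)

# Steady-state dispersion lim (1/N) E||e(t)||^2 from the noise-driven
# covariance sum_s (Q P^s)(Q P^s)^T, with E[n n^T] = I.
lim_dispersion, Pt = 0.0, np.eye(N)
for _ in range(1000):
    M = Q @ Pt
    lim_dispersion += np.trace(M @ M.T)
    Pt = Pt @ P
lim_dispersion /= N

# Direct truncated evaluation of J(P) from its definition.
J = sum(np.linalg.norm(np.linalg.matrix_power(P, t)
                       - np.outer(np.ones(N), pi), 'fro') ** 2
        for t in range(1000)) / N
print(np.isclose(lim_dispersion, J))  # -> True
```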
Since the electrical analogy holds only if $P$ is reversible [11], in this paper we will restrict to this class of matrices. An important subclass of reversible matrices is that of symmetric matrices, for which stronger and simpler results hold true. The results for the symmetric case were already described, without proof details, in our paper [12]. Using these results we will show that, analogously to what happens for the convergence rate [13, 14], under some assumptions a large class of graphs, called geometric graphs, which can be seen as perturbed grids, exhibits a behavior of the cost $J(P)$, as a function of the number of nodes, which depends on the geometric dimension of the graph. In particular, if the graph has geometric dimension one, namely it is a geometric graph on a segment, then $J(P)$ grows linearly in the number of nodes, while, if the graph has geometric dimension two, namely it is a geometric graph on a square, then $J(P)$ grows logarithmically in the number of nodes. Finally, if the graph has geometric dimension three (or more), namely it is a geometric graph on a

cube, then $J(P)$ is bounded from above by a constant independent of the number of nodes. This result is based on (and extends) an analogous result [6] which holds for torus graphs. In this way we show that the spatial invariance of torus graphs is not a necessary requirement for this kind of behavior of $J(P)$. The paper is organized as follows. In Section 2 we give some basic notions on reversible consensus matrices. Section 3 is devoted to recalling the analogy between reversible consensus matrices and electrical networks, which allows us to state the main results. In Section 4 we present a particularly appealing application of our results, showing that the performance cost in a family of geometric graphs depends only on the dimensionality of the graphs. The proofs of the results are postponed to Section 5 and Section 6. Finally, in Section 7 we draw conclusions.

1.1. Notation. In this paper we will denote by $\mathbb{R}$ the set of real numbers and by $\mathbb{R}_+$ the set of nonnegative real numbers. Vectors will be denoted by boldface letters, e.g. $\mathbf{x}$, while entries of vectors and scalars will be in italic font. We denote by $\mathbf{1}$ the column vector with all entries equal to $1$ and of suitable dimension, and by $e_u$ the $u$-th element of the canonical basis of $\mathbb{R}^N$, i.e., the vector whose $u$-th entry is $1$ and all other entries are $0$. Given $v \in \mathbb{R}^N$, the symbol $\operatorname{diag}(v)$ denotes the $N \times N$ diagonal matrix whose $(k,k)$-th entry is the $k$-th entry of $v$. Given $v, w \in \mathbb{R}^N$ and a positive definite matrix $M$, we will denote the inner product and the norm weighted by $M$ as $\langle v, w \rangle_M := v^T M w$ and $\|v\|_M := \sqrt{v^T M v}$, respectively.

2. Preliminaries. In this paper we will analyze the cost function $J(P)$ when $P$ is an irreducible and aperiodic stochastic matrix, so that $J(P)$ is finite. In fact, aperiodicity will be a consequence of the stronger assumption which imposes that the diagonal elements of $P$ are all positive.
Observe that this condition is not restrictive for the consensus algorithm, as it only assumes that in the state update (1.1) each agent gives a positive weight to its own current state, and this does not require additional communication. For this reason, throughout the paper we will use the following definition.

Definition 2.0.1. We say that a matrix $P$ is a consensus matrix if it is stochastic and irreducible, and it satisfies $P_{uu} > 0$ for all $u$.

Recall [5] that a consensus matrix has a dominant eigenvalue $1$ with algebraic multiplicity $1$. As already pointed out, the corresponding left eigenvector $\pi$, normalized so that $\sum_u \pi_u = 1$, is called the invariant measure of $P$ (a name coming from the interpretation of $P$ as the transition probability matrix of a Markov chain), and it has all entries strictly positive. A useful object is the Laplacian of a matrix $P$, which is defined as $L := I - P \in \mathbb{R}^{N \times N}$. It is immediate to check that $L\mathbf{1} = 0$ and $L_{uv} \le 0$ for $u \ne v$. Moreover, since $P$ has $1$ as an eigenvalue with algebraic multiplicity one, $\dim(\ker L) = 1$. One of the most important classes of consensus matrices is that of reversible matrices, a name which comes, again, from the definition of reversible Markov chains [5].

Definition 2.0.2. Let $P$ be a consensus matrix with invariant measure $\pi$. Then $P$ is said to be reversible if

$$\pi_u P_{uv} = \pi_v P_{vu} \quad \forall (u, v) \in V \times V,$$

or equivalently if

$$\Pi P = P^T \Pi, \qquad (2.1)$$

where $\Pi = \operatorname{diag}(\pi)$.
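The detailed-balance condition (2.1) is easy to test numerically. A sketch with two assumed example matrices (not from the paper): a birth-death chain, which is reversible, and an asymmetric cycle, which is not:

```python
import numpy as np

def invariant_measure(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(abs(w - 1.0))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-9):
    """Detailed balance / self-adjointness, Eq. (2.1): Pi P = P^T Pi."""
    Pi = np.diag(invariant_measure(P))
    return np.allclose(Pi @ P, P.T @ Pi, atol=tol)

# A birth-death chain on a path is reversible.
P_rev = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])
print(is_reversible(P_rev))   # -> True

# A 3-cycle with asymmetric weights (doubly stochastic, so its invariant
# measure is uniform) violates detailed balance.
P_cyc = np.array([[0.5, 0.4, 0.1],
                  [0.1, 0.5, 0.4],
                  [0.4, 0.1, 0.5]])
print(is_reversible(P_cyc))   # -> False
```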

Remark 2.0.1. The graph associated with a consensus matrix $P$ is in general a directed graph. However, the assumption that the consensus matrix is reversible implies that the graph is undirected, i.e., $(u, v) \in E \Leftrightarrow (v, u) \in E$. The proof comes immediately from Eq. (2.1) and the property that $\pi_u > 0$ for all $u$, which imply that $P_{uv} > 0$ if and only if $P_{vu} > 0$. For this reason, in the following we will always assume that the communication graph $\mathcal{G}$ is undirected. Also recall that in our definition a consensus matrix has nonzero diagonal elements, and thus we will consider only graphs having a self-loop at each node. However, consistently with the previous literature on consensus, we will define the neighborhood of a node $u \in \mathcal{G}$ to be the set of its neighbors, except $u$ itself,

$$N_u := \{ v \in V : v \ne u,\ (u, v) \in E \},$$

and we will define the degree of $u$ to be the cardinality $|N_u|$ of its neighborhood. Notice that Eq. (2.1) states that reversible matrices are self-adjoint with respect to the inner product $\langle \cdot, \cdot \rangle_\Pi$ or, equivalently, that the matrix $\Pi^{1/2} P \Pi^{-1/2}$ is symmetric. This implies that $P$ has real eigenvalues and $N$ independent eigenvectors. We now briefly recall the notion of the Green matrix of a consensus matrix $P$, which is also known as the fundamental matrix in the Markov chain literature. Here we concentrate only on the results that are needed in the paper; a more complete list of the properties of the fundamental matrix can be found in [6].

Definition 2.0.3. Let $P$ be a consensus matrix with invariant measure $\pi$. The Green matrix $G$ of $P$ is defined as

$$G := \sum_{t \ge 0} (P^t - \mathbf{1}\pi^T). \qquad (2.2)$$

The Green matrix plays a fundamental role in this paper due to its property of being almost an inverse of the Laplacian, in the sense that

$$G + \mathbf{1}\pi^T = (L + \mathbf{1}\pi^T)^{-1}.$$

The above expression is easy to verify, and it implies the following equation, which will be useful later on:

$$\begin{bmatrix} G & \mathbf{1} \end{bmatrix} \begin{bmatrix} L \\ \pi^T \end{bmatrix} = I. \qquad (2.3)$$

3. Reversible consensus matrices and electrical networks.
In this section we present electrical networks, their relation with consensus matrices, and the well-known notion of effective resistance. Making use of these notions, we state our main results, Theorem 3.2.1 and Theorem 3.2.2, which give useful bounds on the cost $J(P)$ of a reversible consensus matrix $P$.

3.1. The electrical analogy. The analogy between consensus matrices, or Markov chains, and resistive electrical networks dates back to the work of Doyle and Snell [11]. It is a powerful tool which gives strong intuitions on the behavior of the chain based on the physics of electrical networks, as well as permitting simple and clear proofs of many results. Our interest is mainly in the possibility of rewriting the LQ cost in terms of a geometric parameter, the average effective resistance. In this respect we are strongly indebted, in terms of inspiration, to the papers by Barooah and Hespanha [7, 8, 9], from which we

6 E. LOVISARI, F. GARIN, AND S. ZAMPIERI took many results we state here without a proof. Eective resistances also arise as a performance metric for clock synchronization algorithms in [20, 2], and methods for its minimization are proposed in [22]. To conclude, in [23] the eective resistance is computed in terms of the eigenvalues of the Laplacian matrix in the symmetric case. 3... Electrical networks. A resistive electrical network is a graph in which pairs of nodes are connected by resistors. A resistive electrical network is therefore determined by a symmetric matrix C with non-negative entries which tells for each pair of nodes u, v which is the conductance of the resistor connecting those two nodes. A resistive electrical network is said to be connected if the graph G C associated with C is connected. In order to describe the current owing in the electrical network and to write Kircho's and Ohm's laws, we choose (arbitrarily) a conventional orientation for each edge of the undirected graph G, so that current will be denoted as positive when owing consistently with the direction of the edge and negative otherwise. To this aim, for any pair of nodes u, v, such that u v and C uv = C vu 0, we choose either (u, v) or (v, u) in V V. Let E V V be the set of directed edges formed in this way and let M be the number of edges. Dene the incidence matrix B R M N as follows: order the edges from to M and let for any e {,..., M} and u {,..., N} if the edge e is (v, u) for some v u, B eu = if the edge e is (u, v) for some v u, (3.) 0 otherwise. We dene the diagonal matrix C R M M having the conductance of the edge e as the entry in position (e, e). The relation between C and C is easily obtained as B T CB = diag C C, (3.2) namely C u if u = v, [B T CB] uv = C uv if (u, v) E, 0 if (u, v) / E, where C u := v V C uv. 
Let $i \in \mathbb{R}^N$ be given such that $\mathbf{1}^T i = 0$, and interpret the $k$-th entry of $i$ as the current which is injected (or extracted, if negative in sign) into the $k$-th node of the network from an external source. We denote by $j \in \mathbb{R}^M$ and $v \in \mathbb{R}^N$, respectively, the current flows on the edges and the potentials at the nodes which are produced in the network by injecting the current $i$, with the convention that $j_e$, $e \in \mathcal{E}$, is positive when the current flows in the direction of $e$. The previously defined matrices $B$ and $\mathcal{C}$ allow us to compactly write Kirchhoff's node law and Ohm's law as the system

$$\begin{cases} B^T j = i, \\ \mathcal{C} B v = j, \end{cases} \qquad (3.3)$$

where the first equation states that the total current flow entering each node equals the total flow exiting from it (Kirchhoff's current law), while the second equation represents Ohm's law, $C_{uu'}(v_u - v_{u'}) = j_e$ for all $e = (u, u') \in \mathcal{E}$.
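The system (3.3) can be solved numerically; a minimal sketch on an assumed path network $0-1-2$ with unit conductances (not from the paper):

```python
import numpy as np

# Assumed example: path network 0-1-2 with unit conductances.
N = 3
edges = [(0, 1), (1, 2)]        # chosen conventional orientations
g = np.array([1.0, 1.0])        # edge conductances
M = len(edges)

# Incidence matrix B, Eq. (3.1): +1 at the tail of e=(u,v), -1 at the head.
B = np.zeros((M, N))
for e, (u, v) in enumerate(edges):
    B[e, u], B[e, v] = 1.0, -1.0
Ccal = np.diag(g)

# Inject one unit of current at node 0 and extract it at node 2.
i = np.array([1.0, 0.0, -1.0])

# Solve B^T Ccal B v = i; the potential is defined only up to an additive
# constant, so use a least-squares solve on the singular system.
v, *_ = np.linalg.lstsq(B.T @ Ccal @ B, i, rcond=None)
j = Ccal @ B @ v                # Ohm's law, second equation of (3.3)

print(np.allclose(B.T @ j, i))  # Kirchhoff's current law -> True
print(round(v[0] - v[2], 6))    # -> 2.0: two unit resistors in series
```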

Solving the electrical network means finding the solutions $j$ and $v$ of Eq. (3.3), in particular finding the solution $v$ of the following electrical equation:

$$B^T \mathcal{C} B v = i. \qquad (3.4)$$

It is well known (see, e.g., [11]) that a solution exists and, for a connected network, is unique up to a constant additive term for $v$, i.e., the differences $v_u - v_{u'}$ are uniquely defined. In the next subsection, we will give an explicit expression for the solution $v$, involving the Green matrix of the associated reversible consensus matrix. Given a connected electrical network with conductance matrix $C$, the effective resistance between two nodes $u, u'$ is defined to be

$$R_{uu'}(C) := v_u - v_{u'},$$

where we impose $i = e_u - e_{u'}$ and $v$ is any solution of the corresponding electrical equation (3.4); namely, $v$ gives the potential at each node of the network when a unit current is injected at node $u$ and extracted at node $u'$. Finally, the average effective resistance of the electrical network is defined as

$$\bar{R}(C) := \frac{1}{2N^2} \sum_{u, u' \in V} R_{uu'}(C). \qquad (3.5)$$

Given a connected undirected graph $\mathcal{G}$, in the following we will use the notations $R_{uu'}(\mathcal{G})$ and $\bar{R}(\mathcal{G})$ for the effective resistance and the average effective resistance associated with the electrical network having conductance equal to $1$ on all the edges of $\mathcal{G}$ and conductance equal to $0$ otherwise.

3.1.2. Electrical network associated with a consensus matrix. There is a way to obtain a one-to-one relation between reversible consensus matrices and connected resistive electrical networks with a fixed total conductance (i.e., sum of the conductances of all edges). Let $P$ be a reversible consensus matrix and let

$$\Phi(P) := N \Pi P,$$

where $\Pi = \operatorname{diag}(\pi)$ and $\pi$ is the invariant measure of $P$. It is clear that $\Phi(P)$ is the conductance matrix of a connected resistive network. It can be shown that the map $\Phi$ is injective. Indeed, if $P_1, P_2$ are reversible consensus matrices and if $\Phi(P_1) = \Phi(P_2)$, then $\operatorname{diag}(\pi_1)P_1 = \operatorname{diag}(\pi_2)P_2$.
Multiplying both sides on the right by $\mathbf{1}$ we obtain $\pi_1 = \pi_2$ and consequently $P_1 = P_2$. We now show that $\operatorname{Im}(\Phi) = S$, where

$$S := \{ C \in \mathbb{R}_+^{N \times N} : C = C^T,\ C_{uu} > 0\ \forall u \in V,\ \mathcal{G}_C \text{ is connected, and } \mathbf{1}^T C \mathbf{1} = N \}.$$

Clearly $\operatorname{Im}(\Phi) \subseteq S$. To prove the equality, for any given $C \in S$ we consider

$$P = (\operatorname{diag}(C\mathbf{1}))^{-1} C \qquad (3.6)$$

and we show that $P$ is a reversible consensus matrix such that $\Phi(P) = C$. It is straightforward to see that $P\mathbf{1} = \mathbf{1}$, that $P_{uu} > 0$ for all $u \in V$, and that $\mathcal{G}_P$ is connected. To prove that $P$ is reversible and that $\Phi(P) = C$, the key remark is that

$$\pi = \frac{1}{N} C \mathbf{1}$$

is the invariant measure of $P$, and thus $\Pi = \frac{1}{N}\operatorname{diag}(C\mathbf{1})$. This immediately implies that $\Phi(P) = C$. The reversibility of $P$ is then proved by using the symmetry of $C$:

$$\Pi P = \frac{1}{N} C = \frac{1}{N} C^T = P^T \Pi.$$

In this way, we have proved not only that $\Phi$ is a bijection onto $S$, but also that (3.6) provides the inverse of $\Phi$ over $S$. Consider now a reversible consensus matrix $P$ and its associated conductance matrix $C := \Phi(P)$. Let moreover $B$ and $\mathcal{C}$ be the matrices associated with the resistive electrical network with conductance matrix $C$, as defined above. Notice that the Laplacian matrix

$$L := I - P = \frac{1}{N} \Pi^{-1} B^T \mathcal{C} B,$$

and so the electrical equation (3.4) is equivalent to

$$L v = \frac{1}{N} \Pi^{-1} i. \qquad (3.7)$$

The network being connected, $\ker L = \{\alpha \mathbf{1} : \alpha \in \mathbb{R}\}$, and thus, for any $i$ such that $\mathbf{1}^T i = 0$, Eq. (3.7) has infinitely many solutions, of the form $\bar{v} + \alpha \mathbf{1}$ for some real constant $\alpha$, where $\bar{v}$ is a particular solution. In our setting it is convenient to find the $\bar{v}$ which satisfies the constraint $\pi^T \bar{v} = 0$, which means that we need to solve the equation

$$\begin{bmatrix} L \\ \pi^T \end{bmatrix} \bar{v} = \begin{bmatrix} \frac{1}{N}\Pi^{-1} i \\ 0 \end{bmatrix}.$$

Thanks to Eq. (2.3), the solution can be explicitly written by using the Green matrix $G$ associated with $P$, as follows:

$$\bar{v} = \begin{bmatrix} G & \mathbf{1} \end{bmatrix} \begin{bmatrix} \frac{1}{N}\Pi^{-1} i \\ 0 \end{bmatrix} = \frac{1}{N} G \Pi^{-1} i.$$

Consequently, we can obtain the effective resistance as follows:

$$R_{uu'}(C) = \frac{1}{N} (e_u - e_{u'})^T G \Pi^{-1} (e_u - e_{u'}). \qquad (3.8)$$

3.2. LQ cost and effective resistance. This section is devoted to our main results on the relation between the LQ cost $J(P)$ of a reversible consensus matrix $P$ and the average effective resistance of a suitable electrical network. The results are then formulated in the special case of symmetric consensus matrices, since in this case they turn out to be clearer and more readable. The proofs are given in Section 5. Consider a reversible consensus matrix $P$ and let $\pi$ be its invariant measure. Build the electrical network associated with the matrix $P^2$ as suggested in Sec. 3.1.2, namely build the matrix of conductances

$$C := \Phi(P^2) = N \Pi P^2.$$
(3.9) In the particular case in which $P$ is symmetric we have that $C = P^2$. The following theorem allows us to estimate the cost in terms of the average effective resistance of this electrical network, and of quantities depending on the entries of the invariant measure of $P$.

Theorem 3.2.1. Let $P \in \mathbb{R}^{N \times N}$ be a reversible consensus matrix, $\pi$ its invariant measure, and $C$ the matrix of conductances defined in Eq. (3.9). Then it holds that

$$\frac{\pi_{\min}^3 N^2}{\pi_{\max}} \bar{R}(C) \le J(P) \le \frac{\pi_{\max}^3 N^2}{\pi_{\min}} \bar{R}(C),$$

where $\pi_{\min}$ and $\pi_{\max}$ are respectively the minimum and maximum entries of $\pi$. In the particular case of a symmetric matrix we have the following corollary, which is a straightforward consequence of the previous theorem.

Corollary 3.1. Let $P \in \mathbb{R}^{N \times N}$ be a symmetric consensus matrix, and $C$ the matrix of conductances defined in Eq. (3.9). Then it holds that

$$J(P) = \bar{R}(C).$$

The previous results capture the dependence of the cost on the electrical network built from $P^2$. The following theorem allows us to bound the cost $J(P)$ in terms of the effective resistance of the graph associated with $P$ only, regardless of the particular entries of the matrix.

Theorem 3.2.2. Let $P$ be a reversible consensus matrix with invariant measure $\pi$ and let $\mathcal{G}$ be the graph associated with $P$. Assume that all the nonzero entries of $P$ belong to the interval $[p_{\min}, p_{\max}]$, and that the degree of any node is bounded from above by an integer $\delta$. Then

$$\frac{\pi_{\min}^3 N}{8 p_{\max}^2 \delta^2 \pi_{\max}^2} \bar{R}(\mathcal{G}) \le J(P) \le \frac{\pi_{\max}^3 N}{p_{\min}^2 \pi_{\min}^2} \bar{R}(\mathcal{G}).$$

A simpler result can be obtained for symmetric matrices as a straightforward consequence of the previous theorem.

Corollary 3.2. Let $P$ be a symmetric consensus matrix associated with a graph $\mathcal{G}$. Assume that all the nonzero entries of $P$ belong to the interval $[p_{\min}, p_{\max}]$, and that the degree of any node is bounded from above by an integer $\delta$. Then

$$\frac{1}{8 p_{\max}^2 \delta^2} \bar{R}(\mathcal{G}) \le J(P) \le \frac{1}{p_{\min}^2} \bar{R}(\mathcal{G}).$$

These last two results can be used to estimate the proposed LQ cost in terms of the effective resistance of graphs only, as we will show in Section 4.

3.3. LQ cost and network dimension. One of the most important problems in the design of a sensor network is its dimensioning, namely deciding how many sensors need to be deployed in order to obtain a given performance. From this point of view, it is very important to understand how our cost function scales with the number $N$ of nodes in a sequence of graphs of growing size belonging to a given family.
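The symmetric-case identity $J(P) = \bar{R}(C)$ with $C = P^2$ can be checked numerically: compute $J(P)$ by truncating its defining series, and the average effective resistance from the pseudoinverse of the weighted Laplacian of the network. A sketch with an assumed symmetric consensus matrix on a 3-node path (not from the paper):

```python
import numpy as np

# Assumed symmetric consensus matrix on a 3-node path.
P = np.array([[0.7, 0.3, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])
N = P.shape[0]

# J(P) from its definition, truncating the geometrically convergent series;
# for symmetric P the invariant measure is uniform, so 1 pi^T = 11^T / N.
limit = np.ones((N, N)) / N
J = sum(np.linalg.norm(np.linalg.matrix_power(P, t) - limit, 'fro') ** 2
        for t in range(500)) / N

# Average effective resistance of the network with conductances C = P^2:
# pairwise resistances from the pseudoinverse of the weighted Laplacian.
C = P @ P
Lnet = np.diag(C @ np.ones(N)) - C
Mp = np.linalg.pinv(Lnet)
Rbar = sum(Mp[u, u] + Mp[v, v] - 2 * Mp[u, v]
           for u in range(N) for v in range(N)) / (2 * N ** 2)

print(np.isclose(J, Rbar))  # Corollary 3.1 -> True
```

Here $R_{uv} = (e_u - e_v)^T L_{\text{net}}^{+} (e_u - e_v)$ is the standard pseudoinverse expression for the effective resistance of the network with Laplacian $L_{\text{net}}$.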
The results in the previous sections can be used to achieve this goal. Consider in fact a sequence of graphs $\{\mathcal{G}_N\}_N$, and assume that $f(N) = \bar{R}(\mathcal{G}_N)$ is a known function of $N$. Assume that the degree of any node of each $\mathcal{G}_N$ is uniformly bounded from above by a positive integer $\delta$. At first, assume we build a sequence of symmetric matrices $P_N$, each one consistent with the corresponding $\mathcal{G}_N$, and such that if $[P_N]_{ij} \ne 0$, then $p_{\min} \le [P_N]_{ij} \le p_{\max}$, for all $N$. Then we immediately obtain from Corollary 3.2 that the asymptotic scaling of $J(P_N)$ for $N \to \infty$ is given by $f(N)$, up to a multiplicative constant. Notice that the above-mentioned assumptions on the family $P_N$ are satisfied, for example, by one of the most popular consensus protocols on networks with bidirectional communication (i.e., on undirected graphs), defined as follows:

$$x_u(t+1) = x_u(t) + \varepsilon \sum_{v \in N_u} (x_v(t) - x_u(t))$$

E. LOVISARI, F. GARIN, AND S. ZAMPIERI

for a sufficiently small ε > 0. If we relax the assumption that all P_N's are symmetric, and we consider a family of reversible matrices P_N, each one consistent with the corresponding G_N, the uniform bound from above and from below on the non-zero entries [P_N]_ij is not enough to ensure that the asymptotic behavior of J(P_N) is given by f(N). In fact, once we denote by π_N the invariant measure of P_N, and by π_{N,min} and π_{N,max} respectively the minimum and maximum value of the entries of π_N, we need the further assumption that the sequences Nπ_{N,min} and Nπ_{N,max} are uniformly bounded from above and below by constants independent of N. Under this assumption, Theorem 3.2.2 clearly ensures that the asymptotic behavior of J(P_N) is given by f(N). Although the assumption requiring that Nπ_{N,min} and Nπ_{N,max} are uniformly bounded can be rather difficult to check for general reversible consensus matrices, we can easily see that it holds true in the following important example of consensus iteration (known as 'simple random walk' or 'uniform weights'):

    x_u(t+1) = (1/(|N_u| + 1)) ( x_u(t) + Σ_{v ∈ N_u} x_v(t) ).

Indeed, it can be shown that, if the graph is undirected, then the consensus matrix in this case is reversible, and that the invariant measure π has components

    π_u = (|N_u| + 1) / Σ_{v ∈ V} (|N_v| + 1).

Under the assumption that the degree of any node is bounded by a value δ, it is clear that the entries of the consensus matrix belong to the interval [1/(δ+1), 1/2]. One can also check that each entry of the invariant measure lies in the interval [2/((δ+1)N), (δ+1)/(2N)], and hence the assumptions are satisfied. We conclude this remark noticing that the assumption requiring that Nπ_{N,min} and Nπ_{N,max} are uniformly bounded is not implied by the other assumptions that the P_N are reversible consensus matrices whose entries belong to a fixed interval [p_min, p_max] and are consistent with graphs with degrees uniformly bounded by δ.
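Both protocols just described are easy to check numerically. A minimal sketch (the ring graph and all parameter values are illustrative choices, not from the paper):

```python
import numpy as np

# (i) eps-weights protocol x(t+1) = (I - eps*L_G) x(t) on a ring: converges to
# the average. (ii) 'uniform weights' P_uv = 1/(d_u+1) for v in N_u and v = u:
# pi_u proportional to d_u + 1 is an invariant, reversible measure.
N, eps, T = 10, 0.1, 3000
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
deg = A.sum(1)
L_G = np.diag(deg) - A

x = np.arange(N, dtype=float)
avg = x.mean()
P_eps = np.eye(N) - eps * L_G
for _ in range(T):
    x = P_eps @ x
err = np.max(np.abs(x - avg))              # consensus on the average

P = (A + np.eye(N)) / (deg + 1)[:, None]   # uniform weights (row-stochastic)
pi = (deg + 1) / (deg + 1).sum()
inv_err = np.max(np.abs(pi @ P - pi))      # invariance: pi P = pi
F = pi[:, None] * P
rev_err = np.max(np.abs(F - F.T))          # detailed balance: pi_u P_uv = pi_v P_vu
print(err, inv_err, rev_err)               # all ~0
```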
This is proved by the example shown in Figure 3.1. One can check that the corresponding consensus matrix is reversible and that the invariant measure π is such that π_k = α(a/b)^{k−1}, where α is a suitable normalizing factor. If we assume that a > b, then π_{N,min} = π_1 = α and π_{N,max} = π_N = α(a/b)^{N−1}. In this case Nπ_{N,min} and Nπ_{N,max} cannot be uniformly bounded from below and above, because if this were the case, then also the ratio Nπ_{N,max}/Nπ_{N,min} would be uniformly bounded from below and above, but this is not possible, as Nπ_{N,max}/Nπ_{N,min} = π_{N,max}/π_{N,min} = (a/b)^{N−1}.

Figure 3.1. Example: a family of growing lines (nodes 1, 2, ..., N, with weight a on each rightward edge, weight b on each leftward edge, and self-loops completing each row). We assume that a > 0, b > 0 and a + b < 1.

4. Application to geometric graphs. Let G = (V, E) be a connected undirected graph such that V ⊂ Q, where Q = [0, l]^d ⊂ R^d and |V| = N. Namely, the

nodes of the graph are deployed in some d-dimensional hypercube of side length equal to l. Given such a graph and two nodes u, v ∈ V, we denote by d_E(u, v) the Euclidean distance between u and v in R^d, namely

    d_E(u, v) = sqrt( Σ_{k=1}^d (u_k − v_k)² ).

Following [8], we define the following parameters associated with G:
the minimum Euclidean distance between any two nodes

    s = min_{u, v ∈ V, u ≠ v} d_E(u, v);    (4.1)

the maximum Euclidean distance between any two connected nodes

    r = max_{(u,v) ∈ E} d_E(u, v);    (4.2)

the radius of the largest ball centered in Q not containing any node of the graph

    γ = max { r' : B(x, r') ∩ V = ∅, x ∈ Q },    (4.3)

where B(x, r') denotes the d-dimensional ball centered in x ∈ R^d and with radius r';
the minimum ratio between the Euclidean distance of two nodes and their graphical distance²

    ρ = min { d_E(u, v)/d_G(u, v) : (u, v) ∈ V × V, u ≠ v }.    (4.4)

One important example of geometric graph is given by the regular grid of dimension d. This is defined as the geometric graph with N = n^d nodes, lying in a hypercube of edge length l = n − 1, whose nodes have coordinates (i_1, ..., i_d), where i_1, ..., i_d ∈ {0, 1, ..., n − 1}, and in which two nodes u, v are connected if and only if d_E(u, v) = 1.

4.1. Geometric graphs: main result. The following theorem is the main result of this section.

Theorem 4.1.1. Let P ∈ R^{N×N} be a reversible consensus matrix with invariant measure π, associated with a graph G = (V, E). Assume that all the non-zero entries of P belong to the interval [p_min, p_max] and that G is a geometric graph with parameters (s, r, γ, ρ) and nodes lying in Q = [0, l]^d, in which γ < l/4. Then

    k_1 + q_1 f_d(N) ≤ J(P) ≤ k_2 + q_2 f_d(N),

where

    f_d(N) = N if d = 1,  log N if d = 2,  1 if d ≥ 3,    (4.5)

² The graphical distance d_G(u, v) between u and v is defined as the length (i.e., number of edges) of the minimum path connecting u and v.

and where k_1, k_2, q_1 and q_2 depend on p_max, p_min, δ, d, on π_min N and π_max N, and on the parameters s, r, γ, ρ of the geometric graph.

One of the most important consequences of this result is the fact that a d-dimensional regular grid exhibits the same behavior of the LQ cost, as a function of N, as an irregular geometric graph. This implies that the behavior of the LQ cost as a function of N is essentially captured by the dimensionality, rather than by the symmetry, exactly as happens for the rate of convergence towards consensus [3, 4].

4.2. Growing families of geometric graphs. Theorem 4.1.1 holds for each given geometric graph with parameters s, r, γ and ρ. However, much as in Section 3.3, we want to use this result in order to capture the asymptotic behavior in terms of the dimension of the network. Consider a growing family of geometric graphs G_N, with G_N = (V_N, E_N) and |V_N| = N. Let each G_N be a geometric graph with parameters s_N, r_N, γ_N and ρ_N. Assume that there exist parameters s, r, γ and ρ, which we call the geometric parameters of the family, such that

    s_N ≥ s,  r_N ≤ r,  γ_N ≤ γ,  ρ_N ≥ ρ,  for all N.    (4.6)

Let P_N be the reversible consensus matrix associated with G_N and with invariant measure π_N. First of all, we need that all the non-zero entries of P_N belong to the interval [p_min, p_max], for all N. Moreover, calling π_{N,min} and π_{N,max} respectively the minimum and maximum entries of π_N, we need to assume that there exist two constants c_l and c_u such that Nπ_{N,min} ≥ c_l and Nπ_{N,max} ≤ c_u. Notice now that:
by definition of s_N and r_N,

    s_N = min_{u, v ∈ V, u ≠ v} d_E(u, v) ≤ min_{(u,v) ∈ E} d_E(u, v) ≤ r_N,

and thus s ≤ s_N ≤ r_N ≤ r;
given a graph G_N, if ū and v̄ are two nodes connected by an edge, we have

    ρ_N = min { d_E(u, v)/d_G(u, v) : (u, v) ∈ V × V, u ≠ v } ≤ d_E(ū, v̄)/1 ≤ r_N,

and thus ρ ≤ ρ_N ≤ r;
by definition of γ_N, it is immediate to see that we must have 2γ_N ≥ s_N, and thus s/2 ≤ γ_N ≤ γ.
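For a concrete instance, the parameters s, r and ρ of Section 4 can be computed exhaustively on small graphs; a sketch for the 2-d regular grid (γ is omitted, since it requires a maximization over the continuous domain Q rather than over the nodes):

```python
import numpy as np
from itertools import product
from collections import deque

# Parameters s, r, rho of a 2-d regular grid with n^2 nodes.
n = 4
nodes = list(product(range(n), repeat=2))
adj = {u: [v for v in nodes if abs(u[0]-v[0]) + abs(u[1]-v[1]) == 1] for u in nodes}
dE = lambda u, v: np.hypot(u[0] - v[0], u[1] - v[1])

s = min(dE(u, v) for u in nodes for v in nodes if u != v)  # min node distance
r = max(dE(u, v) for u in nodes for v in adj[u])           # max edge length

def bfs(src):                       # graphical distances d_G(src, .)
    dist = {src: 0}; q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1; q.append(v)
    return dist

rho = min(dE(u, v) / k for u in nodes for v, k in bfs(u).items() if k > 0)
print(s, r, rho)   # 1.0 1.0 0.7071... (rho = 1/sqrt(2), attained on diagonals)
```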
Hence, given the geometric parameters of the family, we have a lower and an upper bound for the actual parameters of any graph. Notice moreover that the degree of each node is bounded by the ratio between the volume of a sphere of radius r_N + s_N/2 and the volume of a sphere of radius s_N/2 (the balls of radius s_N/2 centered at a node's neighbors are disjoint and contained in the larger sphere). Therefore the maximum degree δ_N of the nodes of G_N is uniformly bounded as follows:

    δ_N ≤ (r_N + s_N/2)^d / (s_N/2)^d ≤ (2r/s + 1)^d.
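The packing argument behind this degree bound can be spot-checked on random node deployments (a sketch; the spacing s, radius r and point count are arbitrary illustrative values):

```python
import numpy as np

# Degree bound via sphere packing: with pairwise distances >= s and edges only
# between nodes at distance <= r, balls of radius s/2 centred at a node's
# neighbours are disjoint and fit in a ball of radius r + s/2, so the
# degree is at most (2r/s + 1)^d.
rng = np.random.default_rng(5)
d, s, r, n = 2, 0.05, 0.15, 150
pts = []
for _ in range(200000):            # rejection sampling with minimum spacing s
    if len(pts) == n:
        break
    p = rng.random(d)
    if all(np.linalg.norm(p - q) >= s for q in pts):
        pts.append(p)
pts = np.array(pts)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
deg = ((D > 0) & (D <= r)).sum(axis=1)
print(deg.max(), (2 * r / s + 1) ** d)   # observed max degree vs. the bound 49.0
```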

On the other side, clearly δ_N ≥ 1 for all N, because G_N is connected. Therefore, similarly to Section 3.3, this discussion allows us to conclude that for the family of reversible consensus matrices P_N constructed as above, it holds

    k_1 + q_1 f_d(N) ≤ J(P_N) ≤ k_2 + q_2 f_d(N),

where k_1, q_1, k_2 and q_2 depend on d, p_max, p_min, c_l and c_u, and on the geometric parameters s, r, γ, ρ of the family of graphs.

5. The relation between the LQ cost and effective resistance: Proofs of Theorem 3.2.1 and Theorem 3.2.2. This section is devoted to the proof of the theorems relating the LQ cost with the average effective resistance. We recall some useful facts from the literature, and then use these notions to prove the results.

5.1. Electrical networks: properties of the effective resistances. This section briefly recalls, without proofs, some well-known results on the behavior of effective resistances under perturbation of the electrical network. These are of fundamental importance, since effective resistances enjoy monotonicity properties which are not trivial to prove for consensus matrices without the electrical analogy. A first important property is the following (see e.g. [24, Thm. B] for a proof).

Lemma 5.1. If the electrical network is connected, then the effective resistance is a distance. Namely, it satisfies the following properties: R_uv ≥ 0 for all u, v ∈ V, and R_uv = 0 if and only if u = v; R_uv = R_vu; R_uw ≤ R_uv + R_vw for all u, v, w ∈ V.

A second result, known as Rayleigh's monotonicity law, says that increasing (resp., decreasing) the conductance of any edge of the network implies that the effective resistance between any couple of nodes cannot increase (resp., decrease). The statement is essentially taken from [8, 25], where the authors consider a more general case.

Lemma 5.2 (Rayleigh's monotonicity law).
Let C and C' be the conductance matrices of two electrical networks such that C_{uu'} ≤ C'_{uu'} for all (u, u') ∈ V × V. Then, the effective resistances between any two nodes v, v' in the two networks are such that R_{vv'}(C) ≥ R_{vv'}(C').

The following lemma [25, Lemma 4.6.1] says that, if we take two resistive networks with the conductance matrices scaled by a constant α, then the effective resistances are scaled by the constant 1/α.

Lemma 5.3. R_{uu'}(αC) = (1/α) R_{uu'}(C), for all (u, u') ∈ V × V.

Remark 5.1. Lemma 5.3 and Rayleigh's monotonicity law imply that the effective resistance in an electrical network is essentially determined by the graph topology. In fact,

if we have an electrical network with conductance matrix C whose non-zero entries belong to the interval [c_min, c_max], and if C' is a conductance matrix having entries equal to 1 in the positions in which C has non-zero entries and equal to 0 elsewhere, then

    (1/c_max) R_{uu'}(C') ≤ R_{uu'}(C) ≤ (1/c_min) R_{uu'}(C'),  for all (u, u') ∈ V × V.

The last technical lemma deals with h-fuzzing in electrical networks with unitary conductances. Given an integer h ≥ 1 and a graph G, we call h-fuzz of G, denoted by the symbol G^(h) = (V^(h), E^(h)), the graph with the same set of nodes, V^(h) = V, and with an edge connecting two nodes u and v if and only if the graphical distance d_G(u, v) between u and v in G is at most h, namely

    E^(h) = {(u, v) ∈ V × V : 1 ≤ d_G(u, v) ≤ h}.

Notice that, if h = 1, then G^(1) = G. If D is the diameter of the graph, namely the maximum graphical distance between a couple of nodes, then G^(D) is the complete graph. It is easy to see that, if P is a stochastic matrix with positive diagonal entries, then the graph G_{P^h} associated with P^h is the h-fuzz of the graph G_P associated with P. The lemma, which is stated and proved in [25, Lemma 5.5.1], shows that a graph G and its h-fuzz G^(h) have effective resistances with a similar asymptotic behavior.

Lemma 5.4. Let h ∈ Z, h ≥ 1, and let G = (V, E) be a graph and G^(h) = (V, E^(h)) be its h-fuzz. For any edge e ∈ E, define µ_h(e) to be the number of paths of length at most h passing through e in G (without any self-loop in the path), and define µ_h = max_{e ∈ E} µ_h(e). The following bounds hold true:

    (1/(h µ_h)) R_uv(G) ≤ R_uv(G^(h)) ≤ R_uv(G).

The value of µ_h in the previous result clearly depends on the particular graph under consideration. The following lemma gives a conservative bound for µ_h which depends only on the maximum degree of the nodes in the graph. We will use this bound later, in the particular case h = 2.

Lemma 5.5. Let µ_h be defined as in Lemma 5.4.
If in G all nodes have degree at most δ, then µ_h ≤ h²(δ − 1)^{h−1}.

Proof. For any K = 1, ..., h, we want to find an upper bound on the number of paths of length K passing through the edge e. We let K' be an integer, 0 ≤ K' ≤ K − 1, and we consider the number of paths in which edge e is the (K'+1)-th edge in the path, namely there are K' edges before e and K − 1 − K' edges after e. As can be easily seen in Figure 5.1, there are at most (δ − 1)^{K'} choices for the portion of path preceding e, and at most (δ − 1)^{K−1−K'} choices for the portion following e, so that there are at most (δ − 1)^{K−1} paths having e in (K'+1)-th position. Summing over all K' = 0, ..., K − 1, and then summing also over all path lengths K = 1, ..., h, we obtain that, for any e ∈ E,

    µ_h(e) ≤ Σ_{K=1}^h Σ_{K'=0}^{K−1} (δ − 1)^{K−1} = Σ_{K=1}^h K(δ − 1)^{K−1} ≤ Σ_{K=1}^h h(δ − 1)^{h−1} = h²(δ − 1)^{h−1}.

Figure 5.1. Illustration of the proof of Lemma 5.5: upper bound on the number of paths of length K in which e is the (K'+1)-th edge, in a graph with node degree at most δ = 4.

Finally notice that this upper bound for µ_h(e) is the same for all edges e ∈ E, and thus it is also an upper bound for the maximum, µ_h.

5.2. Proof of Theorem 3.2.1 and Theorem 3.2.2. The previous definitions and properties are used in this section to prove the main results, Theorem 3.2.1 and Theorem 3.2.2, for reversible consensus matrices. Corollary 3.1 and Corollary 3.2 are then immediate for symmetric matrices, for which π_i = 1/N for all i = 1, ..., N. In order to prove the results, we need to introduce two more technical objects which will help us to develop the theory. Consider a reversible consensus matrix P with invariant measure π. We call weighted cost the following function of P:

    J_w(P) := trace Σ_{t≥0} (I − 1π^T)^T (P^T)^t Π P^t (I − 1π^T),    (5.1)

where Π = diag(π). Notice that in the case of symmetric matrices J(P) = J_w(P). Now, let C := Φ(P²) = NΠP². The second object we need is the weighted average effective resistance, which is defined as

    R̄_w(C) := (1/2) π^T R(C) π = (1/2) Σ_{(u,v) ∈ V×V} R_uv(C) π_u π_v.    (5.2)

Again, notice that in the symmetric case the weighted definition coincides with the un-weighted one, namely R̄(C) = R̄_w(C). We present now a lemma which clarifies the relation between the costs J(P) and J_w(P), and between the weighted and the un-weighted average effective resistances, respectively. The proof is immediate from the fact that π_u > 0 for all u.

Lemma 5.6. Let P be a consensus matrix with invariant measure π and let C := Φ(P²) = NΠP². Then

    J_w(P)/(Nπ_max) ≤ J(P) ≤ J_w(P)/(Nπ_min),
    π_min² N² R̄(C) ≤ R̄_w(C) ≤ π_max² N² R̄(C).
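Lemma 5.6 is easy to validate numerically. The sketch below uses a reversible birth-death chain; the unweighted cost is taken here to be J(P) = (1/N) Σ_t ‖P^t − 1π^T‖_F² (an assumption consistent with J(P) = J_w(P) in the symmetric case), and R̄(C) = (1/(2N²)) Σ_{u,v} R_uv(C):

```python
import numpy as np

# Check both sandwiches of Lemma 5.6 on a reversible birth-death chain, with
# J(P) = (1/N) sum_t ||P^t - 1 pi^T||_F^2 (truncated) and C = Phi(P^2) = N Pi P^2.
N, a, b = 6, 0.3, 0.2
P = np.zeros((N, N))
for k in range(N - 1):
    P[k, k + 1] = a
    P[k + 1, k] = b
P += np.diag(1.0 - P.sum(1))               # self-loops: rows sum to 1

w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()                         # invariant measure
Pi, one_piT = np.diag(pi), np.outer(np.ones(N), pi)

dev = [np.linalg.matrix_power(P, t) - one_piT for t in range(600)]
J = sum(np.trace(Dt.T @ Dt) for Dt in dev) / N     # unweighted cost (truncated)
Jw = sum(np.trace(Dt.T @ Pi @ Dt) for Dt in dev)   # weighted cost, Eq. (5.1)

C = N * Pi @ P @ P                         # symmetric, by reversibility of P
Lp = np.linalg.pinv(np.diag(C.sum(1)) - C)
dg = np.diag(Lp)
R = np.add.outer(dg, dg) - Lp - Lp.T       # pairwise effective resistances
R_bar = R.sum() / (2 * N ** 2)
Rw_bar = 0.5 * pi @ R @ pi                 # weighted average, Eq. (5.2)

ok1 = Jw / (N * pi.max()) <= J <= Jw / (N * pi.min())
ok2 = pi.min() ** 2 * N ** 2 * R_bar <= Rw_bar <= pi.max() ** 2 * N ** 2 * R_bar
print(ok1, ok2)   # True True
```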

After the inequalities of the above lemma, which concern separately the LQ cost and the average effective resistance, our goal is to find the relation between the cost of the consensus matrix P and the average effective resistance of the connected electrical network associated with P². Before doing so, we need the following technical lemma.

Lemma 5.7. If P is a consensus matrix, then the diagonal entries of its Green matrix G are positive.

Proof. For ease of notation, we prove that G_11 > 0; the proof for the other diagonal entries of G can be obtained by the same arguments. We fix the following notation: we let ĝ^T = [G_11, g^T] be the first row of G, and we define the following partitions:

    L = [ l_11  r^T ;  c  L_1 ],    P = [ p_11  r_P^T ;  c_P  P_1 ],    π^T = [ π_1, π̃^T ].

Because GL = I − 1π^T, we have ĝ^T L = e_1^T − π^T, where e_1 denotes the first vector of the canonical basis of R^N. Notice that G1 = 0 implies that ĝ^T 1 = 0 and thus, in particular, G_11 = −g^T 1_{N−1}. Similarly, π^T L = 0^T gives, looking at the first column, l_11 = −(1/π_1) π̃^T c and, looking at the remaining columns, r^T = −(1/π_1) π̃^T L_1. Right-multiplying both sides of the equality ĝ^T L = e_1^T − π^T by the factor [ 0_{N−1}^T ; I_{N−1} ] L_1^{-1} 1_{N−1}, we obtain

    G_11 r^T L_1^{-1} 1_{N−1} + g^T 1_{N−1} = −π̃^T L_1^{-1} 1_{N−1}.

Since r^T L_1^{-1} 1_{N−1} = −(1/π_1) π̃^T 1_{N−1} = −(1 − π_1)/π_1 and g^T 1_{N−1} = −G_11, this yields G_11/π_1 = π̃^T L_1^{-1} 1_{N−1}, and thus

    G_11 = π_1 π̃^T L_1^{-1} 1_{N−1}.

Now notice that, by the definition L = I − P of the Laplacian, L_1 = I − P_1. Moreover, our definition of consensus matrix implies that P is primitive, and thus it is well-known that P_1 has all eigenvalues inside the unit circle (see e.g. [26, Lemma III.1] for a proof). This implies that L_1 is invertible, and that the series Σ_{t≥0} P_1^t is convergent and equal to (I − P_1)^{-1} = L_1^{-1}. This allows us to obtain

    G_11 = π_1 π̃^T Σ_{t≥0} P_1^t 1_{N−1}.

Recalling that the entries of π are all positive, and that Σ_{t≥0} P_1^t has non-negative entries with the t = 0 term equal to the identity, this proves that G_11 > 0.

Now we have the tools to prove the following lemma, which shows the relation between the weighted cost and the weighted average effective resistance.

Lemma 5.8.
Let P be a reversible consensus matrix with invariant measure π and let C := Φ(P²) = NΠP². Then

    π_min N R̄_w(C) ≤ J_w(P) ≤ π_max N R̄_w(C).

Proof. To prove this lemma, we will prove the following two equalities involving the Green matrix associated with P², which we will denote by G(P²):
1. J_w(P) = trace ΠG(P²);

2. R̄_w(C) = (1/N) trace G(P²).

From these equalities the statement follows, because Π is diagonal and positive definite, and G(P²) has positive diagonal entries (see Lemma 5.7). As far as the first equality is concerned, observe that

    J_w(P) = trace Σ_{t≥0} (P^t − 1π^T)^T Π (P^t − 1π^T)
           = trace Σ_{t≥0} ( (P^t)^T Π P^t − ππ^T )
           = trace Π Σ_{t≥0} ( P^{2t} − 1π^T )
           = trace ( ΠG(P²) ),

where the third equality uses the reversibility of P, i.e., ΠP = P^T Π, so that (P^t)^T Π = Π P^t. As far as the second equality is concerned, observe that, by substituting the expression for R_uv(C) given in Eq. (3.8) inside the definition of R̄_w(C), we get

    R̄_w(C) = (1/2) Σ_{u,v} (1/N) (e_u − e_v)^T G(P²) Π^{-1} (e_u − e_v) π_u π_v,

from which we can compute

    R̄_w(C) = (1/(2N)) Σ_{u,v} (e_u^T − e_v^T) G(P²) (π_v e_u − π_u e_v)
            = (1/(2N)) Σ_{u,v} ( (π_v e_u^T G(P²) e_u + π_u e_v^T G(P²) e_v) − (π_v e_v^T G(P²) e_u + π_u e_u^T G(P²) e_v) )
            = (1/N) ( trace(G(P²)) − π^T G(P²) 1 ),

which yields the proof of the equality, since π^T G(P²) = 0^T.

These lemmas can be easily used to infer the first main result: Theorem 3.2.1's proof follows immediately from the inequalities in Lemma 5.6 together with those in Lemma 5.8. In order to prove the second main result, we need a last technical lemma, which allows us to reduce the computation of the average effective resistances on the 2-fuzz of G to those on G only.

Lemma 5.9. Let P be a reversible consensus matrix with invariant measure π and with associated graph G. Let C := Φ(P²). Then

    R̄(G)/(8Nπ_max δ² p_max²) ≤ R̄(C) ≤ R̄(G)/(Nπ_min p_min²),    (5.3)

where δ denotes the largest degree of the nodes in G, and p_min and p_max are, respectively, the minimum and the maximum of the non-zero elements of P.

Proof. First of all notice that, for all u, v such that C_uv ≠ 0, we have C_uv = Nπ_u [P²]_uv = Nπ_u Σ_w P_uw P_wv. By definition of p_min and p_max, and because there are at most δ + 1 non-zero terms P_uw for any fixed u, this yields, for all u, v with C_uv ≠ 0,

    Nπ_min p_min² ≤ C_uv ≤ Nπ_max (δ + 1) p_max².    (5.4)

By Remark 5.1,

    (1/c_max) R̄(G^(2)) ≤ R̄(C) ≤ (1/c_min) R̄(G^(2)),

where c_min and c_max are the minimum and maximum non-zero entries of C, respectively. This, together with Eq. (5.4), gives

    R̄(G^(2))/(Nπ_max (δ + 1) p_max²) ≤ R̄(C) ≤ R̄(G^(2))/(Nπ_min p_min²).

Then we apply Lemmas 5.4 and 5.5, both with h = 2, and we obtain

    R̄(G)/(8(δ − 1)Nπ_max (δ + 1) p_max²) ≤ R̄(C) ≤ R̄(G)/(Nπ_min p_min²),

which yields the claim, because (δ + 1)(δ − 1) ≤ δ².

Now we can prove the second main result: Theorem 3.2.2 immediately follows from Theorem 3.2.1 and Lemma 5.9.

6. Application to geometric graphs: Proof of Theorem 4.1.1. In order to prove Theorem 4.1.1, we need two preliminary results. The first one is an immediate corollary of a theorem taken from [6], which states that the claimed asymptotic behavior of geometric graphs holds true at least in the case of regular grids. Recall that, by regular grid, we mean a d-dimensional geometric graph with N = n^d nodes lying on the points (i_1, ..., i_d), where i_1, ..., i_d ∈ {0, ..., n − 1}, and in which there is an edge connecting two nodes u, v if and only if their distance is d_E(u, v) = 1.

Lemma 6.1. Let B_L be the incidence matrix of a regular grid of dimension d with N = n^d nodes. Let P ∈ R^{N×N} be the consensus matrix defined as follows:

    P = I − (1/(2d + 1)) B_L^T B_L,

whose associated graph is the regular grid. Then

    c_l f_d(N) ≤ R̄(L) ≤ c_u f_d(N),

where c_l and c_u depend on d only, and where f_d(N) is defined in Eq. (4.5).

Proof. With the same assumptions, from [6, Proposition 1], we know that

    c'_l f_d(N) ≤ J(P) ≤ c'_u f_d(N),

where c'_l and c'_u depend on d only. The result immediately follows from Corollary 3.2.
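The scaling f_d(N) of Lemma 6.1 can be observed directly on small grids; a sketch (the average effective resistance is computed as (1/(2N²)) Σ_{u,v} R_uv from the Laplacian pseudoinverse, and the grid sizes are illustrative):

```python
import numpy as np
from itertools import product

def avg_eff_res(A):
    # average effective resistance (1/(2N^2)) * sum_{u,v} R_uv of a graph
    N = len(A)
    Lp = np.linalg.pinv(np.diag(A.sum(1)) - A)
    dg = np.diag(Lp)
    R = np.add.outer(dg, dg) - Lp - Lp.T
    return R.sum() / (2 * N ** 2)

def grid_adj(n, d):
    # adjacency matrix of the d-dimensional regular grid with n^d nodes
    nodes = list(product(range(n), repeat=d))
    idx = {u: i for i, u in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for u in nodes:
        for k in range(d):
            if u[k] + 1 < n:
                v = u[:k] + (u[k] + 1,) + u[k + 1:]
                A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
    return A

res = {(n, d): avg_eff_res(grid_adj(n, d)) for n in (8, 16) for d in (1, 2)}
print(res)
# d = 1: Rbar roughly doubles when n doubles (linear in N);
# d = 2: Rbar grows much more slowly (log N), as predicted by f_d(N)
```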
The second result allows us to reduce the problem of computing the average effective resistance of the geometric graph to the simpler case of two suitable grids. First of all, we state the following three technical results.

Lemma 6.2. In a hypercube H ⊆ Q with side length less than s/√d, there is at most one node u ∈ V. In a hypercube H ⊆ Q with side length greater than 2γ, there is at least one node u ∈ V.

Proof. If the side length of a hypercube is less than s/√d, then its diagonal has length less than s. If we had two nodes in the hypercube, their distance would be less than s, in contradiction with the definition of s. The second claim is proved by noticing that a hypercube of side length 2γ includes a sphere of radius γ. If it did not contain any node, then we could find a sphere of radius larger than γ not containing any node, in contradiction with the definition of γ.

As a corollary of the previous lemma we have the following result.

Lemma 6.3. Let H be a hypercube in Q with edge length l_H and let N_H be the number of nodes in it. Then

    (l_H/(2γ) − 1)^d < N_H < (√d l_H/s + 1)^d.

Proof. The result follows from Lemma 6.2 simply by counting how many disjoint hypercubes of side length s/√d and 2γ we can find in a hypercube of side length l_H.

In particular, for the whole graph we have the following corollary.

Corollary 6.4. The number of nodes N of the graph is such that

    (l/(2γ) − 1)^d < N < (√d l/s + 1)^d.

Notice that, in the case where l is big with respect to γ and s, the previous corollary essentially implies that N is proportional to l^d. The following lemma concerns geometric graphs and their embeddings in lattices.

Lemma 6.5. Let G = (V, E) be a geometric graph with parameters (s, r, γ, ρ) and with nodes in a hypercube Q = [0, l]^d in which γ < l/4. Then there exist two lattices, L_1 and L_2, such that

    k_1 + q_1 R̄(L_1) ≤ R̄(G) ≤ k_2 + q_2 R̄(L_2),    (6.1)

where q_1, q_2, k_1 and k_2 depend on s, r, γ, ρ, and on d. Moreover, there exist four constants, c_1, c'_1, c_2, and c'_2, depending on the same set of parameters, such that, if N_1 and N_2 are respectively the numbers of nodes of L_1 and L_2, then

    c_1 N_1 ≤ N ≤ c'_1 N_1,    c_2 N_2 ≤ N ≤ c'_2 N_2.    (6.2)

Proof. The idea is to tessellate the hypercube Q in order to obtain a rough approximation of G, and then compute the bound for the effective resistance. Let us consider the upper bound first.
Define n_1 := ⌊l/(2γ)⌋ and λ_1 := l/n_1, and (exactly) tessellate the hypercube Q with N_1 := n_1^d hypercubes of side length λ_1 as in Fig. 6.1. Notice that the technical assumption γ < l/4 also implies γ < l/2, which in turn avoids the pathological case in which n_1 = 0. Using the properties of ⌊·⌋, it can be seen that 2γ ≤ λ_1 < 4γ. Notice that the assumption l > 4γ ensures that N_1 ≥ 2^d. Notice that, by Lemma 6.2, in each of these hypercubes there is at least one node of the graph G. On the other hand, by Lemma 6.3, we can argue that in each hypercube there are at most (√d λ_1/s + 1)^d nodes. Since λ_1 < 4γ, then

    N_1 ≤ N ≤ (4√d γ/s + 1)^d N_1.

This proves the first of the two bounds in Eq. (6.2). Another consequence of the fact that in each of these hypercubes there is at least one node of G is that for each hypercube we can select one representative node in V belonging to it. Let V_{L_1} ⊆ V be the set of these representatives. Consider now the regular lattice L_1 = (V_{L_1}, E_{L_1}) having as set of nodes the set of representatives V_{L_1}, and in which there is an edge connecting two nodes of V_{L_1} if the two corresponding hypercubes touch each other (not diagonally). Define the function η : V → V_{L_1} such that η(u) = u' if u belongs to the hypercube associated with u'.

Figure 6.1. On the left, an example of geometric graph in R² with parameters s, r, γ and ρ (for ρ, the two nodes for which the minimum in the definition is attained are shown). On the right, the lattice L_1 built for the upper bound. The box-marked nodes are the representatives of the hypercubes; the thick solid lines are the edges of the lattice L_1. Small nodes and dotted lines are the other nodes and edges of the original graph G.

The next step is to prove that there exists an integer h such that the h-fuzz G^(h) of G embeds L_1, namely that all the nodes and edges of L_1 are also nodes and edges of G^(h). Take thus u', v' ∈ V_{L_1} such that (u', v') ∈ E_{L_1}. Their Euclidean distance is bounded as follows:

    d_E(u', v') ≤ λ_1 √(d + 3),

as a simple geometric argument shows. By definition of ρ, we obtain

    d_G(u', v') ≤ d_E(u', v')/ρ ≤ λ_1 √(d + 3)/ρ ≤ 4γ √(d + 3)/ρ.

Take thus h = ⌈4γ √(d + 3)/ρ⌉ and build G^(h). By the previous discussion, it is manifest that G^(h) embeds L_1. Now, we claim that in G^(h) all the nodes lying in the same hypercube have graphical distance 1, namely they are all connected to each other. In fact, if u and v lie in one hypercube, and thus d_E(u, v) ≤ λ_1 √d, then we have

    d_G(v, u) ≤ d_E(v, u)/ρ ≤ λ_1 √d/ρ ≤ 4γ √d/ρ ≤ 4γ √(d + 3)/ρ ≤ h,

and thus

    d_{G^(h)}(u, η(u)) ≤ 1.    (6.3)
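The floor-function step at the start of this proof (2γ ≤ λ_1 < 4γ whenever l > 4γ) can be sanity-checked by random sampling:

```python
import numpy as np

# With n1 = floor(l/(2*gamma)) and lambda1 = l/n1, check 2*gamma <= lambda1 < 4*gamma
# for randomly drawn gamma and l > 4*gamma (small tolerance absorbs float rounding).
rng = np.random.default_rng(7)
ok = True
for _ in range(10000):
    gamma = rng.uniform(0.1, 1.0)
    l = rng.uniform(4 * gamma, 100 * gamma) + 1e-9
    n1 = int(l // (2 * gamma))
    lam1 = l / n1
    ok = ok and (lam1 >= 2 * gamma - 1e-12) and (lam1 < 4 * gamma)
print(ok)   # True
```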