Convergence Rate Analysis for Periodic Gossip Algorithms in Wireless Sensor Networks

Convergence Rate Analysis for Periodic Gossip Algorithms in Wireless Sensor Networks

S. Kouachi, Sateeshkrishna Dhuli, and Y. N. Singh, Senior Member, IEEE

arXiv preprint [cs.DC], 7 May 2018

Abstract: Periodic gossip algorithms have generated a lot of interest due to their ability to compute global statistics using local pairwise communications among nodes. Simple execution, robustness to topology changes, and their distributed nature make these algorithms quite suitable for wireless sensor networks (WSNs). However, these algorithms converge to the global statistics only after a certain number of rounds of pairwise communications among sensor nodes. A major challenge for periodic gossip algorithms is the difficulty of predicting the convergence rate for large-scale networks. To facilitate the convergence rate evaluation, we study a one-dimensional lattice network model. In this setting, deriving an explicit formula for the convergence rate requires a closed-form expression for the second largest eigenvalue of perturbed pentadiagonal matrices. In our approach, we derive explicit expressions for the eigenvalues by exploiting the theory of recurrent sequences. Unlike the existing approaches in the literature, this is a direct method which avoids the theory of orthogonal polynomials [8]. Finally, we derive explicit expressions for the convergence rates of the average periodic gossip algorithm in one-dimensional WSNs. Further, we extend our analysis to the linear weight updating approach and investigate the impact of link weights on the convergence rates for different numbers of nodes.

Index Terms: Wireless Sensor Networks, Pentadiagonal Matrices, Eigenvalues, Lattice Networks, Convergence Rate, Gossip Algorithms, Distributed Algorithms, Periodic Gossip Algorithms, Perturbed Pentadiagonal Matrices.

I. INTRODUCTION

GOSSIP is a distributed operation which enables the sensor nodes to asymptotically determine the average of their initial gossip variables. Gossip algorithms have generated a lot of attention in the last decade due to their ability to achieve the global average using pairwise communications between nodes. In contrast to centralized algorithms, the underlying distributed philosophy of these algorithms avoids the need for a central fusion node for information gathering. In particular, they are quite suitable for data delivery in WSNs [5], [6], as they can be utilized when the global network topology is highly dynamic and the network consists of power-constrained nodes. As gossip algorithms are iterative in nature, the convergence rate of the algorithm greatly influences the performance of the WSN.

S. Kouachi is with the Department of Mathematics, College of Science, Qassim University, P.O. Box 66, Al-Gassim, Buraydah, Saudi Arabia (e-mail: koashy@qu.edu.sa). Sateeshkrishna Dhuli is with the Department of Electrical Engineering, Indian Institute of Technology Kanpur, Uttar Pradesh, India (e-mail: dvskrishnanitw@gmail.com). Y. N. Singh is with the Department of Electrical Engineering, Indian Institute of Technology Kanpur, Uttar Pradesh, India (e-mail: ynsingh@iitk.ac.in).

Although there have been several studies on gossip algorithms, analytic tools to control the convergence rate have not been much explored in the literature. In a gossip algorithm, each node exchanges information with one of its neighbors so that the global average is eventually obtained at every node. Gossip algorithms have been shown to achieve faster convergence rates when periodic gossip sequences are used; such algorithms are termed periodic gossip algorithms [7], [8]. The convergence rate of a periodic gossip algorithm is characterized by the magnitude of the second largest eigenvalue of a gossip matrix. However, computing the second largest eigenvalue requires huge computational resources for large-scale networks. In our work, we estimate the convergence rate of periodic gossip algorithms for a one-dimensional lattice network. Lattice networks capture the notion of geographical proximity in practical WSNs, and they have been extensively used in WSN applications for measuring and monitoring purposes [5]. Lattice networks are amenable to closed-form solutions which can be generalized to higher dimensions. These structures also play a fundamental role in analyzing connectivity, scalability, network size, and node failures in WSNs. In this paper, we model the WSN as a one-dimensional lattice network and obtain explicit formulas of the convergence rate for periodic gossip algorithms, considering both even and odd numbers of nodes. To obtain the convergence rate, we need to derive explicit expressions for the second largest eigenvalue of a perturbed pentadiagonal stochastic matrix. For properties of these matrices, we refer to [8], [9]. In [0], the author considered a constant-diagonals matrix and gave many examples on the determinant and inverse of the matrix for some special cases. To the best of our knowledge, explicit expressions for the eigenvalues of pentadiagonal matrices are not yet available in the literature. In our work, we derive explicit expressions for the eigenvalues of perturbed pentadiagonal matrices. Closed-form expressions of eigenvalues are extremely helpful, as pentadiagonal matrices have been widely used in time series analysis, signal processing, boundary value problems of partial differential equations, high-order harmonic filtering theory, and differential equations [6], [7]. To determine the eigenvalues, we obtain recurrence relations and then apply the theory of recurrent sequences. Unlike the existing approaches in the literature, our approach avoids the theory of orthogonal polynomials. Furthermore, we use the explicit eigenvalue expressions of perturbed pentadiagonal matrices to derive the convergence rate expressions of the periodic gossip algorithm in one-dimensional WSNs.

Specifically, our work avoids the use of computationally expensive algorithms for studying large-scale networks. Our results are more precise, and they can be applied to most practical WSNs.

A. Our Main Contributions

1) Firstly, we model the WSN as a one-dimensional lattice network and compute the gossip matrices of the average periodic gossip algorithm for both even and odd numbers of nodes.
2) To obtain the convergence rate of the average periodic gossip algorithm, we need to determine the second largest eigenvalue of perturbed pentadiagonal matrices. By exploiting the theory of recurrent sequences, we derive explicit expressions for the eigenvalues of perturbed pentadiagonal matrices.
3) We extend our results to periodic gossip algorithms with the linear weight updating approach to obtain a generalized expression for the convergence rate.
4) Finally, we present numerical results and study the effect of the number of nodes and the link weight on the convergence rate.

B. Organization

In summary, the paper is organized as follows. In Section II, we give a brief review of the periodic gossip algorithm. In Section III, we evaluate the primitive gossip matrices of the periodic gossip algorithm for lattice networks. In Section IV, we derive the explicit eigenvalues of several perturbed pentadiagonal matrices using recurrent sequences. We derive the analytic expressions for the convergence rate of the periodic gossip algorithm for both even and odd numbers of nodes in Section V. Finally, in Section VI, we present the numerical results and study the effect of the link weight and the number of nodes on the convergence rate.

II. BRIEF REVIEW OF PERIODIC GOSSIP ALGORITHM

Gossiping is a form of consensus used to evaluate the global average of the initial values of the gossip variables. The gossiping process can be modeled as a discrete-time linear system
$$x(t+1) = M(t)\,x(t), \qquad t = 1, 2, \ldots \tag{1}$$
where $x$ is the vector of node variables and $M(t)$ denotes a doubly stochastic matrix. If nodes $i$ and $j$ gossip at time $t$, then the values of the nodes at time $t+1$ are updated as
$$x_i(t+1) = x_j(t+1) = \frac{x_i(t) + x_j(t)}{2}. \tag{2}$$
Accordingly, $M(t)$ is expressed as $M(t) = P_{ij}$, where $P_{ij} = [p_{lm}]_{n \times n}$ for the gossip pair $(i, j)$ has entries
$$p_{lm} = \begin{cases} \tfrac{1}{2}, & (l, m) \in \{(i,i), (i,j), (j,i), (j,j)\}, \\ 1, & l = m,\ l \neq i,\ l \neq j, \\ 0, & \text{otherwise}. \end{cases} \tag{3}$$
A gossip sequence is defined as an ordered sequence of edges of a given graph in which each pair appears once. For a gossip sequence $(i_1, j_1), (i_2, j_2), \ldots, (i_k, j_k)$, the gossip matrix is expressed as $P_{i_k j_k} \cdots P_{i_2 j_2} P_{i_1 j_1}$.

[Fig. 1: One-dimensional lattice network.]

For a periodic gossip sequence with period $T$, if $(i_t, j_t)$ denotes the $t$-th gossip pair, then $i_{T+k} = i_k$ for $k = 1, 2, \ldots$ Here we can write the variable $x$ at time $(k+1)T$ as
$$x((k+1)T) = W\,x(kT), \qquad k = 0, 1, 2, \ldots \tag{4}$$
where $W$ is a doubly stochastic matrix and $T$ also denotes the number of steps needed to implement one period of the subsequence $E$. When a subset of edges is such that no two edges are adjacent to the same node, the gossips on these edges can be performed simultaneously in one time step; this is defined as multi-gossip. The minimum value of $T$ is related to an edge coloring problem. The minimum number of colors needed in an edge coloring problem is called the chromatic index, whose value is either $d_{\max}$ or $d_{\max}+1$, where $d_{\max}$ is the maximum degree of the graph. When multi-gossip is allowed, a periodic gossip sequence $E, E, E, \ldots$ with $T$ equal to the chromatic index is called an optimal periodic gossip sequence. The convergence rate of the periodic gossip algorithm is characterized by the second largest eigenvalue $\lambda_2(W)$: the rate $R$ at which the gossip iteration converges to a rank-one matrix is determined by the spectral gap $1 - \lambda_2(W)$ [8].
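As a concrete illustration of the update rule (1)-(3), the following short Python sketch (illustrative only, not taken from the paper; the function name is ours) builds the pairwise gossip matrix $P_{ij}$ and applies one gossip step.

```python
import numpy as np

def pair_gossip_matrix(n, i, j):
    """Pairwise gossip matrix P_ij of Eq. (3), with 0-indexed nodes:
    nodes i and j move to their average; every other node keeps its value."""
    P = np.eye(n)
    P[i, i] = P[i, j] = P[j, i] = P[j, j] = 0.5
    return P

n = 6
x = np.arange(1.0, n + 1)          # initial gossip variables x(t)
P = pair_gossip_matrix(n, 1, 2)    # gossip pair (2, 3) in 1-indexed notation
x_next = P @ x                     # one gossip step, Eqs. (1)-(2)
print(x_next)                      # entries at positions 1 and 2 both become 2.5
# P is doubly stochastic, as required for average consensus:
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)
```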
III. PERIODIC GOSSIP ALGORITHM FOR A ONE-DIMENSIONAL LATTICE NETWORK

We model the WSN as a one-dimensional lattice network, as shown in Fig. 1. We obtain the optimal periodic sub-sequence and evaluate the primitive gossip matrix.

A. Average Gossip Algorithm

In this algorithm, each pair of nodes participating in the gossip process at each iteration updates with the average of their previous state values, so as to obtain the global average. In this section, we study the average gossip algorithm for the one-dimensional lattice network for both even and odd numbers of nodes.

1) For n even: The possible gossip pairs of the one-dimensional lattice network can be expressed as $\{(1,2), (2,3), (3,4), \ldots, (n-2, n-1), (n-1, n)\}$. In this case, the chromatic index is either 2 or 3. Hence, the optimal periodic sub-sequence $E$ can be written as $E = E_1 \cup E_2$, where
$$E_1 = \{(2,3), (4,5), \ldots, (n-2, n-1)\}, \qquad E_2 = \{(1,2), (3,4), (5,6), \ldots, (n-1, n)\}$$
are two disjoint sets.

The primitive gossip matrix $W$ is expressed as $W = S_2 S_1$ or $W = S_1 S_2$, where
$$S_1 = P_{23}\,P_{45}\cdots P_{(n-2)(n-1)}, \qquad S_2 = P_{12}\,P_{34}\,P_{56}\cdots P_{(n-1)n}.$$
Hence, the gossip matrix $W$ for $n$ even can be computed explicitly.

[Matrix display: primitive gossip matrix $W$ for $n$ even with $w = 1/2$.]

2) For n odd: In this case, the optimal periodic sub-sequence $E$ is expressed as
$$E = E_1 \cup E_2, \tag{5}$$
where $E_1 = \{(2,3), (4,5), \ldots, (n-1, n)\}$ and $E_2 = \{(1,2), (3,4), (5,6), \ldots, (n-2, n-1)\}$ are two disjoint sets. The gossip matrix $W$ for $n$ odd is defined as $W = S_2 S_1$ or $W = S_1 S_2$, where
$$S_1 = P_{12}\,P_{34}\cdots P_{(n-2)(n-1)}, \qquad S_2 = P_{23}\,P_{45}\,P_{67}\cdots P_{(n-1)n}.$$
Hence, the gossip matrix $W$ for $n$ odd can be computed explicitly.

[Matrix display: primitive gossip matrix $W$ for $n$ odd with $w = 1/2$.]

B. Linear Weight Updating Approach

In the previous section, we obtained the primitive gossip matrices for link weight $w = 1/2$. To investigate the effect of the link weight on the convergence rate, we consider the case in which weights are associated with the edges. If we assume that at iteration $k$ nodes $i$ and $j$ communicate, then node $i$ and node $j$ perform the linear update with link weight $w$ as
$$x_i(k) = (1-w)\,x_i(k-1) + w\,x_j(k-1), \tag{6}$$
$$x_j(k) = w\,x_i(k-1) + (1-w)\,x_j(k-1), \tag{7}$$
where $w$ is the link weight associated with edge $(i, j)$. We follow the same steps as in the previous section to compute the gossip matrix. For $n$ even, the gossip matrix $W$ is a perturbed pentadiagonal matrix whose entries are quadratic polynomials in $w$, such as $w^2$, $w(1-w)$, and $(1-w)^2$:

[Eq. (8): explicit pentadiagonal form of $W$ for $n$ even with link weight $w$.]

Similarly, for $n$ odd, the gossip matrix $W$ can be computed as

[Eq. (9): explicit pentadiagonal form of $W$ for $n$ odd with link weight $w$.]
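For concreteness, the sketch below (our own illustration, not code from the paper) assembles the weighted primitive gossip matrix for the path by multiplying the two matching updates of Section III-A, with each pairwise update given by (6)-(7), and checks that the result is doubly stochastic and banded with bandwidth two, i.e., of (perturbed) pentadiagonal form as in (8)-(9). The ordering $S_2 S_1$ is one of the two conventions mentioned above; $S_1 S_2$ has the same spectrum.

```python
import numpy as np

def matching_update(n, pairs, w):
    """Simultaneous weighted gossip (Eqs. (6)-(7)) on a set of disjoint edges."""
    S = np.eye(n)
    for i, j in pairs:
        S[i, i] = S[j, j] = 1.0 - w
        S[i, j] = S[j, i] = w
    return S

def primitive_gossip_matrix(n, w):
    """One period of the optimal periodic sequence on the path of n nodes:
    the two disjoint matchings of Section III-A applied one after the other
    (0-indexed node labels)."""
    M1 = [(i, i + 1) for i in range(0, n - 1, 2)]   # (1,2), (3,4), ... in 1-indexed labels
    M2 = [(i, i + 1) for i in range(1, n - 1, 2)]   # (2,3), (4,5), ... in 1-indexed labels
    return matching_update(n, M2, w) @ matching_update(n, M1, w)

W = primitive_gossip_matrix(8, w=0.5)
# Doubly stochastic, and banded with bandwidth 2 (pentadiagonal structure):
assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)
assert np.allclose(np.triu(W, 3), 0) and np.allclose(np.tril(W, -3), 0)
```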

IV. EXPLICIT EIGENVALUES OF PERTURBED PENTADIAGONAL MATRICES USING RECURRENT SEQUENCES

In this section, we derive the eigenvalues of the non-symmetric perturbed pentadiagonal matrices $A_{2m+1}(\alpha, \beta; e, b, d, c)$ and $A_{2m}(\alpha, \beta; e, b, d, c)$ displayed in (10) and (11).

[Eq. (10): banded form of $A_{2m+1}(\alpha, \beta; e, b, d, c)$: diagonal entries $e$, with the $(1,1)$ and $(2m+1, 2m+1)$ entries perturbed to $e+\alpha$ and $e+\beta$; first off-diagonal entries $b$; second off-diagonal entries $c$; and corner-adjacent off-diagonal entries $d$.]

[Eq. (11): the analogous banded form of $A_{2m}(\alpha, \beta; e, b, d, c)$ of even order.]

We apply the well-known Gaussian elimination method to obtain the characteristic polynomials of the matrices (10) and (11) as solutions of recurrence relations, in the manner of orthogonal polynomials of the second kind.

Remark 1: We observe that the presence of $e$ on the main diagonal is redundant; in fact, it can be taken to be $0$.

A. Case $\alpha = \beta = 0$

1) Step 1: In this step we study the case $d = b$. If we denote by $\Delta_n$ the characteristic polynomial of the matrix $A_n$, then the Laplace expansion by minors along the last column gives
$$\Delta_{2m+1} = Y\,\Delta'_{2m+1} - b\,\Delta^{+}_{2m+1}, \qquad Y = e - \lambda, \tag{12}$$
where $\Delta'_{2m+1}$ is the determinant of the matrix obtained from $A_{2m+1} - \lambda I_{2m+1}$ by deleting its last row and column, and $\Delta^{+}_{2m+1}$ is the determinant of the matrix obtained from $A_{2m+1} - \lambda I_{2m+1}$ by replacing the elements $c$ and $b$ in the last row with $b$ and $Y$, respectively. The Laplace expansion by minors of $\Delta'_{2m+1}$ and $\Delta^{+}_{2m+1}$, each along its last row, gives
$$\Delta'_{2m+1} = Y\,\Delta_{2m} - b\,\Delta'_{2m}, \tag{13}$$
$$\Delta^{+}_{2m+1} = b\,\Delta_{2m} - c\,\Delta'_{2m}, \tag{14}$$
where $\Delta'_{2m}$ is the determinant of the matrix obtained from $A_{2m} - \lambda I_{2m}$ by deleting its last row and last column. Concerning the determinant $\Delta'_{2m}$, the Laplace expansion by minors along the last column gives
$$\Delta'_{2m} = b\,\Delta_{2m-1} - c\,\Delta^{+}_{2m-1}. \tag{15}$$
Combining the above formulas and writing, for convenience, $\Delta^{(1)}_m = \Delta_{2m+1}$ and $\Delta^{(2)}_m = \Delta_{2m}$, we obtain the two recurrent sequences
$$\Delta^{(j)}_{m+1} - \left(Y^2 + c^2 - 2b^2\right)\Delta^{(j)}_{m} + \left(cY - b^2\right)^2\Delta^{(j)}_{m-1} = 0, \qquad j = 1, 2,\ m = 2, 3, \ldots$$
The characteristic equation of this recurrence is
$$\alpha^2 - \left(Y^2 + c^2 - 2b^2\right)\alpha + \left(cY - b^2\right)^2 = 0,$$
whose discriminant can be written as
$$\left(Y^2 + c^2 - 2b^2\right)^2 - 4\left(cY - b^2\right)^2 = -4\left(cY - b^2\right)^2\sin^2\theta,$$
which is equivalent to the algebraic equation
$$Y^2 - 2cY\cos\theta + c^2 - 2b^2\left(1 - \cos\theta\right) = 0, \tag{16}$$
with roots $\alpha_{1,2} = \left(cY - b^2\right)e^{\pm i\theta}$. Finally, we can write the general solution of the above recurrent sequences as
$$\Delta^{(j)}_m = \left(cY - b^2\right)^m\left(C_1 e^{im\theta} + C_2 e^{-im\theta}\right), \qquad j = 1, 2,\ m = 1, 2, \ldots,$$
where $C_1$ and $C_2$ are constants determined from the initial conditions $\Delta^{(1)}_1$, $\Delta^{(1)}_2$ (for $j = 1$) and $\Delta^{(2)}_1$, $\Delta^{(2)}_2$ (for $j = 2$), computed directly from the corresponding small matrices. Our main result in this section is as follows.

Theorem 1: The characteristic polynomials are
$$\Delta^{(1)}_m = \left(cY - b^2\right)^m\,\frac{Y\sin(m+1)\theta - c\sin m\theta}{\sin\theta}, \tag{17}$$
$$\Delta^{(2)}_m = \left(cY - b^2\right)^m\,\frac{\sin(m+1)\theta + \dfrac{b^2 - c^2}{cY - b^2}\,\sin m\theta}{\sin\theta}. \tag{18}$$

Proof 1: For $j = 1$, using the initial conditions to calculate the constants $C_1$ and $C_2$ and substituting back, we obtain a combination $E$ of $\sin m\theta$ and $\sin(m-1)\theta$. From (16) we have $\dfrac{Y^2 + c^2 - 2b^2}{cY - b^2} = 2\cos\theta$, and we use the trigonometric identity
$$2\cos\theta\,\sin m\theta - \sin(m-1)\theta = \sin(m+1)\theta. \tag{19}$$
Hence, from (19), the combination $E$ reduces to $Y\sin(m+1)\theta - c\sin m\theta$, and we get (17). For the $j = 2$ case, we follow similar steps and obtain a combination $F$ of $\sin m\theta$ and $\sin(m-1)\theta$.

Using (16) and (19), the combination $F$ reduces to $\sin(m+1)\theta + \dfrac{b^2 - c^2}{cY - b^2}\,\sin m\theta$, which gives (18).

2) Step 2: In this step we derive the characteristic polynomial of the matrix obtained from $A_n$ by keeping $d = b$ in the upper corner only; it is denoted $\Delta^{(j)}_n(bb, bd)$, $j = 1, 2$. The calculations differ from the first step only near the corner. We start by developing the characteristic polynomial of $A_n$ along the last column; the relationships that follow are the same as in the previous step. Finally, we get

[Eqs. (20)-(22): the relation expressing $\Delta^{(j)}_{m+1}(bb, bd)$ as $R\,\Delta^{(j)}_{m}(bb, bb) - S\,\Delta^{(j)}_{m-1}(bb, bb)$, together with the coefficients $R$ and $S$, which are built from $bY - cd$ and $(d - b)Y$.]

Using (16), the coefficient $R$ can be rewritten as $2(bY - cd)\cos\theta + (d - b)Y$, with $S = (bY - cd)^2$, and the trigonometric identity (19) then gives
$$\Delta^{(j)}_{m+1}(bb, bd) = K\,\Delta^{(j)}_{m+1} + L\,\Delta^{(j)}_{m}, \qquad j = 1, 2, \tag{23}$$
where

[Eq. (24): the coefficients $K$ and $L$, proportional to $bY - cd$ and $(d - b)Y$, respectively.]

Theorem 2: The characteristic polynomials of the matrices $A_{2m+1}(bb, bd)$ and $A_{2m}(bb, bd)$ are given by (25) and (26), respectively.

[Eqs. (25)-(26): closed-form expressions of $\Delta_{2m+1}(bb, bd)$ and $\Delta_{2m}(bb, bd)$ in terms of $Y\sin(m+2)\theta$, $c\sin(m+1)\theta$, and correction terms proportional to $d - b$.]

Proof 2: Substituting (17) and (18) into (23) and collecting the coefficients of the successive sine terms, we simplify these coefficients with the help of (16), (19), and the identity $\sin m\theta + \sin(m+2)\theta = 2\cos\theta\,\sin(m+1)\theta$; the common factor cancels and we obtain (25). The same computation for the even-order case yields (26).

Remark 2: In the subsequent sections we use, instead of (25), the equivalent expression

[Eq. (27): an equivalent form of (25) obtained by eliminating the term $c(d - b)\sin m\theta$ using trigonometric formulas.]

3) Step 3: In this step we are interested in the characteristic polynomial of the matrices with $b \neq d$ in the two corners, denoted $\Delta^{(j)}_n(bd, bd)$, $j = 1, 2$. Applying the same reasoning as in the previous steps, we obtain
$$\Delta^{(j)}_{m+1}(bd, bd) = K\,\Delta^{(j)}_{m+1}(bb, bd) + L\,\Delta^{(j)}_{m}(bb, bd), \qquad j = 1, 2, \tag{28}$$
where $K$ and $L$ are given by (24).

Theorem 3: The characteristic polynomials of the matrices $A_{2m+1}(bd, bd)$ and $A_{2m}(bd, bd)$ are given by (29) and (30), respectively.

[Eq. (29): closed-form expression of $\Delta_{2m+1}(bd, bd)$, a combination of $(Y + c)\sin(m+2)\theta$ and $\sin(m+1)\theta$ divided by $\sin\theta$, valid under the assumption $d - b = c$.]

[Eq. (30): closed-form expression of $\Delta_{2m}(bd, bd)$, a combination of $\sin(m+2)\theta$, $\sin(m+1)\theta$, $\sin m\theta$ and $\sin(m-1)\theta$ whose coefficients depend on $b$, $c$, and $d - b$.]

Proof 3: When $n$ is odd, we substitute the expression of $\Delta^{(2)}_{m}(bb, bd)$ into (28), which produces a combination of $\sin(m+1)\theta$ and $\sin m\theta$ with coefficients $D^{(1)}_{m+1}$, $D^{(2)}_{m+1}$ and $D^{(2)}_{m}$. Assuming $d - b = c$, these coefficients simplify considerably; using the identity $\sin m\theta + \sin(m+2)\theta = 2\cos\theta\,\sin(m+1)\theta$ together with (16), applied twice, the whole expression collapses to the form stated in (29). When $n$ is even, we substitute in (28) the quantities $\Delta^{(1)}_{m}(bb, bd)$ and $\Delta^{(2)}_{m}(bb, bd)$ by their expressions given by (26) and (27), and we obtain a combination of $\sin(m+2)\theta$, $\sin(m+1)\theta$, $\sin m\theta$ and $\sin(m-1)\theta$. In order to simplify it, we introduce an auxiliary quantity $\Psi(Y)$, expand $R$ and $S$ by their expressions from Step 2, and apply the trigonometric identity (19) repeatedly; a further application of (16) removes the remaining $\theta$-dependent factors, and we obtain (30). This ends the proof of Theorem 3.

B. General Case

The calculations when $\alpha$ and $\beta$ are arbitrary become very complicated, and the expressions of the characteristic polynomials are very long. In this section we deal with the case $\alpha = \beta = b$.

Theorem 4: The eigenvalues of the matrices are the pairs $\lambda_{ik} = e - Y_{ik}$, $i = 1, 2$. When $n$ is odd,
$$Y_0 = -(2b + c), \qquad Y_{ik}^2 - 2\left(cY_{ik} - b^2\right)\cos\frac{(2k+1)\pi}{2m+1} + c^2 - 2b^2 = 0, \quad k = 0, 1, \ldots, m-1, \tag{31}$$
and when $n$ is even,
$$Y_0 = c, \quad Y_0' = -(2b + c), \qquad Y_{ik}^2 - 2\left(cY_{ik} - b^2\right)\cos\frac{k\pi}{m} + c^2 - 2b^2 = 0, \quad k = 1, \ldots, m-1. \tag{32}$$

Proof 4: When $n$ is odd and $\alpha \neq 0$, $\beta \neq 0$, expanding the determinant of the matrix $A_{2m+1}(\alpha, \beta; e, b, d, c) - \lambda I_{2m+1}$ in terms of its first and last columns, and using the linearity of the determinant with respect to its columns, results in
$$\Delta_{2m+1}(\alpha\beta; bd, bd) = \Delta_{2m+1}(bd, bd) - \alpha\,\Delta_{2m}(bb, bd) - \beta\,\Delta_{2m}(bd, bb) + \alpha\beta\,\Delta_{2m-1}(bb, bb).$$
Taking $\alpha = \beta = b$ and substituting the expressions of $\Delta_{2m+1}(bd, bd)$, $\Delta_{2m}(bb, bd)$ and $\Delta_{2m-1}(bb, bb)$ given by (29), (27) and (18), respectively, the resulting combination of $\sin(m+2)\theta$ and $\sin(m+1)\theta$ simplifies, with the help of (16) and the identity $\sin(m+2)\theta - \sin(m+1)\theta = 2\sin\frac{\theta}{2}\cos\frac{(2m+3)\theta}{2}$, to the form
$$\Delta_{2m+1}(\alpha\beta; bd, bd) = \left(Y + 2b + c\right)\left(cY - b^2\right)^m\,\frac{\cos\frac{(2m+1)\theta}{2}}{\cos\frac{\theta}{2}},$$
up to a nonvanishing factor. Its zeros are $Y = -(2b + c)$ and $\theta = \frac{(2k+1)\pi}{2m+1}$, $k = 0, \ldots, m-1$, which gives (31). When $n$ is even, we expand similarly:
$$\Delta_{2m}(\alpha\beta; bd, bd) = \Delta_{2m}(bd, bd) - \alpha\,\Delta_{2m-1}(bb, bd) - \beta\,\Delta_{2m-1}(bd, bb) + \alpha\beta\,\Delta_{2m-2}(bb, bb),$$
where the polynomials on the right-hand side are given by (30), (27) and (18). Substituting and collecting the coefficients $C^{(i)}$ of the successive sine terms, we simplify them step by step with the help of (16) and (19), and the coefficient of the lowest-order sine term vanishes.

Applying (16) to the remaining coefficients, the question becomes: under what conditions are the leading coefficients equal? Indeed, this is obtained if we suppose that $d - b = c$ and $\alpha = \beta = b$, which coincides with our gossip matrix, for which $e = (1-w)^2$, $d = w$, $b = w(1-w)$, and $c = w^2$. In this case, applying the identity $\sin(m+1)\theta + \sin(m+3)\theta = 2\cos\theta\,\sin(m+2)\theta$ together with (16), the characteristic polynomial takes the simplified form
$$\Delta_{2m}(\alpha\beta; bd, bd) = \left(Y - c\right)\left(Y + 2b + c\right)\left(cY - b^2\right)^{m-1}\,\frac{\sin m\theta}{\sin\theta},$$
up to a nonvanishing factor. Its zeros are $Y = c$, $Y = -(2b + c)$, and $\theta = \frac{k\pi}{m}$, $k = 1, \ldots, m-1$. This gives (32) and ends the proof of Theorem 4.

V. EXPLICIT FORMULAS FOR CONVERGENCE RATE

In this section we compute the explicit expressions of the convergence rate for one-dimensional lattice networks. We study the one-dimensional lattice networks for both the $n$ odd and $n$ even cases.

A. For n odd

Comparing the gossip matrix of Section III-B for $n$ odd with (10), we observe that $c = w^2$, $d = w$, $b = w(1-w)$, $e = (1-w)^2$, and $e + \alpha = e + \beta = 1 - w$, so that $\alpha = \beta = w - w^2 = b$ and $d - b = w^2 = c$; the conditions of Theorem 4 are therefore satisfied. Since $e = (1-w)^2$ and $Y = e - \lambda$, the exceptional root $Y_0 = -(2b + c)$ gives $\lambda_0 = 1$. The other eigenvalues are the roots of the algebraic equation
$$\lambda^2 - 2\left[(1-w)^2 - w^2\cos\tfrac{(2k+1)\pi}{n}\right]\lambda + (1 - 2w)^2 = 0, \qquad k = 0, 1, \ldots, \tfrac{n-3}{2},$$
which can be written as
$$\lambda^2 - 2\left[1 - 2w + 2w^2\sin^2\tfrac{(2k+1)\pi}{2n}\right]\lambda + (1 - 2w)^2 = 0.$$
The discriminant of the above equation is
$$\Delta_k = 16\,w^2\sin^2\tfrac{(2k+1)\pi}{2n}\left[w^2\sin^2\tfrac{(2k+1)\pi}{2n} - 2w + 1\right],$$
which gives the expressions of the eigenvalues
$$\lambda_k^{\pm} = 1 - 2w + 2w^2\sin^2\tfrac{(2k+1)\pi}{2n} \pm 2w\sin\tfrac{(2k+1)\pi}{2n}\sqrt{w^2\sin^2\tfrac{(2k+1)\pi}{2n} - 2w + 1}.$$
The largest eigenvalue is obtained when $\sin\frac{(2k+1)\pi}{2n} = 1$, i.e., formally $k = \frac{n-1}{2}$, which gives $\lambda_0 = 1$. Consequently, the second largest eigenvalue is obtained for $k = \frac{n-3}{2}$. Therefore, the convergence rate for $n$ odd is expressed as
$$\lambda^{+}_{\frac{n-3}{2}} = 1 - 2w + 2w^2\sin^2\tfrac{(n-2)\pi}{2n} + 2w\sin\tfrac{(n-2)\pi}{2n}\sqrt{w^2\sin^2\tfrac{(n-2)\pi}{2n} - 2w + 1},$$
which for the average periodic gossip algorithm ($w = 1/2$) reduces to
$$\lambda^{+}_{\frac{n-3}{2}} = \sin^2\tfrac{(n-2)\pi}{2n}.$$

B. For n even

By comparing the gossip matrix of Section III-B for $n$ even with (11), we obtain the same identifications $c = w^2$, $d = w$, $b = w(1-w)$, $e = (1-w)^2$, and $\alpha = \beta = w(1-w)$. The exceptional roots $Y_0 = c$ and $Y_0' = -(2b + c)$ give $\lambda_0 = (1-w)^2 - w^2 = 1 - 2w$ and $\lambda_0 = 1$, respectively. The other eigenvalues are the roots of the algebraic equation
$$\lambda^2 - 2\left[(1-w)^2 - w^2\cos\tfrac{2k\pi}{n}\right]\lambda + (1 - 2w)^2 = 0, \qquad k = 1, \ldots, \tfrac{n}{2} - 1,$$
which can be written as
$$\lambda^2 - 2\left[1 - 2w + 2w^2\sin^2\tfrac{k\pi}{n}\right]\lambda + (1 - 2w)^2 = 0.$$
The discriminant of the above equation is
$$\Delta_k = 16\,w^2\sin^2\tfrac{k\pi}{n}\left[w^2\sin^2\tfrac{k\pi}{n} - 2w + 1\right],$$
which gives the expressions of the eigenvalues
$$\lambda_k^{\pm} = 1 - 2w + 2w^2\sin^2\tfrac{k\pi}{n} \pm 2w\sin\tfrac{k\pi}{n}\sqrt{w^2\sin^2\tfrac{k\pi}{n} - 2w + 1}.$$
The largest eigenvalue is obtained when $\sin\frac{k\pi}{n} = 1$, i.e., formally $k = \frac{n}{2}$, which gives $\lambda_0 = 1$. Consequently, the second largest eigenvalue is obtained for $k = \frac{n}{2} - 1$. Therefore, the convergence rate for $n$ even is expressed as
$$\lambda^{+}_{\frac{n}{2}-1} = 1 - 2w + 2w^2\sin^2\tfrac{(n-2)\pi}{2n} + 2w\sin\tfrac{(n-2)\pi}{2n}\sqrt{w^2\sin^2\tfrac{(n-2)\pi}{2n} - 2w + 1},$$
which for the average periodic gossip algorithm ($w = 1/2$) reduces to
$$\lambda^{+}_{\frac{n}{2}-1} = \sin^2\tfrac{(n-2)\pi}{2n}.$$
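The closed-form expressions above can be checked numerically. The sketch below (our own verification harness, not code from the paper) builds the primitive gossip matrix of Section III, extracts the magnitude of its second largest eigenvalue, and, for the average case $w = 1/2$, compares it with $\sin^2\frac{(n-2)\pi}{2n}$ as reconstructed above.

```python
import numpy as np

def primitive_gossip_matrix(n, w):
    """Primitive gossip matrix of Section III for the path of n nodes."""
    def matching(pairs):
        S = np.eye(n)
        for i, j in pairs:
            S[i, i] = S[j, j] = 1.0 - w
            S[i, j] = S[j, i] = w
        return S
    M1 = [(i, i + 1) for i in range(0, n - 1, 2)]
    M2 = [(i, i + 1) for i in range(1, n - 1, 2)]
    return matching(M2) @ matching(M1)

def second_largest_eig(W):
    """Magnitude of the second largest eigenvalue of W."""
    return np.sort(np.abs(np.linalg.eigvals(W)))[-2]

for n in (5, 6, 25, 26):                      # both odd and even numbers of nodes
    lam2 = second_largest_eig(primitive_gossip_matrix(n, w=0.5))
    closed_form = np.sin((n - 2) * np.pi / (2 * n)) ** 2
    print(f"n={n:3d}  numerical |lambda_2| = {lam2:.6f}  closed form = {closed_form:.6f}")
```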

VI. NUMERICAL RESULTS

In this section we present the numerical results. Fig. 2 shows the convergence rate versus the number of nodes in one-dimensional lattice networks for the average periodic gossip algorithm ($w = 0.5$). We observe that the convergence rate reduces exponentially with the increase in the number of nodes. In every time step, nodes share information only with their direct neighbors to achieve the global average; thus, for a larger number of nodes, more time steps are required, leading to slower convergence. As shown in Table I, the optimal link weights vary with the number of nodes until $n = 6$, and the value settles at 0.9 for $n \geq 6$. Figs. 3 and 4 show the convergence rate versus the link weight for small-scale and large-scale networks, respectively. We observe that for large-scale lattice networks the optimal link weight turns out to be 0.9 (see Table II). Hence, we can conclude that for any reasonably sized network ($n \geq 6$), the link weight should be set to 0.9 to achieve faster convergence in one-dimensional lattice networks.

[Table I: Convergence rate of small-scale WSNs. Columns: Number of Nodes, Convergence Rate, Optimal Link Weight.]

[Table II: Convergence rate of large-scale WSNs. Columns: Number of Nodes, Convergence Rate, Optimal Link Weight.]

[Fig. 2: Convergence rate versus number of nodes for the average periodic gossip algorithm ($w = 0.5$).]

[Fig. 3: Convergence rate versus link weight for small-scale WSNs.]

[Fig. 4: Convergence rate versus link weight for large-scale WSNs.]

We measure the efficiency of periodic gossip algorithms at $w = 0.9$ over the average periodic gossip algorithm ($w = 0.5$) by using the relative error $E = (R_{0.9} - R_{0.5})/R_{0.9}$, where $R_{0.9}$ and $R_{0.5}$ denote the convergence rates for $w = 0.9$ and $w = 0.5$, respectively. Fig. 5 shows the relative error versus the number of nodes for small-scale networks; here we observe that the relative error increases with the number of nodes. Fig. 6 shows the relative error versus the number of nodes for large-scale networks; in this case, the relative error remains approximately constant (0.89) with any further increase in the number of nodes.

[Fig. 5: Relative error versus number of nodes for small-scale WSNs.]

[Fig. 6: Relative error versus number of nodes for large-scale WSNs.]
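The optimal-link-weight study of this section can be reproduced numerically with a simple sweep. The sketch below (illustrative code of ours, reusing the same gossip-matrix construction as before) evaluates the second largest eigenvalue magnitude over a grid of link weights and reports the minimizer for several network sizes; the grid resolution of 0.01 is an arbitrary choice.

```python
import numpy as np

def primitive_gossip_matrix(n, w):
    """Primitive gossip matrix of Section III for the path of n nodes."""
    def matching(pairs):
        S = np.eye(n)
        for i, j in pairs:
            S[i, i] = S[j, j] = 1.0 - w
            S[i, j] = S[j, i] = w
        return S
    M1 = [(i, i + 1) for i in range(0, n - 1, 2)]
    M2 = [(i, i + 1) for i in range(1, n - 1, 2)]
    return matching(M2) @ matching(M1)

def lambda2(n, w):
    """Magnitude of the second largest eigenvalue, which governs convergence."""
    return np.sort(np.abs(np.linalg.eigvals(primitive_gossip_matrix(n, w))))[-2]

weights = np.round(np.arange(0.01, 1.00, 0.01), 2)
for n in (6, 16, 64, 128):
    vals = np.array([lambda2(n, w) for w in weights])
    k = int(np.argmin(vals))
    print(f"n={n:4d}  best w = {weights[k]:.2f}  |lambda_2| = {vals[k]:.4f}")
```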

VII. CONCLUSIONS

Estimating the convergence rate of a periodic gossip algorithm is computationally challenging in large-scale networks. This paper derived explicit formulas of the convergence rate for one-dimensional lattice networks. Our work drastically reduces the computational complexity of estimating the convergence rate for large-scale WSNs. We also derived explicit expressions for the convergence rate in terms of the link weight and the number of nodes using the linear weight updating approach. Based on our findings, we observed that there exists an optimum link weight which significantly improves the convergence rate of periodic gossip algorithms in small-scale WSNs ($n < 6$). Our numerical results demonstrate that periodic gossip algorithms achieve faster convergence for large-scale networks ($n \geq 6$) at $w = 0.9$ than the average periodic gossip algorithm. Furthermore, our formulation also helped to derive the eigenvalues of perturbed pentadiagonal matrices. To the best of our knowledge, this is the first paper to derive explicit formulas for the eigenvalues of pentadiagonal matrices.

ACKNOWLEDGMENT

The author S. Kouachi thanks the KSA superior education ministry for the financial support.

REFERENCES

S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, "Gossip algorithms: Design, analysis and applications," in Proc. IEEE INFOCOM 2005, 24th Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3, IEEE, 2005.
J. Liu, S. Mou, A. S. Morse, B. D. Anderson, and C. Yu, "Deterministic gossiping," Proceedings of the IEEE, vol. 99, no. 9.
A. Falsone, K. Margellos, S. Garatti, and M. Prandini, "Finite time distributed averaging over gossip-constrained ring networks," IEEE Transactions on Control of Network Systems, 2017.
S. Mou, A. S. Morse, and B. D. Anderson, "Convergence rate of optimal periodic gossiping on ring graphs," in Proc. IEEE Conference on Decision and Control (CDC), 2015.
B. D. Anderson, C. Yu, and A. S. Morse, "Convergence of periodic gossiping algorithms," in Perspectives in Mathematical System Theory, Control, and Signal Processing, Springer, 2010.
F. He, A. S. Morse, J. Liu, and S. Mou, "Periodic gossiping," IFAC Proceedings Volumes.
E. Zanaj, M. Baldi, and F. Chiaraluce, "Efficiency of the gossip algorithm for wireless sensor networks," in Proc. 15th International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2007), IEEE, 2007.
A. G. Dimakis, S. Kar, J. M. Moura, M. G. Rabbat, and A. Scaglione, "Gossip algorithms for distributed signal processing," Proceedings of the IEEE, vol. 98.
G. Barrenetxea, B. Beferull-Lozano, and M. Vetterli, "Lattice networks: capacity limits, optimal routing, and queueing behavior," IEEE/ACM Transactions on Networking, no. 3.
A. El-Hoiydi and J.-D. Decotignie, "WiseMAC: An ultra low power MAC protocol for multi-hop wireless sensor networks," in International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, Springer.
P. G. Spirakis, "Algorithmic and foundational aspects of sensor systems," in International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, Springer.
J. Li, C. Blake, D. S. De Couto, H. I. Lee, and R. Morris, "Capacity of ad hoc wireless networks," in Proc. 7th Annual International Conference on Mobile Computing and Networking, ACM, 2001.
R. Hekmat and P. Van Mieghem, "Interference in wireless multi-hop ad-hoc networks and its effect on network capacity," Wireless Networks, vol. 10.
O. Dousse, F. Baccelli, and P. Thiran, "Impact of interferences on connectivity in ad hoc networks," IEEE/ACM Transactions on Networking, vol. 13.
G. J. Pottie and W. J. Kaiser, "Wireless integrated network sensors," Communications of the ACM, vol. 43, no. 5.
R. P. Agarwal, Difference Equations and Inequalities: Theory, Methods, and Applications. Marcel Dekker.
S. Serra-Capizzano, "Generalized locally Toeplitz sequences: spectral analysis and applications to discretized partial differential equations," Linear Algebra and its Applications, vol. 366.
A. D. A. Hadj and M. Elouafi, "On the characteristic polynomial, eigenvectors and determinant of a pentadiagonal matrix," Applied Mathematics and Computation, vol. 198.
D. K. Salkuyeh, "Comments on 'A note on a three-term recurrence for a tridiagonal matrix'," Applied Mathematics and Computation, vol. 176.
E. Kilic, "On a constant-diagonals matrix," Applied Mathematics and Computation.
O. Mangoubi, S. Mou, J. Liu, and A. S. Morse, "On periodic gossiping with an optimal weight."


More information

Math 308 Practice Final Exam Page and vector y =

Math 308 Practice Final Exam Page and vector y = Math 308 Practice Final Exam Page Problem : Solving a linear equation 2 0 2 5 Given matrix A = 3 7 0 0 and vector y = 8. 4 0 0 9 (a) Solve Ax = y (if the equation is consistent) and write the general solution

More information

A Graph-Theoretic Characterization of Controllability for Multi-agent Systems

A Graph-Theoretic Characterization of Controllability for Multi-agent Systems A Graph-Theoretic Characterization of Controllability for Multi-agent Systems Meng Ji and Magnus Egerstedt Abstract In this paper we continue our pursuit of conditions that render a multi-agent networked

More information

Optimum Power Allocation in Fading MIMO Multiple Access Channels with Partial CSI at the Transmitters

Optimum Power Allocation in Fading MIMO Multiple Access Channels with Partial CSI at the Transmitters Optimum Power Allocation in Fading MIMO Multiple Access Channels with Partial CSI at the Transmitters Alkan Soysal Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland,

More information

Queue Length Stability in Trees under Slowly Convergent Traffic using Sequential Maximal Scheduling

Queue Length Stability in Trees under Slowly Convergent Traffic using Sequential Maximal Scheduling 1 Queue Length Stability in Trees under Slowly Convergent Traffic using Sequential Maximal Scheduling Saswati Sarkar and Koushik Kar Abstract In this paper, we consider queue-length stability in wireless

More information

Links Failure Analysis of Averaging Executed by. Protocol Push sum

Links Failure Analysis of Averaging Executed by. Protocol Push sum Contemporary Engineering Sciences, Vol. 9, 1, no., 7-3 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.19/ces.1.113 Links Failure Analysis of Averaging Executed by Protocol Push sum Martin Kenyeres Dept.

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:???

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:??? MIT 6.972 Algebraic techniques and semidefinite optimization May 9, 2006 Lecture 2 Lecturer: Pablo A. Parrilo Scribe:??? In this lecture we study techniques to exploit the symmetry that can be present

More information

An Optimization-based Approach to Decentralized Assignability

An Optimization-based Approach to Decentralized Assignability 2016 American Control Conference (ACC) Boston Marriott Copley Place July 6-8, 2016 Boston, MA, USA An Optimization-based Approach to Decentralized Assignability Alborz Alavian and Michael Rotkowitz Abstract

More information

arxiv: v2 [eess.sp] 20 Nov 2017

arxiv: v2 [eess.sp] 20 Nov 2017 Distributed Change Detection Based on Average Consensus Qinghua Liu and Yao Xie November, 2017 arxiv:1710.10378v2 [eess.sp] 20 Nov 2017 Abstract Distributed change-point detection has been a fundamental

More information

arxiv: v2 [cs.dc] 2 May 2018

arxiv: v2 [cs.dc] 2 May 2018 On the Convergence Rate of Average Consensus and Distributed Optimization over Unreliable Networks Lili Su EECS, MI arxiv:1606.08904v2 [cs.dc] 2 May 2018 Abstract. We consider the problems of reaching

More information

Optimization and Analysis of Distributed Averaging with Short Node Memory

Optimization and Analysis of Distributed Averaging with Short Node Memory Optimization and Analysis of Distributed Averaging with Short Node Memory 1 arxiv:0903.3537v4 [cs.dc] 5 Feb 2010 Boris N. Oreshkin, Mark J. Coates and Michael G. Rabbat Telecommunications and Signal Processing

More information

Matrices and Determinants

Matrices and Determinants Chapter1 Matrices and Determinants 11 INTRODUCTION Matrix means an arrangement or array Matrices (plural of matrix) were introduced by Cayley in 1860 A matrix A is rectangular array of m n numbers (or

More information

NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING

NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING C. Pozrikidis University of California, San Diego New York Oxford OXFORD UNIVERSITY PRESS 1998 CONTENTS Preface ix Pseudocode Language Commands xi 1 Numerical

More information

Fundamentals of Matrices

Fundamentals of Matrices Maschinelles Lernen II Fundamentals of Matrices Christoph Sawade/Niels Landwehr/Blaine Nelson Tobias Scheffer Matrix Examples Recap: Data Linear Model: f i x = w i T x Let X = x x n be the data matrix

More information

1 Positive definiteness and semidefiniteness

1 Positive definiteness and semidefiniteness Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

Cayley-Hamilton Theorem

Cayley-Hamilton Theorem Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information