Clock skew and offset estimation from relative measurements in mobile networks with Markovian switching topology [Technical Report]


Chenda Liao, Prabir Barooah

Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611, USA

Abstract

We propose a distributed algorithm for estimation of the clock skews and offsets of the nodes of a mobile network. The problem is cast as estimation from noisy relative measurements. The time variation of the network is modeled as a Markov chain. The estimates are shown to be mean square convergent under fairly weak assumptions on the Markov chain, as long as the union of the graphs is connected. Expressions for the asymptotic mean and correlation are also provided. The Markovian switching topology model of mobile networks is justified for certain node mobility models through empirically estimated conditional entropy measures.

Keywords: sensor networks, mobile networks, time synchronization, distributed estimation

(This work has been supported by the National Science Foundation under Grants CNS and ECCS. Author addresses: {cdliao,pbarooah}@ufl.edu.)

Preprint submitted to Elsevier, October 1, 2011

1. Introduction

We consider the problem of estimation of variables in a network of mobile nodes in which pairs of communicating nodes can obtain noisy measurements of the difference between the variables associated with them. Specifically, suppose the u-th node of a network has an associated node variable $x_u \in \mathbb{R}$.

If nodes u and v are neighbors at discrete time index k, then they can obtain a measurement $\zeta_{u,v}(k)$, where

$$\zeta_{u,v}(k) = x_u - x_v + \epsilon_{u,v}(k). \tag{1}$$

The problem is for each node to estimate its node variable from the relative measurements it collects over time, without requiring any centralized information processing or coordination. We assume that at least one node knows its variable; otherwise the problem is indeterminate up to a constant. A node that knows its node variable is called a reference node. All nodes are allowed to be mobile, so their neighbors may change with time.

The problem of time synchronization (also called clock synchronization) through clock skew and offset estimation falls into this category, and provides the main motivation for this study. The relationship between the local clock time $\tau_u(t)$ of node u and the global time t is usually modeled as

$$\tau_u(t) = \alpha_u t + \beta_u, \tag{2}$$

where the scalars $\alpha_u, \beta_u$ are called its skew and offset, respectively. A node can determine the global time t from its local clock time by using the relationship $t = (\tau_u(t) - \beta_u)/\alpha_u$, as long as it knows the skew and offset of its local clock. Hence the problem of clock synchronization in a network can alternatively be posed as the problem of nodes estimating their skews and offsets.

It is not possible for a node to measure its skew and offset directly. However, it is possible for a pair of neighbors to measure the difference between their offsets and between the logarithms of their skews by exchanging a number of time-stamped messages. Existing protocols that perform so-called pairwise synchronization, such as [1, 2, 3, 4], can be used to obtain such relative measurements. The details will be described in Section 2. The problem of clock offset and skew estimation can therefore be cast as a special case of the estimation from relative measurements described above. If an algorithm is available to solve the scalar node variable estimation problem, nodes can execute two copies of this algorithm to estimate both skew and offset. Therefore we only consider the scalar case.

In the context of time synchronization, the existence of a reference node means that at least one node has access to the global time t. This is the case when at least one node is equipped with a GPS receiver, or when one node is elected as a root so that its local clock time is considered the global time to which everyone must synchronize.

Time synchronization in ad-hoc networks, especially in wireless sensor networks, has been a topic of intense study in recent years. The utility of data collected and transmitted by sensor nodes depends directly on the accuracy of the time-stamps.
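To make the clock model concrete, here is a minimal sketch (Python; the skew and offset values are illustrative, not taken from the paper) of the affine clock (2) and of how a node that knows its skew and offset recovers the global time.

```python
def local_time(t, alpha, beta):
    """Affine clock model (2): local reading at global time t."""
    return alpha * t + beta

def to_global_time(tau, alpha, beta):
    """Invert (2): a node that knows its skew and offset recovers t."""
    return (tau - beta) / alpha

alpha_u, beta_u = 1.0003, 0.25   # illustrative skew and offset of node u
t = 100.0                        # global time, in seconds
assert abs(to_global_time(local_time(t, alpha_u, beta_u), alpha_u, beta_u) - t) < 1e-9
```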

In TDMA-based communication schemes, accurate time synchronization is required for the sensors to communicate with other sensors. Operation on a pre-scheduled sleep-wake cycle for energy conservation and lifetime maximization also requires accurate knowledge of the global time. We refer the interested reader to the review papers [5, 6] and to the more recent works [2, 7, 8, 9] for more details on time synchronization.

Most of the time synchronization protocols developed for sensor networks are primarily targeted to networks of static, or quasi-static, nodes. A common approach is to first elect a root node and construct a spanning tree of the network with the root node being the level 0 node. Every node thereafter synchronizes itself to a node of lower level (higher up in the hierarchy). This synchronization is done through message passing between the pair of nodes to estimate their relative skew and/or offset. Precise definitions of relative skew and relative offset will be given in Section 2. Examples of such spanning-tree based protocols include the widely used Timing-Sync Protocol for Sensor Networks (TPSN) [10] and the Flooding Time Synchronization Protocol (FTSP) [11]. A change in the network topology due to node mobility or node failure requires recomputing the spanning tree and sometimes even re-election of the root node. This adds considerable communication overhead. The situation gets worse if nodes move rapidly.

A number of papers have proposed fully distributed algorithms that do not rely on establishing a hierarchy among the nodes. Distributed protocols for estimation of clock skews and/or offsets from measurements of relative skews and offsets between pairs of nodes have been proposed in [12, 13, 14, 9]. These algorithms were also targeted to networks of static nodes. The analysis of their correctness and performance, including robustness to node and link failure, reported in [13, 15, 16, 9], has also been limited to static networks.

In this paper we focus on distributed estimation of skews and offsets in mobile networks, in which the network changes with time due to the motion of the nodes. With some changes, the algorithms proposed in [12, 13, 14, 9] can be adapted to mobile networks. However, little is known about how such an algorithm will perform in a mobile network. We propose such a modification and analyze the convergence of the algorithm. The algorithm we propose can be used by the nodes of a mobile network to estimate their clock skews and offsets and thus perform time synchronization. The network topology is continually changing due to the motion of the nodes, as well as random communication failures. We model the resulting time-varying topology of the network as the state of a Markov chain. Techniques for the analysis of jump linear systems from [17] are used to study convergence of the algorithm. We show that under fairly weak assumptions on the Markov chain, the proposed algorithm is mean square convergent if and only if the union of the graphs that occur is connected.

Mean square convergence means the expected value and the variance of the estimates obtained by each node converge to fixed values that do not depend on the initial conditions. When the relative measurements are unbiased, the limiting mean is the same as the true value of the variable, meaning the estimates obtained are asymptotically unbiased. Formulas for the limiting mean and variance are obtained by utilizing results from jump linear systems.

Another contribution of the paper is to provide justification for the Markovian switching topology model for mobile networks. The Markovian switching model has been used extensively in studying consensus protocols in networks with dynamic topologies [18, 19, 20, 21, 22]. For a network of static nodes with link drops, the Markovian switching model arises naturally from a Markovian link drop model. In mobile networks, though, the only case where we can prove that a mobile network evolves according to a Markov chain is when nodes move according to the so-called random walk mobility model [23]. Although the Markovian switching assumption facilitates analysis, this assumption requires justification. We use a technique from [24] to check if the graph switching with the so-called Random Waypoint (RWP) mobility model is Markovian. The RWP model is one of the most widely used mobility models for ad-hoc mobile networks [23]. We show that the resulting graph switching process can indeed be approximated well by a (first order) Markovian switching model.

The error dynamics of the estimation algorithm turns out to be a consensus-type algorithm with leader(s), where the leader states - corresponding to the errors of the reference nodes - are always 0. In Remark 1 in Section 4, we comment on the similarities and differences between the scenario examined here and prior work on consensus with Markovian switching topology in [21, 22].

The rest of the paper is organized as follows. Section 2 describes the connection between the problem of estimation from relative measurements and the problem of skew/offset estimation. Section 3 states the problem precisely, describes the proposed algorithm, and states the main result (Theorem 1). It also discusses the relevance of the Markovian switching topology model. Section 4 is devoted to the proof of the theorem. Simulation studies are presented in Section 5.

2. Relative measurements of skew and offset

We describe here how skew and offset estimation can be posed in the form of the general problem of estimation from relative measurements. A number of methods are available in the literature that allow pairwise synchronization between a node pair from time-stamped messages [1, 3, 25].

Specifically, in pairwise synchronization node u estimates the parameters $\alpha_{u,v}$ and $\beta_{u,v}$ that allow it to determine its local time in terms of the local time at node v at the same instant, where

$$\tau_u(t) = \alpha_{u,v} \tau_v(t) + \beta_{u,v}. \tag{3}$$

The parameters $\alpha_{u,v}$ and $\beta_{u,v}$ are referred to as the skew and offset of node u with respect to node v [4], and also as the relative skew and relative offset between nodes u and v [2]. In a mobile network, nodes u and v can perform pairwise synchronization if they are in each other's communication range long enough to exchange a number of time-stamped messages. The minimum number of messages that have to be exchanged is 4 in the protocol proposed in [1]. With more packets exchanged, the accuracy of the estimates improves.

To understand the relationship between $\alpha_{u,v}, \beta_{u,v}$ and $\alpha_u, \beta_u, \alpha_v, \beta_v$, we express the local time $\tau_u(t)$ of node u at global time t in terms of the local time $\tau_v(t)$ at node v at the same time t by using (2):

$$\tau_u(t) = \alpha_u \Big( \frac{\tau_v(t) - \beta_v}{\alpha_v} \Big) + \beta_u = \frac{\alpha_u}{\alpha_v} \tau_v(t) + \beta_u - \frac{\alpha_u}{\alpha_v} \beta_v.$$

From this and (3) we obtain the relationship

$$\alpha_{u,v} := \frac{\alpha_u}{\alpha_v}, \qquad \beta_{u,v} := \beta_u - \frac{\alpha_u}{\alpha_v} \beta_v. \tag{4}$$

Suppose a node u obtains noisy estimates $\hat{\alpha}_{u,v}, \hat{\beta}_{u,v}$ of the parameters $\alpha_{u,v}, \beta_{u,v}$ by using a pairwise synchronization protocol. We model the noisy estimate of $\alpha_{u,v}$ as

$$\hat{\alpha}_{u,v} = \alpha_{u,v} \exp(\epsilon^s_{u,v}), \tag{5}$$

where $\epsilon^s_{u,v}$ is a random variable that captures the error between the estimate and the true value of $\alpha_{u,v}$. If the estimation error is small, then $\epsilon^s_{u,v}$ is close to 0. Taking the logarithm of both sides of (5) and using (4), we obtain

$$\log \hat{\alpha}_{u,v} = \log \alpha_{u,v} + \epsilon^s_{u,v} = \log \alpha_u - \log \alpha_v + \epsilon^s_{u,v}. \tag{6}$$

With the definitions $\zeta^s_{u,v} := \log \hat{\alpha}_{u,v}$ and $x^s_u := \log \alpha_u$, we see that (6) can be rewritten as

$$\zeta^s_{u,v} = x^s_u - x^s_v + \epsilon^s_{u,v}, \tag{7}$$

which makes $\zeta^s_{u,v}$ a noisy relative measurement of the node variables $x^s_u$ and $x^s_v$; cf. (1).
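The change of variables above amounts to a one-line computation at each node. The sketch below (Python, hypothetical function names) forms the relative measurement (7) from a pairwise skew estimate; the offset estimate, treated next in (8)-(9), is used directly as a measurement, and the final inversion back to skew and offset is included for completeness.

```python
import math

def skew_measurement(alpha_hat_uv):
    """zeta^s_{u,v} = log(alpha_hat_uv), the relative measurement (7)."""
    return math.log(alpha_hat_uv)

def offset_measurement(beta_hat_uv):
    """zeta^o_{u,v} = beta_hat_uv; see (8)-(9) below for the derivation."""
    return beta_hat_uv

def skew_offset_from_estimates(x_s_hat_u, x_o_hat_u):
    """Invert the change of variables once the node variables are estimated."""
    return math.exp(x_s_hat_u), x_o_hat_u
```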

It is important to notice that $\zeta^s_{u,v}$ is a measured quantity, since $\hat{\alpha}_{u,v}$ is measured, while the variables $x^s_u, x^s_v$ are unknown. Similarly, the noisy estimate $\hat{\beta}_{u,v}$ of $\beta_{u,v}$ with random estimation error $e^o_{u,v}$ can be written as

$$\hat{\beta}_{u,v} = \beta_{u,v} + e^o_{u,v} = \beta_u - \frac{\alpha_u}{\alpha_v} \beta_v + e^o_{u,v} = \beta_u - \beta_v + \epsilon^o_{u,v}, \tag{8}$$

where

$$\epsilon^o_{u,v} := \beta_v \Big(1 - \frac{\alpha_u}{\alpha_v}\Big) + e^o_{u,v}.$$

With the definitions $\zeta^o_{u,v} := \hat{\beta}_{u,v}$ and $x^o_u := \beta_u$, we see that (8) can be rewritten as

$$\zeta^o_{u,v} = x^o_u - x^o_v + \epsilon^o_{u,v}, \tag{9}$$

which again falls under the category of relative measurements of node variables, i.e., has the form (1). In this case the node variables are the clock offsets $\beta_u$'s. The noise $\epsilon^o_{u,v}$ in the offset measurement is in general biased, due to the term $\beta_v(1 - \alpha_u/\alpha_v)$, even if the estimate of the parameter $\beta_{u,v}$ is unbiased.

This discussion shows that the problem of estimating the skews and offsets of all the clocks in a network can be posed as an estimation from relative measurements problem, where the relative measurements are of the form (1). Once node u obtains estimates $\hat{x}^s_u$ and $\hat{x}^o_u$ of its node variables $x^s_u$ and $x^o_u$, estimates of its skew and offset can be obtained as $\hat{\alpha}_u := \exp(\hat{x}^s_u)$ and $\hat{\beta}_u := \hat{x}^o_u$.

3. Problem statement, proposed algorithm and results

Since both relative offset and skew ratio measurements are special cases of noisy relative measurements of scalar variables, we only consider the problem of estimating the scalar node variables $x_u$, $u = 1, \dots, n$, where n is the number of nodes in the network that do not know their node variables. We assume that there are $n_r$ additional nodes that know their node variables; $n_r$ has to be at least one to make the problem well-posed. The total number of sensor nodes in the network is therefore $n_{total} = n + n_r$. These define a node set $\mathbf{V} = \{1, \dots, n_{total}\}$. Time is measured by a discrete time index $k = 0, 1, \dots$.

The mobile nodes define a time-varying undirected measurement graph $\mathbf{G}(k) = (\mathbf{V}, \mathbf{E}(k))$, where $(u, v) \in \mathbf{E}(k)$ if and only if u and v can obtain a relative measurement of the form (1) during the time interval between the time indices k and k+1. Specifically, for each $(u, v) \in \mathbf{E}(k)$, there is a measurement $\zeta_{u,v}(k) = x_u - x_v + \epsilon_{u,v}(k)$ that is available to both u and v at time k. In practice, such a measurement is usually obtained by one of the two nodes, though the other node takes part in the process by returning time-stamped messages in response to messages sent by the first node. We assume that if u gets all the time stamps necessary to obtain a measurement of $x_u - x_v$ first (between u and v), it computes the measurement $\zeta_{u,v}$ and then sends this measurement to v, so that v also has access to the same measurement. We follow the convention that the relative measurement between u and v that is obtained by node u is always of $x_u - x_v$, while that used by v is always of $x_v - x_u$. Since the same measurement is shared by a pair of neighboring nodes, if v receives the measurement $\zeta_{u,v}$ from u, then it converts it to $\zeta_{v,u}$ by assigning $\zeta_{v,u}(k) := -\zeta_{u,v}(k)$. The neighbors of u at k, denoted by $N_u(k)$, is the set of nodes that u has an edge with in the measurement graph $\mathbf{G}(k)$. Since multiple packet transmissions have to occur to obtain a measurement, if $(u, v) \in \mathbf{E}(k)$, then u and v can also communicate and exchange information at time k. Therefore, if one prefers to think of a communication graph, it is the same as the measurement graph.

The problem is to estimate the node variables $x_u$ for $u = 1, \dots, n$ by using the relative measurements $\zeta_{u,v}(k)$, $(u, v) \in \mathbf{E}(k)$, that become available over time $k = 0, 1, \dots$. In addition, the algorithm has to be distributed in the sense that each node has to estimate its own variables, and at every time k, a node u can only exchange information with its neighbors $N_u(k)$.

3.1. Distributed estimation from relative measurements

In the proposed algorithm, each node u maintains in its local memory an estimate $\hat{x}_u(k) \in \mathbb{R}$ of its node variable $x_u$. Every node - except the reference nodes - iteratively updates its estimate as we will describe now. The estimates can be initialized to arbitrary values. In executing the algorithm at iteration k, node u communicates with its current neighbors to obtain their current estimates $\hat{x}_v(k)$, $v \in N_u(k)$. It also collects the measurements $\zeta_{u,v}(k)$ for each $v \in N_u(k)$ during this time. Since obtaining measurements requires exchanging time-stamped messages, the current estimates can easily be exchanged during the process of obtaining new measurements. Node u then updates its estimate according to

$$\hat{x}_u(k+1) = \frac{w_{uu}(k)\hat{x}_u(k) + \sum_{v \in N_u(k)} w_{vu}(k)\big(\hat{x}_v(k) + \zeta_{u,v}(k)\big)}{w_{uu}(k) + \sum_{v \in N_u(k)} w_{vu}(k)}, \tag{10}$$

where the weights $w_{vu}(k)$ are arbitrary positive numbers.
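A minimal sketch of one iteration of (10) at a single node follows (Python; the function and variable names are hypothetical, and the node is assumed to have already gathered its neighbors' estimates and the corresponding measurements). With an empty neighbor list the update returns the previous estimate unchanged, which is the robustness property noted in the text.

```python
def update_estimate(x_hat_u, neighbor_data, w_uu=1.0):
    """One step of the update law (10) at node u.

    neighbor_data: list of (x_hat_v, zeta_uv, w_vu) triples, one per
    current neighbor v in N_u(k); may be empty.
    """
    num = w_uu * x_hat_u
    den = w_uu
    for x_hat_v, zeta_uv, w_vu in neighbor_data:
        # Each term x_hat_v + zeta_uv is itself a noisy estimate of x_u,
        # see (11); w_vu weighs it by the trust placed in that measurement.
        num += w_vu * (x_hat_v + zeta_uv)
        den += w_vu
    return num / den  # equals x_hat_u when neighbor_data is empty
```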

Nodes continue this iterative update indefinitely, unless they see little change in their local estimates, at which point they can stop updating. If u is a reference node, then $\hat{x}_u(k) = x_u$ for all k. The update law is well-defined even at times when u has no neighbors.

The rationale behind the update law (10) is that

$$\hat{x}_v(k) + \zeta_{u,v}(k) = x_u + (\hat{x}_v(k) - x_v) + \epsilon_{u,v}(k), \tag{11}$$

so that each term inside the summation on the right hand side of (10) is an estimate of $x_u$, with $(\hat{x}_v(k) - x_v) + \epsilon_{u,v}(k)$ being the estimation error. The right hand side of (10) is therefore a weighted average of multiple noisy estimates of $x_u$. If the measurement noise $\epsilon_{u,v}(k)$ is zero-mean and the initial estimates are unbiased, i.e., $E[\hat{x}_u(0)] = x_u$ for each u, then the right hand side of (11) is an unbiased estimate of $x_u$ for all k.

Each node u is allowed to vary its local weights $w_{u,v}(k)$ with time and to use distinct weights for distinct neighbors, to account for heterogeneity in measurement quality. Between two neighbors p, q of u at time k, the relative measurement $\zeta_{u,p}(k)$ between u and p may have lower measurement error than the relative measurement $\zeta_{u,q}(k)$ between u and q. This occurs, for example, if u and p were able to exchange more time-stamped messages than u and q before computing the relative measurements [3, 4]. In that case, node u should choose its local weights at k so that $w_{u,p}(k) > w_{u,q}(k)$. Due to the denominator in (10), it is only the ratios among the weights that matter, not their absolute values. The update law (10) has similarities to the update laws in [9, 12, 13, 14] that were proposed for static networks. The differences are the robustness of the update law to the case when there are no neighbors, and the inclusion of the time-varying and heterogeneous weights.

3.2. Asynchronous implementation

The description so far is in terms of a common global iteration index k. In practice, nodes do not have access to such a global index. Instead, each node u keeps a local iteration index $k_u$. After every increment of the local index, the node tries to collect a new set of relative measurements with respect to one or more of its neighbors within a pre-specified time interval. At the end of the time interval, whether it is able to get new measurements or not, it updates its estimate according to the update law (10) - with k replaced by $k_u$ - and increments its local iteration counter. The process then repeats. It follows from (10) that if a node is unable to gather new measurements from any neighbors, then its updated estimate is precisely the previous estimate.

The global iteration index is useful to describe the algorithm from the point of view of an omniscient spectator. Let T be the time interval, say in seconds, between two successive increments of the global index k. The parameter T is arbitrary, as long as it is small enough that no node updates its local estimate more than once within the time interval T. In that case, only two events are possible for an arbitrary node u at the end of the time interval when the global counter is increased from k to k+1: (i) u increases its local index by one, or (ii) u does not increase its local index. If a node increases its local index, both the local and global indices increase by one. A node does not increase its local iteration index if it is not able to gather new measurements. In the omniscient spectator's view, the node's neighbor set is empty at this time index; so according to (10), the next estimate of the node's variable is the same as the previous one. Thus, a node's local asynchronous state update can be described in terms of the synchronous algorithm (10), the latter being more convenient for the purpose of exposition. In the remainder of the paper we consider only the synchronous version.

3.3. Convergence with Markovian switching

In this paper we model the sequence of measurement/communication graphs $\{\mathbf{G}(k)\}_{k=0}^{\infty}$ that appear as time progresses as the realization of a (first order) Markov chain, whose state space $\mathcal{G} = \{G_1, \dots, G_N\}$ is the set of graphs that can occur over time. In general, a stochastic process X(k) is called l-th order Markov if

$$P(X(k+1) \mid X(k), X(k-1), \dots) = P(X(k+1) \mid X(k), X(k-1), \dots, X(k-l+1)),$$

where $P(\cdot)$ denotes probability. The Markovian switching assumption on the graphs means that

$$P(\mathbf{G}(k+1) = G_i \mid \mathbf{G}(k) = G_j) = P(\mathbf{G}(k+1) = G_i \mid \mathbf{G}(k) = G_j, \mathbf{G}(k-1) = G_l, \dots, \mathbf{G}(0) = G_p),$$

where $G_i, G_j, G_l, \dots, G_p \in \mathcal{G}$. We assume that the Markov chain is homogeneous, and denote the transition probability matrix of the chain by P, with $p_{ij}$ being the (i, j)-th entry of P. Further discussion on Markov modeling of graphs is postponed to Section 3.4.

The main result of the paper - on the mean square convergence of the algorithm (10) - is stated below as a theorem. In the statement of the theorem, $e(k) := \hat{x}(k) - x$ is the estimation error, where $x := [x_1, \dots, x_n]^T$ and $\hat{x}(k) := [\hat{x}_1(k), \dots, \hat{x}_n(k)]^T$ are the vectors of all the node variables and their estimates. Moreover, $\mu(k) := E[e(k)]$ is the mean and $Q(k) := E[e(k)e(k)^T]$ is the correlation matrix of the estimation error vector, where $E[\cdot]$ denotes expectation. We say that a stochastic process y(k) is mean square convergent if $E[y(k)]$ and $E[y(k)y^T(k)]$ converge as $k \to \infty$ for every initial condition. The union graph $\hat{\mathbf{G}}$ is defined as follows:

$$\hat{\mathbf{G}} := \cup_{i=1}^{N} G_i = (\mathbf{V}, \cup_{i=1}^{N} \mathbf{E}_i). \tag{12}$$
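The switching model is easy to simulate. The sketch below (Python; the three-state chain and edge sets are illustrative, not taken from the paper) draws a realization of $\{\mathbf{G}(k)\}$ from a homogeneous transition matrix P.

```python
import random

P = [[0.8, 0.1, 0.1],       # rows sum to 1, and p_ii > 0 (cf. Theorem 1)
     [0.2, 0.7, 0.1],
     [0.1, 0.1, 0.8]]
graphs = [{(0, 1)}, {(1, 2)}, {(0, 2)}]   # edge sets of G_1, G_2, G_3

state = 0
realization = [graphs[state]]
for k in range(1000):
    state = random.choices(range(len(P)), weights=P[state])[0]
    realization.append(graphs[state])     # G(k+1) depends only on G(k)
```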

We assume that the measurement noise $\epsilon_{u,v}(k)$ affecting the measurements on the edge (u, v) is a wide-sense stationary process with mean $E[\epsilon_{u,v}(k)] = \mu_{u,v}$ and variance $\mathrm{Cov}(\epsilon_{u,v}(k), \epsilon_{u,v}(k)) = \sigma^2_{u,v}$. We also assume that the measurement noise sequence $\epsilon(k)$ and the initial condition $\hat{x}(0)$ are independent of the Markov chain that governs the time variation of the graphs. For technical reasons, we make the additional assumption that there exists a time $k_0$ after which the weight between two nodes does not change. That is, if i and j become neighbors at two distinct times $k_1$ and $k_2$ (both greater than $k_0$), then $w_{ij}(k_1) = w_{ij}(k_2)$ as well as $w_{ji}(k_1) = w_{ji}(k_2)$. There are various ways nodes can achieve this. One simple way is for every node to use a constant weight w for all its neighbors. Another possibility is for every node u to store in its local memory the weight $w_{u,v}(k_v)$ that it used for the neighbor v at $k_v$, where $k_v$ is the last time it encountered v. After a certain time, nodes stop changing the weights and reuse the weight used the previous time the same neighbor was encountered. In this case, a node has to store at most $n_{total}$ numbers, one for each potential neighbor. The choice of weights during the transient period (up to $k_0$) will affect the initial reduction of the estimation errors but will not change the asymptotic behavior.

Theorem 1. Assume that the temporal evolution of the communication graph $\mathbf{G}(k)$ is governed by an N-state homogeneous Markov chain that is ergodic, with $p_{ii} > 0$ for $i = 1, \dots, N$. The estimation error e(k) is mean square convergent if and only if $\hat{\mathbf{G}}$ is connected. If additionally all the measurements are unbiased, then the estimates are asymptotically unbiased: $\lim_{k \to \infty} \mu(k) = 0$.

The implication of the theorem is that as long as nodes are connected in a time-average sense, characterized by $\hat{\mathbf{G}}$ being connected, the nodes have estimates whose mean and variance converge as time progresses, irrespective of the initial conditions. Thus, after a sufficiently long time, the nodes can turn off the synchronization updates without much loss of accuracy. The assumption of ergodicity of the Markov chain ensures that there is a unique steady state distribution and that the steady state probability of each state is non-zero [17]. This means every graph in the state space of the chain occurs infinitely often. Since their union graph is connected, ergodicity implies that information from the reference node(s) will flow to each of the nodes over time. None of the graphs that ever occur is required to be a connected graph. The assumption $p_{ii} > 0$ means there is a non-zero probability that the graph stays the same from one iteration index to the next, which is equivalent to the following: if u and v are able to collect a relative measurement at k, they are able to do so again at k+1 with a non-zero probability. This can be assured if the nodes move slowly enough. Therefore the assumption $p_{ii} > 0$ is an assumption on the rapidity of node motion.
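The connectivity hypothesis of Theorem 1 concerns only the union graph, and is easy to verify given the state space of the chain. A union-find sketch follows (Python; function names and inputs are hypothetical).

```python
def union_graph_connected(n_total, edge_sets):
    """True iff the union of the given edge sets connects all n_total nodes."""
    parent = list(range(n_total))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for edges in edge_sets:                 # union over the whole state space
        for u, v in edges:
            parent[find(u)] = find(v)
    return len({find(u) for u in range(n_total)}) == 1

# With the edge sets from the previous sketch, the union
# {(0,1)} U {(1,2)} U {(0,2)} is connected:
# union_graph_connected(3, [{(0,1)}, {(1,2)}, {(0,2)}]) -> True
```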

Though the theorem states that the limiting mean and variance exist, it does not specify what they are. A formula for the limiting mean and variance is provided later in Lemma 3 in Section 4.

3.4. Markovian model of topology change

Here we examine the question of the applicability of the Markovian model of graph switching. An example in which the time variation of the graphs satisfies the homogeneous Markov model is a network of mobile agents whose motion is modeled with first order dynamics with range-determined communication. In the ad-hoc networks literature this is referred to as the random walk mobility model [23]. Specifically, suppose the position of node u at time k, denoted by $p_u(k)$, is restricted to lie on the unit sphere $S^2 = \{x \in \mathbb{R}^3 : \|x\| = 1\}$, and suppose the position evolution obeys

$$p_u(k+1) = f(p_u(k) + \Delta_u(k)),$$

where $\Delta_u(k)$ is a stationary zero-mean white noise sequence for every u, and $E[\Delta_u(k)\Delta_v(k)^T] = 0$ unless $u = v$. The function $f(\cdot) : \mathbb{R}^3 \to S^2$ is a projection onto the unit sphere. In addition, $(u, v) \in \mathbf{E}(k)$ if and only if the geodesic distance between them is less than or equal to some predetermined value. In this case, the graph $\mathbf{G}(k)$ is uniquely determined by the node positions at time k, and the prediction of $\mathbf{G}(k+1)$ given $\mathbf{G}(k)$ cannot be improved by the knowledge of the graphs observed prior to k: $\mathbf{G}(k-1), \dots, \mathbf{G}(0)$. Hence the evolution of the graph sequence satisfies the Markovian property. If in addition random communication failure leads to two nodes not being able to communicate even when they are in range, the Markovian property is retained if the communication failures are i.i.d.

Nevertheless, for some mobility models, it is not straightforward to check if the sequence of graphs generated by the model satisfies the Markovian property. A general method of checking Markovian switching of graphs is therefore needed.
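A minimal sketch of this random walk mobility model follows, assuming a Gaussian perturbation and an illustrative step size and communication range (neither is specified in the paper). The graph at each k is computed from positions alone, which is what yields the Markov property.

```python
import numpy as np

def step_positions(p, sigma=0.05):
    """One step p <- f(p + Delta): perturb and project back onto S^2.
    p is an (n, 3) array of unit vectors; sigma is illustrative."""
    q = p + sigma * np.random.randn(*p.shape)
    return q / np.linalg.norm(q, axis=1, keepdims=True)

def range_graph(p, geodesic_range=0.5):
    """Edge (u, v) iff the geodesic distance arccos(<p_u, p_v>) is small."""
    n = p.shape[0]
    dist = np.arccos(np.clip(p @ p.T, -1.0, 1.0))
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if dist[u, v] <= geodesic_range}

p = np.random.randn(5, 3)
p /= np.linalg.norm(p, axis=1, keepdims=True)   # random initial positions
edge_sequence = []
for k in range(100):
    p = step_positions(p)
    edge_sequence.append(range_graph(p))        # G(k) from positions alone
```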

We borrow a method proposed in [24] to check whether a stochastic process is Markov from observations of the process. We first introduce some standard notation from information theory. Let X be a discrete random variable with sample space $\Omega = \{1, \dots, N\}$ and probability mass function $p(x) = P(X = x)$, where $x \in \Omega$. The entropy of X is defined by

$$H(X) = -\sum_{x \in \Omega} p(x) \log p(x). \tag{13}$$

The definition of entropy is extended to a pair of random variables X, Y, where $X, Y \in \Omega$, as follows:

$$H(X, Y) := -\sum_{x,y \in \Omega} p(x, y) \log p(x, y). \tag{14}$$

The conditional entropy $H(Y \mid X)$ is defined as

$$H(Y \mid X) := -\sum_{x,y \in \Omega} p(x, y) \log p(y \mid x) = H(X, Y) - H(X). \tag{15}$$

The conditional entropy measures the conditional uncertainty about one event given another. Consider a stochastic process $\{X_1, X_2, \dots\}$. Assuming the process is stationary, we denote $H(\text{single}) := H(X_k)$ and $H(\text{double}) := H(X_k, X_{k+1})$, for all $k = 1, \dots$. It is straightforward to show that $H(\text{double}) = 2H(\text{single})$ if the successive random variables $X_k$ are i.i.d. In this case the random process is a zero-order Markov process. If the random variables $X_k$ are not independent, then $H(\text{single}) < H(\text{double}) < 2H(\text{single})$. To address the question of whether the process is second or higher order Markov, we extend the entropy definition to multivariate random variables, so that $H(\text{triple}) := H(X_k, X_{k+1}, X_{k+2})$, etc. Now the sequence $H_0 = \log(N)$, $H_1 = H(\text{single})$, $H_2 = H(\text{double}) - H(\text{single})$, $H_3 = H(\text{triple}) - H(\text{double})$, etc., measures the conditional uncertainty for each order of dependence. A graphical approach is given in [24] to determine the order of dependence of a random process by plotting the estimates of each $H_i$, $i = 1, 2, \dots$, and examining the shape of the curve. The estimate of each $H_i$ can be calculated from observations.

[Figure 1: The standard shapes of estimated conditional entropy for three different cases: (a) independence, (b) first-order dependence and (c) second-order dependence. If a process is a first order Markov chain, empirically computed conditional entropy will show a trend similar to the one in (b).]

Figure 1 shows the standard shapes for independence, first-order dependence, and second-order dependence. If the process is independent, then knowing the value of $X_k$ will not help in predicting $X_{k+1}$, which is seen in the flat shape of the entropy function in Figure 1(a). In contrast, the sharp drop from $\hat{H}_1$ to $\hat{H}_2$ in Figure 1(b) indicates that knowing the value of $X_{k-1}$ will dramatically decrease the uncertainty in the prediction of $X_k$, while the values of $X_{k-2}, X_{k-3}, \dots$ will not help much. This accords with the dependence property of a first-order Markov chain. Similarly, Figure 1(c) indicates that the previous two variables $X_{k-2}, X_{k-1}$ are both important for predicting $X_k$. In this case the process is better modeled as a second order Markov chain.
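The quantities $\hat{H}_i$ are straightforward to estimate from an observed sequence by plug-in estimation of the joint probability mass functions. A sketch follows (Python; the convention $0 \log 0 = 0$ is handled implicitly, since unobserved tuples contribute nothing to the sums).

```python
from collections import Counter
from math import log

def tuple_entropy(seq, width):
    """Plug-in estimate of H(X_k, ..., X_{k+width-1}) from one long sequence."""
    tuples = [tuple(seq[i:i + width]) for i in range(len(seq) - width + 1)]
    counts, total = Counter(tuples), len(seq) - width + 1
    return -sum((c / total) * log(c / total) for c in counts.values())

def conditional_entropies(seq, orders=3):
    """Return [H_1, H_2, ...] with H_1 = H(single) and, for i >= 2,
    H_i = H(i-tuple) - H((i-1)-tuple), as in the text."""
    H = [tuple_entropy(seq, w) for w in range(1, orders + 2)]
    return [H[0]] + [H[i] - H[i - 1] for i in range(1, len(H))]

# A sharp drop from H_1 to H_2 followed by a flat tail (Figure 1(b))
# supports a first-order Markov model for the observed sequence.
```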

In order to conclude whether the evolution of graphs is governed by a first-order Markov chain, we adopt the method discussed above as follows. For a particular mobility model, we conduct a simulation and collect observations of the graph sequence. Since the underlying sample space $\mathcal{G}$ of the stochastic process $\{\mathbf{G}(k)\}$ is finite, the method described above is applicable. We then use the approach above to check whether the plot of $\hat{H}_i$ estimated from the collected observations is closer to that in Figure 1(b) than to those in Figure 1(a) or Figure 1(c). If so, we declare that it is reasonable to model the graph switching process as Markovian.

As an illustrative example, we consider the widely used random waypoint (RWP) mobility model [23]. In the RWP model, each node is initialized to stay in its initial position for a certain period of time (the so-called pause time $t_p$). Then the node picks a random destination within the region it is allowed to move in, and a speed that is uniformly distributed in $[v_{min}, v_{max}]$. Once the node reaches the new destination, it pauses again for $t_p$ before starting over. We conduct a simulation of the RWP model with 3 nodes, where $v_{min}, v_{max}, t_p$ are chosen as 10 m/s, 50 m/s and 0.1 s. The nodes are allowed to move in a bounded region, and their positions are initialized randomly. The sample space consists of 8 graphs. By performing the simulation for a long time ($10^4$ s), we obtain a large number of observations of the process $\{\mathbf{G}(k)\}$. The probability mass function is empirically estimated from the observations. For estimating conditional entropies, certain conditional probabilities, especially those of the type $P(\mathbf{G}(k) = G_1 \mid \mathbf{G}(k-1) = G_2, \mathbf{G}(k-2) = G_3)$, are problematic, since the relevant events may not be observed even in a very long sequence of observations. In this case we set the corresponding probabilities to 0 and use the convention $0 \log 0 = 0$. The empirically estimated conditional entropies $\hat{H}_i$ are shown in Figure 2. Clearly, the shape of the curve is similar to that in Figure 1(b). Therefore, we conclude that the graph switching process under RWP mobility can be reasonably modeled as a (first order) Markov chain.

[Figure 2: Empirically estimated conditional entropy for the graph process {G(k)} with three nodes moving according to the random waypoint mobility model.]

Note that in RWP mobility, prediction of the future node locations (and therefore the graph) based on knowledge of past and present may be more accurate than prediction based on the present alone. Therefore it is quite possible that the graph switching is not first-order Markov. However, the results of the test above show that a Markov model quite accurately captures the graph switching process with RWP mobility.
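To convey the flavor of this experiment, here is a sketch of a single-node RWP trajectory; the region size and sampling period are illustrative assumptions, while the speed range and pause time match the values quoted above. Graph sequences obtained from several such trajectories and a communication range can be fed to the entropy estimates sketched earlier.

```python
import numpy as np

def rwp_trajectory(steps, dt=0.1, region=1000.0,
                   v_min=10.0, v_max=50.0, t_pause=0.1):
    """Single-node random waypoint trajectory, sampled every dt seconds.
    region (meters) and dt are illustrative; v_min, v_max, t_pause are
    the values used in the paper's experiment."""
    pos = np.random.uniform(0.0, region, size=2)
    dest, speed, pause_left = pos.copy(), 0.0, t_pause
    traj = []
    for _ in range(steps):
        if pause_left > 0.0:                  # pausing at a waypoint
            pause_left -= dt
        else:
            gap = np.linalg.norm(dest - pos)
            step = speed * dt
            if gap <= step:                   # waypoint reached: pause, redraw
                pos = dest.copy()
                pause_left = t_pause
                dest = np.random.uniform(0.0, region, size=2)
                speed = np.random.uniform(v_min, v_max)
            else:
                pos = pos + (dest - pos) * (step / gap)
        traj.append(pos.copy())
    return np.array(traj)
```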

4. Proof of Theorem 1

We first introduce some standard and non-standard graph-theoretic terminology. Consider a directed graph $\vec{\mathbf{G}} = (\mathbf{V}, \vec{\mathbf{E}})$ with $n_{total}$ nodes, m edges, and positive weights assigned to every edge. The weight matrix W is a diagonal matrix of the edge weights: $W := \mathrm{diag}(w_1, \dots, w_m)$, where the $w_i$'s are the weights in the set $\{w_{u,v} : (u, v) \in \vec{\mathbf{E}}\}$, indexed from 1 to m. The pair $(\vec{\mathbf{G}}, W)$ is called a directed weighted graph. See Figure 3 for an example of an undirected measurement graph and an associated directed weighted graph. The node-edge incidence matrix A of the directed graph $\vec{\mathbf{G}}$ is an $n_{total} \times m$ matrix whose entries are defined as follows: $A_{u,e} = 0$ if the edge with index e is not incident on node u, $A_{u,e} = +1$ if e is directed away from u, and $A_{u,e} = -1$ if e is directed toward u. The weighted in-directed Laplacian matrix $L^{in}$ of the weighted directed graph $(\vec{\mathbf{G}}, W)$ is an $n_{total} \times n_{total}$ matrix defined as $L^{in}_{u,v} = -w_{v,u}$ for $u \neq v$ and $L^{in}_{u,u} = -\sum_{v \neq u} L^{in}_{u,v}$. In other words, the off-diagonal entries on the u-th row are the negatives of the weights on the edges directed toward the node u, and the diagonal entries are positive in such a way that each row sum is 0. We also define the $n_{total} \times m$ in-incidence matrix $A^{in}$ as the matrix A with the +1 entries removed. In the in-incidence matrix, information about the tail nodes of edges is lost; only information on the heads of edges is retained. The following is straightforward to verify:

$$L^{in} = A^{in} W A^T. \tag{16}$$

The in-incidence matrix and the relation (16) appeared earlier in [26] under different names.
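The identity (16) can be checked numerically on a small example. The sketch below builds A, $A^{in}$ and W for an illustrative edge set and verifies that each row of $L^{in}$ sums to zero.

```python
import numpy as np

edges = [(0, 1), (1, 0), (1, 2)]      # directed edges as (tail, head)
weights = [2.0, 1.0, 3.0]             # positive, illustrative
n, m = 3, len(edges)

A = np.zeros((n, m))                  # node-edge incidence matrix
A_in = np.zeros((n, m))               # in-incidence: +1 entries removed
for e, (tail, head) in enumerate(edges):
    A[tail, e], A[head, e] = 1.0, -1.0
    A_in[head, e] = -1.0

W = np.diag(weights)
L_in = A_in @ W @ A.T                 # the relation (16)
assert np.allclose(L_in.sum(axis=1), 0.0)   # each row of L^in sums to 0
```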

[Figure 3: A measurement graph G and the corresponding weight graph $(\vec{\mathbf{G}}, W)$, with node 1 being the reference node. Note that edges pointing toward the reference nodes do not affect the algorithm since reference nodes do not update their estimates.]

Given a subset of r special nodes among the $n_{total}$ nodes, so that $n = n_{total} - r$, the basis incidence matrix $A_b$ is the $n \times m$ sub-matrix of A that is obtained by removing the rows corresponding to the r special nodes [27]. Similarly, we define the basis in-incidence matrix $A^{in}_b$ as the $n \times m$ sub-matrix of $A^{in}$ that is obtained by removing the rows corresponding to the r special nodes. Finally, we define the basis in-Laplacian matrix $L^{in}_b$ as the $n \times n$ sub-matrix of $L^{in}$ that is obtained by removing the rows and columns corresponding to the special nodes. The sub-matrices of $A, A^{in}$ that are removed to obtain the basis matrices are denoted by $A_r, A^{in}_r$, respectively, so that with appropriate indexing,

$$A = \begin{bmatrix} A_r \\ A_b \end{bmatrix}, \qquad A^{in} = \begin{bmatrix} A^{in}_r \\ A^{in}_b \end{bmatrix}. \tag{17}$$

The square non-negative matrices M, N are defined as follows: M is a diagonal matrix made up of the diagonal entries of $L^{in}$, and $N := M - L^{in}$. Similarly, define $M_b, N_b$ for $L^{in}_b$. In the language of iterative computation, $M_b$ and $N_b$ define a splitting of $L^{in}_b$ [28]:

$$L^{in}_b = M_b - N_b. \tag{18}$$

The update law (10) can be expressed compactly in terms of the undirected communication graph $\mathbf{G}(k) = (\mathbf{V}, \mathbf{E}(k))$ and a directed weight graph $(\vec{\mathbf{G}}(k) = (\mathbf{V}, \vec{\mathbf{E}}(k)), W(k))$, defined over the same node set but a different edge set at the same time k. In particular, if u and v have an undirected edge in $\mathbf{G}(k)$, then we have two directed edges (u, v) and (v, u) in $\vec{\mathbf{G}}(k)$. Thus, given a measurement graph $\mathbf{G}(k)$, the edges of the weight graph are uniquely defined. An edge (u, v) of $\vec{\mathbf{G}}(k)$ has a positive weight $w_{u,v}(k)$ that is chosen by node u; these weights are combined into a weight matrix $W(k) \in \mathbb{R}^{m(k) \times m(k)}$, where m(k) is the number of edges in $\vec{\mathbf{E}}(k)$.

The algorithm (10) also uses positive weights $w_{uu}(k)$, $u \in \mathbf{V}$, that are not part of the weight matrix W(k). The $n_{total} \times n_{total}$ diagonal matrix D(k) is defined with these weights: $D_{u,u}(k) = w_{uu}(k)$ for every $u \in \mathbf{V}$. In this paper the special nodes used to define the basis incidence/in-incidence/in-Laplacian matrices will be the reference nodes. The $n \times n$ diagonal matrix $D_b(k)$ is the submatrix of D(k) obtained by removing the rows and columns corresponding to the reference nodes.

Recall the convention established earlier: if $(u, v) \in \mathbf{E}(k)$, so that $\zeta_{u,v}(k)$ is available to u, then the measurement $\zeta_{v,u}(k)$ (which is simply $-\zeta_{u,v}(k)$) is available to v. If we count the measurements available to every node as distinct measurements, then the number of measurements at time k is m(k), the number of edges in the weight graph. These m(k) measurements are now indexed from 1 to m(k) and placed in a tall vector $\zeta(k) := [\zeta_1(k), \dots, \zeta_{m(k)}(k)]^T$. The measurements available to reference nodes, and the weights on edges pointing toward reference nodes, are not used, since the reference nodes do not update their estimates. With the definitions introduced so far, the update law (10) can be compactly expressed as

$$(M_b(k) + D_b(k))\hat{x}(k+1) = (N_b(k) + D_b(k))\hat{x}(k) - A^{in}_b(k) W(k) A_r^T(k) x_r + A^{in}_b(k) W(k) \zeta(k). \tag{19}$$

Recall that $\hat{x}(k) = [\hat{x}_1(k), \dots, \hat{x}_n(k)]^T$. To obtain the dynamics of the estimation error $e(k) = \hat{x}(k) - x$, we first note that if we define the vector of all node variables, known and unknown, as $x_{all} = [x_r^T, x^T]^T \in \mathbb{R}^{n_{total}}$, then we have the following relationship:

$$\zeta(k) = A^T(k) x_{all} + \epsilon(k) = A_r^T(k) x_r + A_b^T(k) x + \epsilon(k), \tag{20}$$

where $\epsilon(k) := [\epsilon_1(k), \dots, \epsilon_{m(k)}(k)]^T$ is the measurement noise vector. Expanding the right hand side of (19) using (20), we obtain

$$(M_b(k) + D_b(k))\hat{x}(k+1) = (N_b(k) + D_b(k))\hat{x}(k) + A^{in}_b(k) W(k) A_b(k)^T x + A^{in}_b(k) W(k) \epsilon(k) = (N_b(k) + D_b(k))\hat{x}(k) + (M_b(k) - N_b(k)) x + A^{in}_b(k) W(k) \epsilon(k), \tag{21}$$

where the second equality follows from $A^{in}_b W A_b^T = L^{in}_b = M_b - N_b$, which in turn follows from (16) and (18). We now subtract the equality

$$(M_b(k) + D_b(k)) x = (N_b(k) + D_b(k)) x + (M_b(k) - N_b(k)) x$$

from (21) and use $e(k) = \hat{x}(k) - x$ to obtain

$$(M_b(k) + D_b(k)) e(k+1) = (N_b(k) + D_b(k)) e(k) + A^{in}_b(k) W(k) \epsilon(k).$$

This is rewritten as

$$e(k+1) = J_b(k) e(k) + B_b(k) \epsilon(k), \tag{22}$$

where

$$J_b(k) := (M_b(k) + D_b(k))^{-1}(N_b(k) + D_b(k)), \qquad B_b(k) := (M_b(k) + D_b(k))^{-1} A^{in}_b(k) W(k).$$
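The matrices in (22) are easy to assemble from the graph matrices just defined. A self-contained sketch on a small illustrative example follows (node 2 taken as the single reference node, and all self-loop weights $w_{uu} = 1$; these are assumptions for illustration only).

```python
import numpy as np

edges = [(0, 1), (1, 0), (1, 2), (2, 1)]   # (tail, head); node 2 is reference
weights = [2.0, 1.0, 3.0, 1.5]
n = 3
A = np.zeros((n, len(edges)))
A_in = np.zeros((n, len(edges)))
for e, (tail, head) in enumerate(edges):
    A[tail, e], A[head, e] = 1.0, -1.0
    A_in[head, e] = -1.0
W = np.diag(weights)
L_in = A_in @ W @ A.T                      # (16)

M = np.diag(np.diag(L_in))                 # diagonal part of L^in
N_mat = M - L_in                           # splitting (18)
D = np.eye(n)                              # self-loop weights w_uu = 1

non_ref = [0, 1]                           # drop the reference node's rows/cols
M_b = M[np.ix_(non_ref, non_ref)]
N_b = N_mat[np.ix_(non_ref, non_ref)]
D_b = D[np.ix_(non_ref, non_ref)]
A_in_b = A_in[non_ref, :]

J_b = np.linalg.inv(M_b + D_b) @ (N_b + D_b)   # e(k+1) = J_b e(k) + B_b eps(k)
B_b = np.linalg.inv(M_b + D_b) @ A_in_b @ W    # cf. (22)
```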

In general, the weight matrix W(k) at time k is not completely specified by the graph $\mathbf{G}(k)$ at that time, if nodes are allowed to vary the weights arbitrarily over time. However, recall that an additional constraint was imposed on choosing weights: there exists a time $k_0$ after which the weight between two nodes does not change. The result of imposing this constraint is that after the transient period, the graph $\mathbf{G}(k)$ uniquely determines the weight matrix W(k). Since there are N distinct graphs in $\mathcal{G}$, we can define a set $\mathcal{W} := \{W_1, \dots, W_N\}$, with $W_i$ associated with $G_i$. As a result, for $k \geq k_0$, if $\mathbf{G}(k) = G_i$ then $W(k) = W_i$.

The dimensions of the quantities $B_b(k), \epsilon(k)$ depend on the state of the Markov chain at k. We now introduce some extended quantities to rectify this issue. Let $m_x$ be the number of edges in the union graph $\hat{\mathbf{G}}$ (recall $\hat{\mathbf{G}} = \cup_i G_i$). For every graph $G_i \in \mathcal{G}$, define extended incidence and in-incidence matrices $A^x_i$ and $A^{in,x}_i$ by introducing additional columns with all entries equal to 0 for the edges of $\hat{\mathbf{G}}$ that are not in $G_i$, so that both are $n_{total} \times m_x$ matrices. Let $A^{in,x}_{b_i}$ be the corresponding extended basis in-incidence matrix. Let $M_i, D_i, W_i$ (and $M_{b_i}, D_{b_i}, W_{b_i}$) be the matrices M, D, W (and $M_b, D_b, W_b$) defined for the graph $G_i$. For every $W_i$, define $W^x_i \in \mathbb{R}^{m_x \times m_x}$ by introducing additional 0 entries on the diagonal for all edges that are not in $G_i$. Now define $B^x_{b_i} := (M_{b_i} + D_{b_i})^{-1} A^{in,x}_{b_i} W^x_i \in \mathbb{R}^{n \times m_x}$. At every time k, we also define an extended noise vector $\epsilon^x(k) \in \mathbb{R}^{m_x}$. The entries of $\epsilon^x(k)$ corresponding to the edges that do not exist in $\mathbf{G}(k)$ are set to 0, as are their means and variances. As a result, if $\mathbf{G}(k) = G_i$, then

$$A^{in,x}_b(k) W^x(k) \epsilon^x(k) = A^{in,x}_{b_i} W^x_i \epsilon^x(k) = A^{in}_{b_i} W_i \epsilon(k) = A^{in}_b(k) W(k) \epsilon(k). \tag{23}$$

With these choices, the state of the following system is identical to that of (22) for the same initial conditions:

$$e(k+1) = J_{b\,\theta(k)}\, e(k) + B^x_{b\,\theta(k)}\, \epsilon^x(k), \qquad k \geq k_0, \tag{24}$$

where $\theta : \mathbb{Z}_+ \to \{1, \dots, N\}$ is the switching process, governed by the underlying Markov chain, that indexes the pairs $(G_i, W_i)$.

The reason for the qualifier $k \geq k_0$ is that the weights are not limited to the set $\mathcal{W}$ before $k_0$, so technically the matrices $J_b, B^x_b$ are uniquely determined by the Markov chain only for $k \geq k_0$.

The error dynamics (24) is a Markov jump linear system (MJLS) [17]. The following terminology will be used:

$$\gamma := E[\epsilon^x(k)], \qquad \Gamma := E[\epsilon^x(k)\,\epsilon^x(k)^T] \succeq 0, \tag{25}$$

$$\mu(k) := E[e(k)], \qquad Q(k) := E[e(k)\,e^T(k)]. \tag{26}$$

To proceed with the analysis of the mean square convergence of (24), we need additional terminology. For a set of square matrices $X_i \in \mathbb{R}^{l \times l}$, $X_{ij} \in \mathbb{R}^{l \times l}$, $i, j = 1, \dots, N$, we define

$$\mathrm{diag}[X_i] := \begin{bmatrix} X_1 & & \\ & \ddots & \\ & & X_N \end{bmatrix}, \qquad [X_{ij}] := \begin{bmatrix} X_{11} & X_{12} & \dots & X_{1N} \\ X_{21} & X_{22} & \dots & X_{2N} \\ \vdots & \vdots & & \vdots \\ X_{N1} & X_{N2} & \dots & X_{NN} \end{bmatrix}.$$

Both $\mathrm{diag}[X_i]$ and $[X_{ij}]$ are $lN \times lN$ matrices. We now define the matrices

$$J_i := (M_i + D_i)^{-1}(N_i + D_i) \in \mathbb{R}^{n_{total} \times n_{total}}, \qquad F_i := J_i \otimes J_i \in \mathbb{R}^{n^2_{total} \times n^2_{total}},$$
$$J_{b_i} := (M_{b_i} + D_{b_i})^{-1}(N_{b_i} + D_{b_i}) \in \mathbb{R}^{n \times n}, \qquad F_{b_i} := J_{b_i} \otimes J_{b_i} \in \mathbb{R}^{n^2 \times n^2}, \tag{27}$$

where $\otimes$ denotes the Kronecker product, $M_i, N_i$ are the matrices forming the splitting (see (18)) of the in-Laplacian $L^{in}_i$ of the graph $G_i$, the diagonal matrix $D_i$'s diagonal entries are the weights on the self-loops of $G_i$, and $M_{b_i}, N_{b_i}, D_{b_i}$ are the corresponding basis matrices with the rows and columns associated with the reference nodes taken out. Furthermore, define the matrices

$$\mathcal{C}_b := (P^T \otimes I)\,\mathrm{diag}[J_{b_i}] = [p_{ji} J_{b_j}] \in \mathbb{R}^{Nn \times Nn}, \qquad \mathcal{D}_b := (P^T \otimes I)\,\mathrm{diag}[F_{b_i}] = [p_{ji} F_{b_j}] \in \mathbb{R}^{Nn^2 \times Nn^2}, \tag{28}$$

$$\mathcal{D} := (P^T \otimes I)\,\mathrm{diag}[F_i] = [p_{ji} F_j] \in \mathbb{R}^{Nn^2_{total} \times Nn^2_{total}}, \tag{29}$$

where I is an identity matrix of appropriate dimension. (The calligraphic $\mathcal{C}_b, \mathcal{D}_b, \mathcal{D}$ should not be confused with the diagonal weight matrices $D(k), D_b(k)$ defined earlier.) Recall that P is the transition probability matrix of the Markov chain. The key to establishing the main result of the paper, Theorem 1, is the following technical result.

Lemma 1. When the temporal evolution of the graph $\mathbf{G}(k)$ is governed by a homogeneous ergodic Markov chain whose transition probability matrix P has strictly positive diagonal entries, then $\rho(\mathcal{D}_b) < 1$ if and only if the union graph $\hat{\mathbf{G}}$ defined in (12) is connected, where $\mathcal{D}_b$ is defined in (28) and $\rho(\cdot)$ denotes the spectral radius.
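The condition $\rho(\mathcal{D}_b) < 1$ of Lemma 1 can be checked numerically. The sketch below uses two illustrative stand-in iteration matrices (not derived from any graph in the paper), chosen so that in each individual graph one non-reference node is cut off from the reference while the union is connected; the computed spectral radius is then below one.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # ergodic, p_ii > 0

J_b1 = np.array([[0.5, 0.0],             # graph 1: second node isolated
                 [0.0, 1.0]])
J_b2 = np.array([[1.0, 0.0],             # graph 2: first node isolated
                 [0.4, 0.2]])

F_b1, F_b2 = np.kron(J_b1, J_b1), np.kron(J_b2, J_b2)   # F_bi = J_bi kron J_bi
n2 = F_b1.shape[0]

block_diag = np.zeros((2 * n2, 2 * n2))
block_diag[:n2, :n2], block_diag[n2:, n2:] = F_b1, F_b2
D_b = np.kron(P.T, np.eye(n2)) @ block_diag   # equals [p_ji F_bj], see (28)

rho = max(abs(np.linalg.eigvals(D_b)))
print(rho, rho < 1.0)                          # < 1: mean square convergence
```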

The proof of Lemma 1 requires additional technical results that are presented next. Recall that a non-negative matrix is called stochastic if each row sum is 1. If X is a stochastic matrix, then $\rho(X) = 1$ [29].

Proposition 1.
1. If X is a stochastic matrix, then $X \otimes X$ is also a stochastic matrix.
2. The matrices $J_i$ and $F_i$, $i = 1, \dots, N$, defined in (27) are stochastic matrices.
3. Let

$$K := \begin{bmatrix} F_1 & F_2 & \dots & F_N \\ F_1 & F_2 & \dots & F_N \\ \vdots & \vdots & & \vdots \\ F_1 & F_2 & \dots & F_N \end{bmatrix} \in \mathbb{R}^{Nn^2_{total} \times Nn^2_{total}}, \qquad K_b := \begin{bmatrix} F_{b_1} & F_{b_2} & \dots & F_{b_N} \\ F_{b_1} & F_{b_2} & \dots & F_{b_N} \\ \vdots & \vdots & & \vdots \\ F_{b_1} & F_{b_2} & \dots & F_{b_N} \end{bmatrix} \in \mathbb{R}^{Nn^2 \times Nn^2}.$$

There exists a permutation matrix X so that $K_b$ is a principal sub-matrix of $X^T K X$.
4. For the matrix $\mathcal{D} \in \mathbb{R}^{Nn^2_{total} \times Nn^2_{total}}$ defined in (29), we have $\rho(\mathcal{D}) \leq 1$.

Proof of Proposition 1. The first two statements are straightforward to establish. The third statement follows from the fact that $J_{b_i}$ is a principal submatrix of $J_i$. We therefore prove only the fourth statement. From (29), $\rho(\mathcal{D}) = \rho([p_{ji} F_j])$. Since $p_{ji} F_j$ is a non-negative square matrix, it follows from [30, Theorem 3.2] that $\rho([p_{ji} F_j]) \leq \rho([\|p_{ji} F_j\|])$. Moreover, $\|p_{ji} F_j\| = p_{ji} \|F_j\|$, and since $F_j$ is a stochastic matrix, $\|F_j\| = 1$. We therefore have

$$\rho(\mathcal{D}) \leq \rho([\|p_{ji} F_j\|]) = \rho([p_{ji}]) = \rho(P^T) = \rho(P) = 1.$$

The proof of the next result is provided in the Appendix, since it requires the introduction of considerable new terminology.

Lemma 2. Let P be the transition probability matrix of an N-state ergodic Markov chain whose diagonal entries are positive. The matrix $\mathcal{D}$ defined in (29) is irreducible if and only if the union graph $\hat{\mathbf{G}}$ defined in (12) is connected.

Now we can prove Lemma 1.

Proof of Lemma 1. Since the union graph $\hat{\mathbf{G}}$ is connected, it follows from Lemma 2 that $\mathcal{D}$ is irreducible. From the third statement of Proposition 1, there exists a permutation matrix X such that $\mathcal{D}_b$ is a principal submatrix of $X^T \mathcal{D} X$. The spectral radius of an irreducible matrix is strictly greater than the spectral radius of any of its proper principal submatrices, which follows from Theorem 5.1 in [29]. Therefore we have $\rho(\mathcal{D}_b) < \rho(X^T \mathcal{D} X)$. From the fourth statement of Proposition 1 and the fact that permutation does not change eigenvalues, it follows that $\rho(X^T \mathcal{D} X) = \rho(\mathcal{D}) \leq 1$. Combining these two inequalities, we get that if $\hat{\mathbf{G}}$ is connected then $\rho(\mathcal{D}_b) < 1$. To prove necessity, we construct a counterexample, in particular, a trivial Markov chain with a single state: $\mathcal{G} = \{G_1\}$ (so that P = 1), where $G_1$ is an n-node graph without a single edge. Then $\mathcal{D}_b = F_{b_1} = J_{b_1} \otimes J_{b_1} = I$, which has a spectral radius of unity. This completes the proof of the lemma.

The following definitions and terminology from [17] will be needed in the sequel. Let $\mathbb{R}^{m \times n}$ be the space of $m \times n$ real matrices. Let $\mathbb{H}^{m \times n}$ be the set of all N-sequences of real $m \times n$ matrices, so that $V \in \mathbb{H}^{m \times n}$ means $V = (V_1, V_2, \dots, V_N)$, where $V_i \in \mathbb{R}^{m \times n}$ for $i = 1, \dots, N$. The operators $\varphi$ and $\hat{\varphi}$ are defined to create a tall vector by stacking together columns of these matrices, as follows: let $(V_i)_j \in \mathbb{R}^m$ be the j-th column of $V_i \in \mathbb{R}^{m \times n}$; then

$$\varphi(V_i) := \begin{bmatrix} (V_i)_1 \\ \vdots \\ (V_i)_n \end{bmatrix} \in \mathbb{R}^{mn}, \qquad \hat{\varphi}(V) := \begin{bmatrix} \varphi(V_1) \\ \vdots \\ \varphi(V_N) \end{bmatrix} \in \mathbb{R}^{Nmn}. \tag{30}$$

Similarly, the inverse function $\hat{\varphi}^{-1} : \mathbb{R}^{Nmn} \to \mathbb{H}^{m \times n}$ is defined so that it produces an element of $\mathbb{H}^{m \times n}$ given a vector in $\mathbb{R}^{Nmn}$.

Lemma 3. Consider the jump linear system (24) with an underlying homogeneous and ergodic Markov chain. The state vector e(k) of the system (24) converges in the mean square sense if and only if $\rho(\mathcal{D}_b) < 1$, where $\mathcal{D}_b$ is defined in (28). When mean square convergence occurs, then $\mu(k) \to \mu$ and $Q(k) \to Q$, where

$$\mu := \sum_{i=1}^{N} q_i, \qquad Q := \sum_{i=1}^{N} Q_i, \tag{31}$$

where

$$[q_1^T, \dots, q_N^T]^T = q := (I - \mathcal{C}_b)^{-1} \psi \in \mathbb{R}^{Nn},$$
$$(Q_1, \dots, Q_N) = \mathbf{Q} := \hat{\varphi}^{-1}\big( (I - \mathcal{D}_b)^{-1} \hat{\varphi}(R(q)) \big) \in \mathbb{H}^{n \times n},$$

and $\psi$, R(q) are given by

$$\psi := [\psi_1^T, \dots, \psi_N^T]^T \in \mathbb{R}^{Nn}, \qquad \psi_j := \sum_{i=1}^{N} p_{ij} B^x_{b_i} \gamma\, \pi_i \in \mathbb{R}^n,$$
$$R(q) := (R_1(q), \dots, R_N(q)) \in \mathbb{H}^{n \times n}, \qquad R_j(q) := \sum_{i=1}^{N} p_{ij} \big( B^x_{b_i} \Gamma B^{xT}_{b_i} \pi_i + J_{b_i} q_i \gamma^T B^{xT}_{b_i} + B^x_{b_i} \gamma q_i^T J_{b_i}^T \big) \in \mathbb{R}^{n \times n},$$

where $\pi = [\pi_1, \dots, \pi_N]$ is the steady state distribution of the Markov chain. Moreover, Q is positive semi-definite.

Proof. It follows from Theorem 3.33 and Theorem 3.9 of [17], as well as Remark 3.5 [17, pg. 35], that mean square convergence of (24) is equivalent to $\rho(\mathcal{D}_b) < 1$.[1] The expressions for the mean and correlation, as well as the fact that $Q \succeq 0$, also follow from [17, Propositions 3.37, 3.38]. The existence of the steady state distribution $\pi$ follows from the ergodicity of the Markov chain, so the expressions provided are well defined.

[1] For the interested reader, the matrix $\mathcal{D}_b$ is referred to as $\mathcal{A}_1$ in [17].

Now we are ready to prove Theorem 1.

Proof of Theorem 1. (Sufficiency): It follows from Lemma 1 that $\rho(\mathcal{D}_b) < 1$. It then follows from Lemma 3 that the state converges in the mean square sense. The statement about the asymptotic mean being unbiased if the measurement noise is zero mean follows immediately from the expression for the limiting mean in Lemma 3 by plugging in $\gamma = 0$. (Necessity): If the union of the graphs is not connected, then from the proof of Lemma 1, $\rho(\mathcal{D}_b) = 1$. Due to Lemma 3, this shows that mean square convergence will not occur.

Remark 1. The equation (24) can be interpreted as a leader-following consensus problem, where the consensus state for node u is the estimation error $e_u(k)$ for $u \in \mathbf{V} \setminus \mathbf{V}_r$, while the leader states are $e_r(k) \equiv 0$ for $r \in \mathbf{V}_r$, where $\mathbf{V}_r$ is the set of reference nodes.

Consensus problems are more interesting when there is no leader. Results from leaderless consensus can be used to prove convergence of consensus with leaders, as remarked in [21]. The papers [21, 22] have analyzed leaderless consensus with Markovian switching graphs. Even though the scenarios considered in [21, 22] are close to ours, there are important differences that prevent us from using their results to prove mean square convergence. In [21], communication among nodes is modeled as a directed graph and the noise sequence is modeled as a martingale difference. Mean square convergence is achieved under the condition that a certain union of graphs is strongly connected and the weighted digraphs are balanced, which means the weighted adjacency matrix of each graph is symmetric. The results in [22] are similar. The most significant difference between the formulation in [21, 22] and in this paper is that we allow the weights on the edges (u, v) and (v, u) to be different. As a result, in our formulation the weighted adjacency matrix - which is simply the matrix N from the splitting (18) - is not symmetric. The results in [21, 22] therefore cannot be directly applied to analyze mean square convergence of (24).

5. Simulation studies

As discussed in Section 2, skew and offset estimation are special cases of the problem of estimation of scalar node variables from relative measurements. Therefore simulations are conducted only for scalar node variable estimation. In all simulations, node variables are chosen arbitrarily, a single reference node is present, and the value of its node variable is 0. The noise on each measurement is a normally distributed random variable. All the edge weights are assigned a value of unity at every time. We conduct simulations for two scenarios: (i) a network with 4 nodes and (ii) a network with 100 nodes. In the first scenario we impose the Markovian switching topology. The limiting means and variances of the estimates are computed from the predictions of Lemma 3, as well as estimated from Monte-Carlo simulations. The predictions of the theory are verified by comparing the two. In the second scenario, we simulate node motion according to the Random Waypoint (RWP) mobility model. The edges for a given set of node locations are determined based on a communication range. As discussed in Section 3.4, in this case Markovian switching is a very good approximation of the graph evolution process, even though it may not be exactly Markovian. Even if the Markovian property holds exactly, the transition probability matrix is not known, and the large state space makes it infeasible to compute the theoretical predictions of limiting mean and variances given in Lemma 3. One purpose of these simulations is therefore to test the performance of the algorithm when theoretical predictions are not available.
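As a flavor of scenario (i), the following self-contained Monte-Carlo sketch runs the update (10) under Markovian switching with unit weights and reports the empirical mean and variance of the final errors. The network size, graphs, transition matrix, and noise level are illustrative assumptions, not the values used in the paper.

```python
import random
import numpy as np

n_total, ref = 4, 0
x = np.array([0.0, 1.0, -2.0, 0.5])        # true variables, x_ref = 0
graphs = [[(0, 1), (1, 2)], [(2, 3)], [(0, 3), (1, 3)]]   # union is connected
P = [[0.6, 0.2, 0.2], [0.3, 0.4, 0.3], [0.2, 0.2, 0.6]]   # ergodic, p_ii > 0

def run(steps=2000, sigma=0.05):
    xh = np.zeros(n_total)                 # arbitrary initial estimates
    s = 0
    for _ in range(steps):
        s = random.choices(range(len(P)), weights=P[s])[0]
        zeta = {e: x[e[0]] - x[e[1]] + sigma * np.random.randn()
                for e in graphs[s]}        # measurements of the form (1)
        new = xh.copy()
        for u in range(n_total):
            if u == ref:
                continue                   # the reference node never updates
            num, den = xh[u], 1.0          # all weights set to 1
            for (a, b), z in zeta.items():
                if a == u:
                    num, den = num + xh[b] + z, den + 1.0
                elif b == u:
                    num, den = num + xh[a] - z, den + 1.0   # zeta_vu = -zeta_uv
            new[u] = num / den
        xh = new
    return xh

errs = np.array([run() - x for _ in range(50)])
print(errs.mean(axis=0))                   # near zero: asymptotically unbiased
print(errs.var(axis=0))
```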

Distributed Clock Skew and Offset Estimation from Relative Measurements in Mobile Networks with Markovian SwitchingTopology

Distributed Clock Skew and Offset Estimation from Relative Measurements in Mobile Networks with Markovian SwitchingTopology Distributed Clock Skew and Offset Estimation from Relative Measurements in Mobile Networks with Markovian SwitchingTopology Chenda Liao Prabir Barooah Department of Mechanical and Aerospace Engineering

More information

DiSync: Accurate Distributed Clock Synchronization in Mobile Ad-hoc Networks from Noisy Difference Measurements. Chenda Liao and Prabir Barooah

DiSync: Accurate Distributed Clock Synchronization in Mobile Ad-hoc Networks from Noisy Difference Measurements. Chenda Liao and Prabir Barooah DiSync: Accurate Distributed Clock Synchronization in Mobile Ad-hoc Networks from Noisy Difference Measurements Chenda Liao and Prabir Barooah Abstract To perform clock synchronization in mobile adhoc

More information

Agreement algorithms for synchronization of clocks in nodes of stochastic networks

Agreement algorithms for synchronization of clocks in nodes of stochastic networks UDC 519.248: 62 192 Agreement algorithms for synchronization of clocks in nodes of stochastic networks L. Manita, A. Manita National Research University Higher School of Economics, Moscow Institute of

More information

Distributed Optimization over Networks Gossip-Based Algorithms

Distributed Optimization over Networks Gossip-Based Algorithms Distributed Optimization over Networks Gossip-Based Algorithms Angelia Nedić angelia@illinois.edu ISE Department and Coordinated Science Laboratory University of Illinois at Urbana-Champaign Outline Random

More information

Markov Chains, Random Walks on Graphs, and the Laplacian

Markov Chains, Random Walks on Graphs, and the Laplacian Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer

More information

Consensus Problems on Small World Graphs: A Structural Study

Consensus Problems on Small World Graphs: A Structural Study Consensus Problems on Small World Graphs: A Structural Study Pedram Hovareshti and John S. Baras 1 Department of Electrical and Computer Engineering and the Institute for Systems Research, University of

More information

Inference in Bayesian Networks

Inference in Bayesian Networks Andrea Passerini passerini@disi.unitn.it Machine Learning Inference in graphical models Description Assume we have evidence e on the state of a subset of variables E in the model (i.e. Bayesian Network)

More information

Average-Consensus of Multi-Agent Systems with Direct Topology Based on Event-Triggered Control

Average-Consensus of Multi-Agent Systems with Direct Topology Based on Event-Triggered Control Outline Background Preliminaries Consensus Numerical simulations Conclusions Average-Consensus of Multi-Agent Systems with Direct Topology Based on Event-Triggered Control Email: lzhx@nankai.edu.cn, chenzq@nankai.edu.cn

More information

ALMOST SURE CONVERGENCE OF RANDOM GOSSIP ALGORITHMS

ALMOST SURE CONVERGENCE OF RANDOM GOSSIP ALGORITHMS ALMOST SURE CONVERGENCE OF RANDOM GOSSIP ALGORITHMS Giorgio Picci with T. Taylor, ASU Tempe AZ. Wofgang Runggaldier s Birthday, Brixen July 2007 1 CONSENSUS FOR RANDOM GOSSIP ALGORITHMS Consider a finite

More information

Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events

Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events M. Cao Yale Univesity A. S. Morse Yale University B. D. O. Anderson Australia

More information

Asymptotics, asynchrony, and asymmetry in distributed consensus

Asymptotics, asynchrony, and asymmetry in distributed consensus DANCES Seminar 1 / Asymptotics, asynchrony, and asymmetry in distributed consensus Anand D. Information Theory and Applications Center University of California, San Diego 9 March 011 Joint work with Alex

More information

Modeling and Stability Analysis of a Communication Network System

Modeling and Stability Analysis of a Communication Network System Modeling and Stability Analysis of a Communication Network System Zvi Retchkiman Königsberg Instituto Politecnico Nacional e-mail: mzvi@cic.ipn.mx Abstract In this work, the modeling and stability problem

More information

Markov Chains. Andreas Klappenecker by Andreas Klappenecker. All rights reserved. Texas A&M University

Markov Chains. Andreas Klappenecker by Andreas Klappenecker. All rights reserved. Texas A&M University Markov Chains Andreas Klappenecker Texas A&M University 208 by Andreas Klappenecker. All rights reserved. / 58 Stochastic Processes A stochastic process X tx ptq: t P T u is a collection of random variables.

More information

An Algorithm for Accurate Distributed Time Synchronization in Mobile Wireless Sensor Networks from Noisy Difference Measurements

An Algorithm for Accurate Distributed Time Synchronization in Mobile Wireless Sensor Networks from Noisy Difference Measurements Asian Journal of Control, Vol. 00, No. 0, pp. 1 13, Month 2008 Pulished online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/asjc.0000 An Algorithm for Accurate Distriuted Time Synchronization

More information

6 Evolution of Networks

6 Evolution of Networks last revised: March 2008 WARNING for Soc 376 students: This draft adopts the demography convention for transition matrices (i.e., transitions from column to row). 6 Evolution of Networks 6. Strategic network

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

On Backward Product of Stochastic Matrices

On Backward Product of Stochastic Matrices On Backward Product of Stochastic Matrices Behrouz Touri and Angelia Nedić 1 Abstract We study the ergodicity of backward product of stochastic and doubly stochastic matrices by introducing the concept

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

University of Chicago Autumn 2003 CS Markov Chain Monte Carlo Methods

University of Chicago Autumn 2003 CS Markov Chain Monte Carlo Methods University of Chicago Autumn 2003 CS37101-1 Markov Chain Monte Carlo Methods Lecture 4: October 21, 2003 Bounding the mixing time via coupling Eric Vigoda 4.1 Introduction In this lecture we ll use the

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Time Synchronization in WSNs: A Maximum Value Based Consensus Approach

Time Synchronization in WSNs: A Maximum Value Based Consensus Approach 211 5th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC) Orlando, FL, USA, December 12-15, 211 Time Synchronization in WSNs: A Maximum Value Based Consensus Approach Jianping

More information

Towards control over fading channels

Towards control over fading channels Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti

More information

Algebraic gossip on Arbitrary Networks

Algebraic gossip on Arbitrary Networks Algebraic gossip on Arbitrary etworks Dinkar Vasudevan and Shrinivas Kudekar School of Computer and Communication Sciences, EPFL, Lausanne. Email: {dinkar.vasudevan,shrinivas.kudekar}@epfl.ch arxiv:090.444v

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Time Synchronization

Time Synchronization Massachusetts Institute of Technology Lecture 7 6.895: Advanced Distributed Algorithms March 6, 2006 Professor Nancy Lynch Time Synchronization Readings: Fan, Lynch. Gradient clock synchronization Attiya,

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

Lecture 5: Random Walks and Markov Chain

Lecture 5: Random Walks and Markov Chain Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

25.1 Markov Chain Monte Carlo (MCMC)

25.1 Markov Chain Monte Carlo (MCMC) CS880: Approximations Algorithms Scribe: Dave Andrzejewski Lecturer: Shuchi Chawla Topic: Approx counting/sampling, MCMC methods Date: 4/4/07 The previous lecture showed that, for self-reducible problems,

More information

Distributed Optimization over Random Networks

Distributed Optimization over Random Networks Distributed Optimization over Random Networks Ilan Lobel and Asu Ozdaglar Allerton Conference September 2008 Operations Research Center and Electrical Engineering & Computer Science Massachusetts Institute

More information

Data Gathering and Personalized Broadcasting in Radio Grids with Interferences

Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Jean-Claude Bermond a,, Bi Li a,b, Nicolas Nisse a, Hervé Rivano c, Min-Li Yu d a Coati Project, INRIA I3S(CNRS/UNSA), Sophia

More information

On Distributed Coordination of Mobile Agents with Changing Nearest Neighbors

On Distributed Coordination of Mobile Agents with Changing Nearest Neighbors On Distributed Coordination of Mobile Agents with Changing Nearest Neighbors Ali Jadbabaie Department of Electrical and Systems Engineering University of Pennsylvania Philadelphia, PA 19104 jadbabai@seas.upenn.edu

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the

More information

Almost Sure Convergence to Consensus in Markovian Random Graphs

Almost Sure Convergence to Consensus in Markovian Random Graphs Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 2008 Almost Sure Convergence to Consensus in Markovian Random Graphs Ion Matei, Nuno Martins and John S. Baras

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract

More information

Asymmetric Information Diffusion via Gossiping on Static And Dynamic Networks

Asymmetric Information Diffusion via Gossiping on Static And Dynamic Networks Asymmetric Information Diffusion via Gossiping on Static And Dynamic Networks Ercan Yildiz 1, Anna Scaglione 2, Asuman Ozdaglar 3 Cornell University, UC Davis, MIT Abstract In this paper we consider the

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

Discrete-time Consensus Filters on Directed Switching Graphs

Discrete-time Consensus Filters on Directed Switching Graphs 214 11th IEEE International Conference on Control & Automation (ICCA) June 18-2, 214. Taichung, Taiwan Discrete-time Consensus Filters on Directed Switching Graphs Shuai Li and Yi Guo Abstract We consider

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov

More information

UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics. EECS 281A / STAT 241A Statistical Learning Theory

UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics. EECS 281A / STAT 241A Statistical Learning Theory UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics EECS 281A / STAT 241A Statistical Learning Theory Solutions to Problem Set 2 Fall 2011 Issued: Wednesday,

More information

Randomized Gossip Algorithms

Randomized Gossip Algorithms 2508 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 52, NO 6, JUNE 2006 Randomized Gossip Algorithms Stephen Boyd, Fellow, IEEE, Arpita Ghosh, Student Member, IEEE, Balaji Prabhakar, Member, IEEE, and Devavrat

More information

Lecture: Local Spectral Methods (2 of 4) 19 Computing spectral ranking with the push procedure

Lecture: Local Spectral Methods (2 of 4) 19 Computing spectral ranking with the push procedure Stat260/CS294: Spectral Graph Methods Lecture 19-04/02/2015 Lecture: Local Spectral Methods (2 of 4) Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough. They provide

More information

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs

More information

Gossip algorithms for solving Laplacian systems

Gossip algorithms for solving Laplacian systems Gossip algorithms for solving Laplacian systems Anastasios Zouzias University of Toronto joint work with Nikolaos Freris (EPFL) Based on : 1.Fast Distributed Smoothing for Clock Synchronization (CDC 1).Randomized

More information

A Distributed Newton Method for Network Utility Maximization, I: Algorithm

A Distributed Newton Method for Network Utility Maximization, I: Algorithm A Distributed Newton Method for Networ Utility Maximization, I: Algorithm Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract Most existing wors use dual decomposition and first-order

More information

arxiv: v1 [math.oc] 24 Oct 2014

arxiv: v1 [math.oc] 24 Oct 2014 In-Network Leader Selection for Acyclic Graphs Stacy Patterson arxiv:1410.6533v1 [math.oc] 24 Oct 2014 Abstract We study the problem of leader selection in leaderfollower multi-agent systems that are subject

More information

Stochastic optimization Markov Chain Monte Carlo

Stochastic optimization Markov Chain Monte Carlo Stochastic optimization Markov Chain Monte Carlo Ethan Fetaya Weizmann Institute of Science 1 Motivation Markov chains Stationary distribution Mixing time 2 Algorithms Metropolis-Hastings Simulated Annealing

More information

Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum

Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum Themistoklis Charalambous, Member, IEEE, Michael G. Rabbat, Member, IEEE, Mikael Johansson, Member,

More information

Continuous-Model Communication Complexity with Application in Distributed Resource Allocation in Wireless Ad hoc Networks

Continuous-Model Communication Complexity with Application in Distributed Resource Allocation in Wireless Ad hoc Networks Continuous-Model Communication Complexity with Application in Distributed Resource Allocation in Wireless Ad hoc Networks Husheng Li 1 and Huaiyu Dai 2 1 Department of Electrical Engineering and Computer

More information

A Robust Event-Triggered Consensus Strategy for Linear Multi-Agent Systems with Uncertain Network Topology

A Robust Event-Triggered Consensus Strategy for Linear Multi-Agent Systems with Uncertain Network Topology A Robust Event-Triggered Consensus Strategy for Linear Multi-Agent Systems with Uncertain Network Topology Amir Amini, Amir Asif, Arash Mohammadi Electrical and Computer Engineering,, Montreal, Canada.

More information

Fast Linear Iterations for Distributed Averaging 1

Fast Linear Iterations for Distributed Averaging 1 Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider

More information

LECTURE 3. Last time:

LECTURE 3. Last time: LECTURE 3 Last time: Mutual Information. Convexity and concavity Jensen s inequality Information Inequality Data processing theorem Fano s Inequality Lecture outline Stochastic processes, Entropy rate

More information

Aditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016

Aditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016 Lecture 1: Introduction and Review We begin with a short introduction to the course, and logistics. We then survey some basics about approximation algorithms and probability. We also introduce some of

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

Markov Chains and MCMC

Markov Chains and MCMC Markov Chains and MCMC Markov chains Let S = {1, 2,..., N} be a finite set consisting of N states. A Markov chain Y 0, Y 1, Y 2,... is a sequence of random variables, with Y t S for all points in time

More information

Communication constraints and latency in Networked Control Systems

Communication constraints and latency in Networked Control Systems Communication constraints and latency in Networked Control Systems João P. Hespanha Center for Control Engineering and Computation University of California Santa Barbara In collaboration with Antonio Ortega

More information

Reaching a Consensus in a Dynamically Changing Environment A Graphical Approach

Reaching a Consensus in a Dynamically Changing Environment A Graphical Approach Reaching a Consensus in a Dynamically Changing Environment A Graphical Approach M. Cao Yale Univesity A. S. Morse Yale University B. D. O. Anderson Australia National University and National ICT Australia

More information

Zeno-free, distributed event-triggered communication and control for multi-agent average consensus

Zeno-free, distributed event-triggered communication and control for multi-agent average consensus Zeno-free, distributed event-triggered communication and control for multi-agent average consensus Cameron Nowzari Jorge Cortés Abstract This paper studies a distributed event-triggered communication and

More information

Approximate Counting and Markov Chain Monte Carlo

Approximate Counting and Markov Chain Monte Carlo Approximate Counting and Markov Chain Monte Carlo A Randomized Approach Arindam Pal Department of Computer Science and Engineering Indian Institute of Technology Delhi March 18, 2011 April 8, 2011 Arindam

More information

Computing and Communicating Functions over Sensor Networks

Computing and Communicating Functions over Sensor Networks Computing and Communicating Functions over Sensor Networks Solmaz Torabi Dept. of Electrical and Computer Engineering Drexel University solmaz.t@drexel.edu Advisor: Dr. John M. Walsh 1/35 1 Refrences [1]

More information

Probability & Computing

Probability & Computing Probability & Computing Stochastic Process time t {X t t 2 T } state space Ω X t 2 state x 2 discrete time: T is countable T = {0,, 2,...} discrete space: Ω is finite or countably infinite X 0,X,X 2,...

More information

ON SPATIAL GOSSIP ALGORITHMS FOR AVERAGE CONSENSUS. Michael G. Rabbat

ON SPATIAL GOSSIP ALGORITHMS FOR AVERAGE CONSENSUS. Michael G. Rabbat ON SPATIAL GOSSIP ALGORITHMS FOR AVERAGE CONSENSUS Michael G. Rabbat Dept. of Electrical and Computer Engineering McGill University, Montréal, Québec Email: michael.rabbat@mcgill.ca ABSTRACT This paper

More information

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Anthony Trubiano April 11th, 2018 1 Introduction Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability

More information

On Finding Optimal Policies for Markovian Decision Processes Using Simulation

On Finding Optimal Policies for Markovian Decision Processes Using Simulation On Finding Optimal Policies for Markovian Decision Processes Using Simulation Apostolos N. Burnetas Case Western Reserve University Michael N. Katehakis Rutgers University February 1995 Abstract A simulation

More information

A Control-Theoretic Approach to Distributed Discrete-Valued Decision-Making in Networks of Sensing Agents

A Control-Theoretic Approach to Distributed Discrete-Valued Decision-Making in Networks of Sensing Agents A Control-Theoretic Approach to Distributed Discrete-Valued Decision-Making in Networks of Sensing Agents Sandip Roy Kristin Herlugson Ali Saberi April 11, 2005 Abstract We address the problem of global

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Consensus of Information Under Dynamically Changing Interaction Topologies

Consensus of Information Under Dynamically Changing Interaction Topologies Consensus of Information Under Dynamically Changing Interaction Topologies Wei Ren and Randal W. Beard Abstract This paper considers the problem of information consensus among multiple agents in the presence

More information

UNIVERSITY OF YORK. MSc Examinations 2004 MATHEMATICS Networks. Time Allowed: 3 hours.

UNIVERSITY OF YORK. MSc Examinations 2004 MATHEMATICS Networks. Time Allowed: 3 hours. UNIVERSITY OF YORK MSc Examinations 2004 MATHEMATICS Networks Time Allowed: 3 hours. Answer 4 questions. Standard calculators will be provided but should be unnecessary. 1 Turn over 2 continued on next

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

8.1 Concentration inequality for Gaussian random matrix (cont d)

8.1 Concentration inequality for Gaussian random matrix (cont d) MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs Saeed Ahmadizadeh a, Iman Shames a, Samuel Martin b, Dragan Nešić a a Department of Electrical and Electronic Engineering, Melbourne

More information

Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies

Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies IEEE TRANSACTIONS ON AUTOMATIC CONTROL, SUBMITTED FOR PUBLICATION AS A TECHNICAL NOTE. 1 Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies Wei Ren, Student Member,

More information

Distributed Clock Synchronization over Wireless Networks: Algorithms and Analysis

Distributed Clock Synchronization over Wireless Networks: Algorithms and Analysis Distributed Clock Synchronization over Wireless Networks: Algorithms and Analysis Arvind Giridhar and P. R. Kumar Abstract We analyze the spatial smoothing algorithm of Solis, Borkar and Kumar [1] for

More information

Reverse Edge Cut-Set Bounds for Secure Network Coding

Reverse Edge Cut-Set Bounds for Secure Network Coding Reverse Edge Cut-Set Bounds for Secure Network Coding Wentao Huang and Tracey Ho California Institute of Technology Michael Langberg University at Buffalo, SUNY Joerg Kliewer New Jersey Institute of Technology

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

On Spectral Properties of the Grounded Laplacian Matrix

On Spectral Properties of the Grounded Laplacian Matrix On Spectral Properties of the Grounded Laplacian Matrix by Mohammad Pirani A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Science

More information

Mean-field dual of cooperative reproduction

Mean-field dual of cooperative reproduction The mean-field dual of systems with cooperative reproduction joint with Tibor Mach (Prague) A. Sturm (Göttingen) Friday, July 6th, 2018 Poisson construction of Markov processes Let (X t ) t 0 be a continuous-time

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

6 Markov Chain Monte Carlo (MCMC)

6 Markov Chain Monte Carlo (MCMC) 6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution

More information

Networked Sensing, Estimation and Control Systems

Networked Sensing, Estimation and Control Systems Networked Sensing, Estimation and Control Systems Vijay Gupta University of Notre Dame Richard M. Murray California Institute of echnology Ling Shi Hong Kong University of Science and echnology Bruno Sinopoli

More information

Robust Computation of Aggregates in Wireless Sensor Networks: Distributed Randomized Algorithms and Analysis

Robust Computation of Aggregates in Wireless Sensor Networks: Distributed Randomized Algorithms and Analysis Robust Computation of Aggregates in Wireless Sensor Networks: Distributed Randomized Algorithms and Analysis Jen-Yeu Chen, Gopal Pandurangan, Dongyan Xu Abstract A wireless sensor network consists of a

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

Distributed Algorithms for Consensus and Coordination in the Presence of Packet-Dropping Communication Links

Distributed Algorithms for Consensus and Coordination in the Presence of Packet-Dropping Communication Links COORDINATED SCIENCE LABORATORY TECHNICAL REPORT UILU-ENG-11-2208 (CRHC-11-06) 1 Distributed Algorithms for Consensus and Coordination in the Presence of Packet-Dropping Communication Links Part II: Coefficients

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

E-Companion to The Evolution of Beliefs over Signed Social Networks

E-Companion to The Evolution of Beliefs over Signed Social Networks OPERATIONS RESEARCH INFORMS E-Companion to The Evolution of Beliefs over Signed Social Networks Guodong Shi Research School of Engineering, CECS, The Australian National University, Canberra ACT 000, Australia

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

Combinatorial semigroups and induced/deduced operators

Combinatorial semigroups and induced/deduced operators Combinatorial semigroups and induced/deduced operators G. Stacey Staples Department of Mathematics and Statistics Southern Illinois University Edwardsville Modified Hypercubes Particular groups & semigroups

More information

3. The Voter Model. David Aldous. June 20, 2012

3. The Voter Model. David Aldous. June 20, 2012 3. The Voter Model David Aldous June 20, 2012 We now move on to the voter model, which (compared to the averaging model) has a more substantial literature in the finite setting, so what s written here

More information

Stability Analysis of Stochastically Varying Formations of Dynamic Agents

Stability Analysis of Stochastically Varying Formations of Dynamic Agents Stability Analysis of Stochastically Varying Formations of Dynamic Agents Vijay Gupta, Babak Hassibi and Richard M. Murray Division of Engineering and Applied Science California Institute of Technology

More information

Robust Network Codes for Unicast Connections: A Case Study

Robust Network Codes for Unicast Connections: A Case Study Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,

More information

Walks, Springs, and Resistor Networks

Walks, Springs, and Resistor Networks Spectral Graph Theory Lecture 12 Walks, Springs, and Resistor Networks Daniel A. Spielman October 8, 2018 12.1 Overview In this lecture we will see how the analysis of random walks, spring networks, and

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

Data Gathering and Personalized Broadcasting in Radio Grids with Interferences

Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Jean-Claude Bermond a,b,, Bi Li b,a,c, Nicolas Nisse b,a, Hervé Rivano d, Min-Li Yu e a Univ. Nice Sophia Antipolis, CNRS,

More information

Random Walks on Graphs. One Concrete Example of a random walk Motivation applications

Random Walks on Graphs. One Concrete Example of a random walk Motivation applications Random Walks on Graphs Outline One Concrete Example of a random walk Motivation applications shuffling cards universal traverse sequence self stabilizing token management scheme random sampling enumeration

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 16, AUGUST 15, Shaochuan Wu and Michael G. Rabbat. A. Related Work

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 16, AUGUST 15, Shaochuan Wu and Michael G. Rabbat. A. Related Work IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 16, AUGUST 15, 2013 3959 Broadcast Gossip Algorithms for Consensus on Strongly Connected Digraphs Shaochuan Wu and Michael G. Rabbat Abstract We study

More information