Hegselmann-Krause Dynamics: An Upper Bound on Termination Time


B. Touri, Coordinated Science Laboratory, University of Illinois, Urbana, IL 61801, touri@illinois.edu
A. Nedić, Industrial and Enterprise Systems Engineering Dept., University of Illinois, Urbana, IL 61801, angelia@illinois.edu

Abstract — We discuss the Hegselmann-Krause model for opinion dynamics in discrete time for a collection of homogeneous agents. Each agent's opinion is modeled by a scalar variable, and the neighbors of an agent are defined by a symmetric confidence level. We provide an upper bound for the termination time of the dynamics by using a Lyapunov-type approach. Specifically, through the use of a properly defined adjoint dynamics, we construct a Lyapunov comparison function that decreases along the trajectory of the opinion dynamics. Using this function, we develop a novel upper bound on the termination time that has order m² in terms of the spread in the initial opinion profile.

(This research is supported by NSF Award CMMI-0742538.)

I. INTRODUCTION

Recently, mathematical modeling of social networks has gained a lot of attention, and several models for opinion dynamics have been proposed and studied (see, for example, [7], [10], [4]). These models have also been used in distributed engineered systems to capture the dynamic interactions among autonomous agents, such as in robotic networks [3]. One of the most popular models for opinion dynamics is the Hegselmann-Krause model, proposed by R. Hegselmann and U. Krause in [7], and its gossip-type variant [5], commonly referred to as the Deffuant-Weisbuch model. In this paper we consider the Hegselmann-Krause model for a collection of agents with the same (symmetric) confidence interval. Our interest is in a discrete-time scalar model where the agents' opinions are modeled by scalar variables. The stability and convergence of the Hegselmann-Krause dynamics and its generalizations have been shown in [9], [10], and [1].
Also, several generalizations of this dynamics to continuous time and to time-varying networks have been proposed [6], [2], [11], [8]. Among the many issues that have remained open, even for the original Hegselmann-Krause dynamics, is the provision of bounds on the termination time of the dynamics, which is the time required for the dynamics to reach its steady state. While it is known [1] that the termination time is finite, bounds on this time have not been fully investigated. In [3], it is shown that the termination time of the dynamics is at least of the order of m and at most of the order of m⁵, where m is the number of agents in the model. In this work, we further investigate the termination time and provide an upper bound that is of order m² in terms of the spread of the initial opinion profile of the agents.

The structure of the paper is as follows: in Section II, we discuss the Hegselmann-Krause dynamics as it appeared in [7] and provide our notation. Then, in Section III, we introduce the concept of adjoint dynamics, which plays a central role in our analysis. Based on the properties of the dynamics and the use of a Lyapunov comparison function, we develop an upper bound for the convergence time in Section IV. We conclude our discussion in Section V.

II. HEGSELMANN-KRAUSE OPINION DYNAMICS

Here, we discuss the Hegselmann-Krause opinion dynamics model [7], where m agents interact in time. We use 1, 2, ..., m to index the agents and let [m] = {1, ..., m}. The interactions occur at time instances indexed by nonnegative integers t = 0, 1, 2, .... At each time t, agent i has an opinion that is represented by a scalar x_i(t) ∈ R. The collection {x_i(t) | i ∈ [m]} is termed the opinion profile at time t. Initially, each agent i ∈ [m] has an opinion x_i(0) ∈ R.
At each time t, the agents interact with their neighbors and update their opinions, where the neighbors are determined by the difference of the opinions and a prescribed maximum difference ε > 0. The scalar ε is termed the confidence, and it limits the agents' interactions in time. Specifically, the set of neighbors of agent i at time t, denoted by N_i(t), is given by:

N_i(t) = {j ∈ [m] : |x_i(t) − x_j(t)| ≤ ε}.

The opinion profile evolves in time according to the following dynamics:

x_i(t+1) = (1/|N_i(t)|) Σ_{j ∈ N_i(t)} x_j(t)   for t ≥ 0,   (1)

where |N_i(t)| denotes the cardinality of the set N_i(t). Note that the opinion dynamics is completely determined by the confidence value ε and the initial profile {x_i(0) | i ∈ [m]}. To represent the dynamics compactly, we define the matrix A(t) with entries given by

A_ij(t) = 1/|N_i(t)| if j ∈ N_i(t), and A_ij(t) = 0 otherwise.

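The update rule (1) and the matrix A(t) can be simulated directly. The following sketch is ours (the function names hk_matrix and hk_run are not from the paper), and it detects termination with a small floating-point tolerance rather than exact equality:

```python
import numpy as np

def hk_matrix(x, eps):
    # A_ij(t) = 1/|N_i(t)| if j is in N_i(t) = {j : |x_i - x_j| <= eps}, else 0.
    m = len(x)
    A = np.zeros((m, m))
    for i in range(m):
        Ni = np.abs(x - x[i]) <= eps   # boolean mask of agent i's neighbors
        A[i, Ni] = 1.0 / Ni.sum()
    return A

def hk_run(x0, eps, max_steps=10_000, tol=1e-12):
    # Iterate x(t+1) = A(t) x(t) until the profile stops changing; returns
    # the trajectory [x(0), ..., x(T)] and the termination time T.
    xs = [np.asarray(x0, dtype=float)]
    for t in range(max_steps):
        x_next = hk_matrix(xs[-1], eps) @ xs[-1]
        if np.max(np.abs(x_next - xs[-1])) <= tol:
            return xs, t
        xs.append(x_next)
    raise RuntimeError("no termination within max_steps")

# Three agents within mutual reach merge into one cluster; the agent at 10 is isolated.
xs, T = hk_run([0.0, 1.0, 2.0, 10.0], eps=1.5)
# T == 2, and the final profile is [1.0, 1.0, 1.0, 10.0]
```

Because the map from x(t) to x(t+1) depends only on x(t), the first time the profile repeats it is a fixed point, so the returned t coincides with the termination time T defined in Section III.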
The dynamics (1) can now be written as follows:

x(t+1) = A(t) x(t),   (2)

where x(t) is the column vector with entries x_i(t), i ∈ [m]. It is well known that the Hegselmann-Krause opinion dynamics is stable and converges to a limit point in finite time [1], [7]; see also [9], [10], [3]. A lower bound and an upper bound for the termination time have been discussed in [3], where it has been demonstrated that the termination time is at least of the order of the number m of agents, and at most of the order of m⁵. In this paper, we are interested in improving the upper bound.

A few words on our notation are in place. We view vectors as columns, and use x' to denote the transpose of a vector x and A' for the transpose of a matrix A. We write e_i to denote the unit vector with ith coordinate equal to 1, and e to denote the vector with all entries equal to 1. For a vector x, we use x_i or [x]_i to denote its ith coordinate value. A vector v is stochastic if v_i ≥ 0 and Σ_i v_i = 1. Given a vector v ∈ R^m, diag(v) denotes the m × m diagonal matrix with v_i, i ∈ [m], on its diagonal. For a matrix A, we write A_ij or [A]_ij to denote its ijth entry. A matrix A is stochastic if Ae = e and A_ij ≥ 0 for all i, j. We often abbreviate the double summation Σ_{i=1}^m Σ_{j=1}^m with Σ_{i,j}, and the summation over pairs Σ_{i=1}^m Σ_{j=i+1}^m with Σ_{i<j}. We write |S| to denote the cardinality of a set S with finitely many elements.

III. ADJOINT DYNAMICS

We next discuss the adjoint dynamics for the Hegselmann-Krause dynamics. It has been shown in [12] that, for the Hegselmann-Krause dynamics, there exists a sequence of stochastic vectors {π(t)} such that

π'(t) = π'(t+1) A(t)   for all t ≥ 0.   (3)

This dynamics is the adjoint of the original dynamics (2). Note that the adjoint dynamics evolves backward in time. It can be seen that the adjoint dynamics for the Hegselmann-Krause model is not necessarily unique. However, some properties characterize all possible adjoint dynamics.
In particular, one of the properties that is important in the further development is captured by the following relation:

π'(t+1) x(t+1) = π'(t) x(t)   for all t ≥ 0,   (4)

which is valid for the dynamics {x(t)} and any of its adjoint dynamics {π(t)}. Relation (4) is obtained directly from the definitions of the Hegselmann-Krause dynamics in (2) and its adjoint dynamics in (3).

Now, we construct an adjoint dynamics {π̂(t)} that is particularly useful for our development of an upper bound on the termination time. We let T be the termination time of the dynamics {x(t)}, which is the instance when all agents reach their corresponding steady-state opinions. Formally, the termination time T of the dynamics in (2) is the smallest t ≥ 0 such that x_i(k+1) = x_i(k) for all k ≥ t and for all i ∈ [m], i.e.,

T = min {t ≥ 0 : x_i(k+1) = x_i(k) for all i ∈ [m] and all k ≥ t}.

As shown in [1], the termination time T is finite. We can also define the termination time of the dynamics for any agent i ∈ [m]. For this, at time k ≥ 0, let the connectivity graph be the graph on the vertex set [m] with the edge set

E(k) = {{i, j} : i, j ∈ [m], |x_i(k) − x_j(k)| ≤ ε}.

For i ∈ [m], we let C_i(k) be the connected component that contains agent i, i.e., the subset of agents j for which there is a path between i and j in the connectivity graph. Based on this, we define

T_i = min {t ≥ 0 : x_i(t) = x_j(t) for all j ∈ C_i(t)}.

In other words, T_i is the first time at which all the agents in the connected component containing the ith agent reach an agreement. It can be shown that, in fact, T = max_{i ∈ [m]} T_i. Let S ⊆ [m] be the index set of the agents whose termination time is the same as T, i.e., S = {i ∈ [m] : T_i = T}. Now, we define a special adjoint dynamics. Let π̂(T) be given by

π̂_i(T) = 1/|S| for i ∈ S,   π̂_i(T) = 0 for i ∉ S.   (5)

Consider the adjoint process {π̂(t)} defined by

π̂'(t) = π̂'(t+1) A(t) for t = 0, ..., T−1,   π̂(t) = π̂(t+1) for t ≥ T.   (6)
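Numerically, the special adjoint dynamics of (5)-(6) can be built by one forward pass that records the matrices A(t), the termination time T, and the set S, followed by a backward pass. The sketch below is ours (the function names are not from the paper); agreement and termination are tested with floating-point tolerances:

```python
import numpy as np

def hk_matrix(x, eps):
    # A_ij(t) = 1/|N_i(t)| if |x_i - x_j| <= eps, and 0 otherwise.
    m = len(x)
    A = np.zeros((m, m))
    for i in range(m):
        Ni = np.abs(x - x[i]) <= eps
        A[i, Ni] = 1.0 / Ni.sum()
    return A

def components(x, eps):
    # Connected components of the connectivity graph: for scalar opinions these
    # are maximal runs of the sorted profile whose consecutive gaps are <= eps.
    order = np.argsort(x)
    comps, cur = [], [order[0]]
    for a, b in zip(order, order[1:]):
        if x[b] - x[a] <= eps:
            cur.append(b)
        else:
            comps.append(cur)
            cur = [b]
    comps.append(cur)
    return comps

def hk_adjoint(x0, eps, max_steps=10_000, tol=1e-12):
    # Forward pass: trajectory x(0), ..., x(T) and matrices A(0), ..., A(T-1).
    xs, As = [np.asarray(x0, dtype=float)], []
    for _ in range(max_steps):
        A = hk_matrix(xs[-1], eps)
        x_next = A @ xs[-1]
        if np.max(np.abs(x_next - xs[-1])) <= tol:
            break
        As.append(A)
        xs.append(x_next)
    T = len(As)
    # T_i = first time the connected component of agent i is in agreement;
    # S = {i : T_i = T} gets equal weight in pi_hat(T), as in (5).
    Ti = np.full(len(xs[0]), -1)
    for t, x in enumerate(xs):
        for comp in components(x, eps):
            if x[comp].max() - x[comp].min() <= tol:
                for i in comp:
                    if Ti[i] < 0:
                        Ti[i] = t
    S = np.flatnonzero(Ti == T)
    pi = np.zeros(len(xs[0]))
    pi[S] = 1.0 / len(S)
    # Backward pass (6): pi_hat'(t) = pi_hat'(t+1) A(t) for t = T-1, ..., 0.
    pis = [pi]
    for t in range(T - 1, -1, -1):
        pis.append(pis[-1] @ As[t])
    pis.reverse()                  # pis[t] is pi_hat(t) for t = 0, ..., T
    return xs, pis
```

On the example of four agents at 0, 1, 2, 10 with ε = 1.5, the vectors π̂(t) are stochastic, place zero weight on the isolated agent (whose T_i = 0 < T, consistent with Lemma 1 below), and the products π̂'(t)x(t) agree for all t, as relation (4) requires.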
Basically, the adjoint dynamics {π̂(t)} is static after time T, while for times t with 0 ≤ t ≤ T−1 it is constructed backward in time, starting with π̂(T) as the initial vector. We note that the vectors π̂(t) are stochastic, since π̂(T) is a stochastic vector and each A(t) is a stochastic matrix. We have the following result for {π̂(t)}, which will be useful in the further development.

Lemma 1: Consider the adjoint dynamics in (5)-(6). For every agent i ∈ [m] with T_i < T, the following holds: π̂_i(t) = 0 for all t ≥ T_i.

Proof: Let i be an agent whose termination time T_i is such that T_i < T. Since the dynamics for this agent terminates at time T_i, its neighbor set must have stabilized, i.e., we must have N_i(t) = N_i(T_i) for all t ≥ T_i. For convenience, let us denote the set N_i(T_i) by S̄. Then, the dynamics of every agent j ∈ S̄ also terminates at time T_i, and

N_j(t) = S̄ for all t ≥ T_i and all j ∈ S̄.   (7)

Since T_i < T, it follows that N_j(T) = S̄ for all j ∈ S̄. Furthermore, we must have S̄ ∩ S = ∅ by the definition of the set S. Then, by the definition of the vector π̂(T) in (5), it follows that

π̂_j(T) = 0 for all j ∈ S̄.   (8)

Now, since the adjoint dynamics π̂(t) is static after time T, we immediately have π̂_j(t) = 0 for all j ∈ S̄ and all t ≥ T. We now show that π̂_j(t) = 0 for all j ∈ S̄ and all times t with T_i ≤ t < T. We prove this by backward induction on t. For t = T, the relation holds, as seen from (8). Assume now that the relation holds for some t+1 with T_i ≤ t+1 ≤ T, and consider the preceding time t. By the definition of π̂(t) in (6) and the symmetry of the neighbor relation, we have for any j ∈ S̄,

π̂_j(t) = Σ_{l ∈ N_j(t)} π̂_l(t+1)/|N_l(t)| = Σ_{l ∈ S̄} π̂_l(t+1)/|S̄| = 0,

where the second equality follows from (7), while the last equality follows from the induction hypothesis. Thus, we have π̂_j(t) = 0 for all j ∈ S̄ and all t ≥ T_i. The stated result follows since i ∈ S̄.

The following result will also be used in the further development. It provides a necessary and sufficient condition for an agreement of two agents.

Lemma 2: Let x(0) ∈ R^m be ordered so that x_1(0) ≤ x_2(0) ≤ ... ≤ x_m(0). Then, we have x_i(t+1) = x_{i+1}(t+1) for some t ≥ 0 and 1 ≤ i ≤ m−1 if and only if N_i(t) = N_{i+1}(t).

Proof: The "if" part of the statement follows immediately from the definition of the dynamics in (1). To show the "only if" part, let the average of the opinions {x_i(t) : p ≤ i ≤ q}, with p ≤ q, be denoted by a_p^q(t), i.e.,

a_p^q(t) = (1/(q−p+1)) Σ_{i=p}^{q} x_i(t).

Then, for 1 ≤ p ≤ q < m, we have a_p^q(t) ≤ x_{q+1}(t) and hence:

a_p^q(t) = [(q−p+1)/(q−p+2)] a_p^q(t) + [1/(q−p+2)] a_p^q(t) ≤ [(q−p+1)/(q−p+2)] a_p^q(t) + [1/(q−p+2)] x_{q+1}(t) = (1/(q−p+2)) Σ_{i=p}^{q+1} x_i(t) = a_p^{q+1}(t).

Thus a_p^q(t) ≤ a_p^{q+1}(t). Further, note that if x_q(t) < x_{q+1}(t), then a_p^q(t) < a_p^{q+1}(t). Similarly, we have a_{p−1}^q(t) ≤ a_p^q(t) for 1 < p ≤ q ≤ m, and if x_{p−1}(t) < x_p(t), then a_{p−1}^q(t) < a_p^q(t). Putting these relations together, we obtain for p ≤ q,

a_{p−1}^q(t) ≤ a_p^q(t) ≤ a_p^{q+1}(t),   (9)

where each of the inequalities is strict if the corresponding inequality in the profile ordering x_{p−1}(t) ≤ x_p(t) ≤ ... ≤ x_{q+1}(t) is strict. Now, let x_i(t+1) = x_{i+1}(t+1).
Note that in this case we must have N_i(t) ∩ N_{i+1}(t) ≠ ∅; for otherwise we would have x_i(t+1) ≤ x_i(t) < x_{i+1}(t) ≤ x_{i+1}(t+1), thus contradicting x_i(t+1) = x_{i+1}(t+1). Let us assume that x_i(t+1) = a_{p₁}^{q₁}(t) for some p₁ ≤ q₁ and x_{i+1}(t+1) = a_{p₂}^{q₂}(t) for some p₂ ≤ q₂. Since N_i(t) ∩ N_{i+1}(t) ≠ ∅, we must have p₂ ≤ q₁, implying p₁ ≤ p₂ ≤ q₁ ≤ q₂. Thus, using the inequalities in (9), we obtain

x_i(t+1) = a_{p₁}^{q₁}(t) ≤ a_{p₂}^{q₁}(t) ≤ a_{p₂}^{q₂}(t) = x_{i+1}(t+1).

Note that if N_i(t) ≠ N_{i+1}(t), then at least one of the above inequalities is strict, because in this case either x_{p₁}(t) < x_{p₂}(t) or x_{q₁}(t) < x_{q₂}(t). Thus, the claim follows.

We next establish an important property of the adjoint dynamics {π̂(t)} for the later development. The result makes use of Lemmas 1 and 2. From now on, we assume that T ≥ 1.

Lemma 3: Let {π̂(t)} be the adjoint dynamics as given in (6), and consider a time t ≥ 0 such that t+1 < T. Then, for any i ∈ [m] with π̂_i(t+1) > 0, there exists j ∈ N_i(t) such that π̂_j(t+1) ≥ π̂_i(t+1)/2 and N_j(t) ≠ N_i(t).

Proof: Without loss of generality, we assume that the ordering of the initial profile is x_1(0) ≤ x_2(0) ≤ ... ≤ x_m(0), for otherwise we can re-label the agents. Let us define the closest upper and the closest lower neighbor of an agent i ∈ [m], respectively, as follows:

ī(t) = min {j ∈ N_i(t) : x_i(t) < x_j(t)},
i_(t) = max {l ∈ N_i(t) : x_l(t) < x_i(t)}.

Note that the closest upper (lower) neighbor of an agent i ∈ [m] need not exist. Define the following subsets of N_i(t+1):

N_i^+(t+1) = {j ∈ N_i(t+1) : x_i(t+1) ≤ x_j(t+1)},
N_i^−(t+1) = {l ∈ N_i(t+1) : x_l(t+1) ≤ x_i(t+1)}.

By the definition of the adjoint dynamics in (6), we have for t+1 < T,

π̂_i(t+1) = Σ_{j ∈ N_i(t+1)} π̂_j(t+2) A_ji(t+1) ≤ Σ_{j ∈ N_i^−(t+1)} π̂_j(t+2) A_ji(t+1) + Σ_{j ∈ N_i^+(t+1)} π̂_j(t+2) A_ji(t+1),

where the inequality holds since N_i(t+1) = N_i^−(t+1) ∪ N_i^+(t+1). Thus, we conclude that either

π̂_i(t+1) ≤ 2 Σ_{j ∈ N_i^−(t+1)} π̂_j(t+2) A_ji(t+1),   or   π̂_i(t+1) ≤ 2 Σ_{j ∈ N_i^+(t+1)} π̂_j(t+2) A_ji(t+1).
Without loss of generality, assume that

π̂_i(t+1) ≤ 2 Σ_{j ∈ N_i^+(t+1)} π̂_j(t+2) A_ji(t+1).

Now, let i be such that π̂_i(t+1) > 0 and consider the following two cases:

Case 1: ī(t+1) exists. Then, we must have i ≤ m−1.

Furthermore, we have ī(t+1) ∈ N_i^+(t+1) and N_i^+(t+1) ⊆ N_{ī(t+1)}(t+1). Therefore,

π̂_{ī(t+1)}(t+1) = Σ_{j ∈ N_{ī(t+1)}(t+1)} π̂_j(t+2) A_jī(t+1) ≥ Σ_{j ∈ N_i^+(t+1)} π̂_j(t+2) A_jī(t+1) = Σ_{j ∈ N_i^+(t+1)} π̂_j(t+2) A_ji(t+1) ≥ π̂_i(t+1)/2,

where the second equality follows from the fact that the positive entries in each row of A(t+1) are identical, and the last inequality follows from the assumption above.

We next show that ī(t+1) ∈ N_i(t). To arrive at a contradiction, assume that ī(t+1) ∉ N_i(t). Since ī(t) ∈ N_i(t), it follows that ī(t) < ī(t+1). Thus, there must be at least one agent j whose value x_j(t) lies between the values x_i(t) and x_{ī(t+1)}(t). Assume that there are τ agents between i and ī(t+1), i.e., we have

x_i(t) ≤ x_{i+1}(t) ≤ ... ≤ x_{i+τ}(t) < x_{ī(t+1)}(t),   (10)

where the last inequality is strict since ī(t+1) ∉ N_i(t). At time t+1, the same order as in (10) is preserved:

x_i(t+1) ≤ x_{i+1}(t+1) ≤ ... ≤ x_{i+τ}(t+1) ≤ x_{ī(t+1)}(t+1).

By the definition of ī(t+1) (as the closest neighbor with a larger opinion value than that of agent i), it follows that

x_i(t+1) = x_{i+1}(t+1) = ... = x_{i+τ}(t+1) < x_{ī(t+1)}(t+1).

According to Lemma 2, these profiles are equal at time t+1 if and only if N_i(t) = N_{i+1}(t) = ... = N_{i+τ}(t). Since ī(t+1) ∉ N_i(t), it follows that (at time t) the agents in the set {i, i+1, ..., i+τ} are connected to each other, but all of them are disconnected from agent ī(t+1). As a consequence, i and ī(t+1) cannot be neighbors at time t+1, which is a contradiction. Hence, we must have ī(t+1) ∈ N_i(t). Moreover, since x_{ī(t+1)}(t+1) > x_i(t+1), the sets N_{ī(t+1)}(t) and N_i(t) cannot be equal, for otherwise the two agents would compute the same average in (1). Thus, the assertion holds with j = ī(t+1).

Case 2: ī(t+1) does not exist. Then, for any j ∈ N_i^+(t+1), we have x_j(t+1) = x_i(t+1). Thus, N_i^+(t+1) ⊆ N_i^−(t+1) and, hence, N_i^−(t+1) = N_i(t+1). If i_(t+1) does not exist, then x_l(t+1) = x_i(t+1) for all l ∈ N_i(t+1). Hence, it follows that t+1 is the termination time for agent i. Since t+1 < T, by Lemma 1 it follows that π̂_i(t+1) = 0, which contradicts the assumption π̂_i(t+1) > 0. Therefore, i_(t+1) must exist.
Since N_i^−(t+1) ⊆ N_{i_(t+1)}(t+1), using the same line of argument as in the preceding case, we can show that

π̂_{i_(t+1)}(t+1) ≥ π̂_i(t+1)/2   and   N_{i_(t+1)}(t) ≠ N_i(t),

implying that the assertion holds with j = i_(t+1).

IV. UPPER BOUND FOR THE TERMINATION TIME

In this section, we establish an improved upper bound for the termination time of the Hegselmann-Krause dynamics. Our analysis uses a Lyapunov comparison function that is constructed by using the adjoint dynamics in (5)-(6). However, we start by discussing a Lyapunov function defined by a (generic) adjoint dynamics in (3). The comparison function is V_π(x, t), defined for all x ∈ R^m and t ≥ 0 by:

V_π(x, t) = Σ_{i=1}^m π_i(t) (x_i − π'(t)x)²,   (11)

where {π(t)} is the adjoint dynamics in (3). This function has been proposed and investigated in [12]. In particular, the decrease of this comparison function along the trajectories of the Hegselmann-Krause dynamics {x(t)} and its adjoint dynamics is of special importance in our analysis. We use the following result, shown in [12, Theorem 4.3].

Theorem 1: For any t ≥ 0, we have

V_π(x(t+1), t+1) = V_π(x(t), t) − Σ_{i<j} H_ij(t) (x_i(t) − x_j(t))²,

where H(t) = A'(t) diag(π(t+1)) A(t).

As an immediate consequence of Theorem 1 and the definition of the adjoint dynamics in (3), by summing both sides of the asserted relation for t = 0, ..., τ, we have for all τ with 0 ≤ τ ≤ T,

V_π(x(τ+1), τ+1) = V_π(x(0), 0) − Σ_{t=0}^{τ} Σ_{i<j} H_ij(t) (x_i(t) − x_j(t))².   (12)

We next derive a lower bound for the summands that appear with the negative sign in relation (12). We do so through the use of two auxiliary relations. The first relation is valid for any adjoint dynamics and gives a lower bound in terms of the maximal opinion spread in the neighborhood of every agent. Specifically, for every l ∈ [m] and every t ≥ 0, let

d_l(t) = max_{i ∈ N_l(t)} x_i(t) − min_{j ∈ N_l(t)} x_j(t).

Thus, d_l(t) measures the opinion spread that agent l observes at time t. We have the following result.
Lemma 4: For any adjoint dynamics {π(t)} of (3), we have for all t ≥ 0,

Σ_{i,j} H_ij(t) (x_i(t) − x_j(t))² ≥ (1/(4m)) Σ_{l=1}^m π_l(t+1) d_l(t)².

Proof: Since H(t) = A'(t) diag(π(t+1)) A(t), it follows that H_ij(t) = Σ_{l=1}^m π_l(t+1) A_li(t) A_lj(t). Therefore, by exchanging the order of summation, we obtain

Σ_{i,j} H_ij(t) (x_i(t) − x_j(t))² = Σ_{l=1}^m π_l(t+1) Σ_{i,j} A_li(t) A_lj(t) (x_i(t) − x_j(t))².

Using the definition of the matrix A(t), we obtain

A_li(t) A_lj(t) = 1/|N_l(t)|² if i, j ∈ N_l(t), and A_li(t) A_lj(t) = 0 otherwise,

thus implying

Σ_{i,j} H_ij(t) (x_i(t) − x_j(t))² = Σ_{l=1}^m (π_l(t+1)/|N_l(t)|²) Σ_{i,j ∈ N_l(t)} (x_i(t) − x_j(t))².   (13)

We next show that for all t ≥ 0,

Σ_{i,j ∈ N_l(t)} (x_i(t) − x_j(t))² ≥ (1/4) |N_l(t)| d_l(t)².

If d_l(t) = 0, then the assertion follows immediately. So, suppose that d_l(t) ≠ 0. Let us define lb = argmin {x_i(t) : i ∈ N_l(t)} and ub = argmax {x_j(t) : j ∈ N_l(t)}. In other words, lb and ub are the agents with the smallest and the largest opinion values that agent l observes in its neighborhood. Therefore, d_l(t) = x_ub(t) − x_lb(t) and, since d_l(t) ≠ 0, we have lb ≠ ub. Letting Σ_{i,j ∈ N_l(t)} denote the double summation Σ_{i ∈ N_l(t)} Σ_{j ∈ N_l(t)}, we have

Σ_{i,j ∈ N_l(t)} (x_i(t) − x_j(t))² ≥ Σ_{j ∈ N_l(t)} (x_ub(t) − x_j(t))² + Σ_{i ∈ N_l(t)} (x_i(t) − x_lb(t))² = Σ_{j ∈ N_l(t)} {(x_ub(t) − x_j(t))² + (x_j(t) − x_lb(t))²} ≥ (1/2) |N_l(t)| (x_ub(t) − x_lb(t))² ≥ (1/4) |N_l(t)| d_l(t)².   (14)

The second inequality follows from (x_ub(t) − x_j(t))² + (x_j(t) − x_lb(t))² ≥ (1/2)(x_ub(t) − x_lb(t))², which holds since the function s ↦ (a − s)² + (s − b)² attains its minimum at s = (a+b)/2. By combining relations (13) and (14), we obtain

Σ_{i,j} H_ij(t) (x_i(t) − x_j(t))² ≥ (1/4) Σ_{l=1}^m (π_l(t+1)/|N_l(t)|) d_l(t)².

The desired relation follows from |N_l(t)| ≤ m for all t.

Now, we focus on a relation that is valid for the special adjoint dynamics of (5)-(6). Specifically, we estimate the sum Σ_{l=1}^m π̂_l(t+1) d_l(t)².

Lemma 5: For the dynamics of (5)-(6), we have for all t with 0 ≤ t < T−1,

Σ_{l=1}^m π̂_l(t+1) d_l(t)² ≥ ε²/(4m).

Proof: Let l* = argmax_{l ∈ [m]} π̂_l(t+1). Since π̂(t+1) is a stochastic vector, it follows that π̂_{l*}(t+1) ≥ 1/m. Suppose that d_{l*}(t) > ε/2. In this case, it immediately follows that

Σ_{l=1}^m π̂_l(t+1) d_l(t)² ≥ π̂_{l*}(t+1) d_{l*}(t)² > (1/m)(ε²/4) = ε²/(4m),

thus showing the desired relation. Suppose now that d_{l*}(t) ≤ ε/2. Since t+1 < T, by Lemma 3 there exists j ∈ N_{l*}(t) such that π̂_j(t+1) ≥ π̂_{l*}(t+1)/2 ≥ 1/(2m) and N_j(t) ≠ N_{l*}(t). Next, we prove that d_j(t) > ε. Since d_{l*}(t) ≤ ε/2 and j ∈ N_{l*}(t), it follows that |x_{l*}(t) − x_j(t)| ≤ ε/2.
Also, for any i ∈ N_{l*}(t), we have

|x_i(t) − x_j(t)| ≤ |x_i(t) − x_{l*}(t)| + |x_{l*}(t) − x_j(t)| ≤ ε,

implying i ∈ N_j(t). Thus, N_{l*}(t) ⊆ N_j(t). Since N_j(t) ≠ N_{l*}(t), there exists i ∈ N_j(t) \ N_{l*}(t). Then, |x_i(t) − x_{l*}(t)| > ε and, since both i and l* belong to N_j(t), it follows that d_j(t) > ε. Therefore, we obtain

Σ_{l=1}^m π̂_l(t+1) d_l(t)² ≥ π̂_j(t+1) d_j(t)² > ε²/(2m) ≥ ε²/(4m).

All in all, in either case we have for t < T−1,

Σ_{l=1}^m π̂_l(t+1) d_l(t)² ≥ ε²/(4m).

We are now ready to provide our main result, which yields an upper bound for the termination time T.

Theorem 2: For the Hegselmann-Krause dynamics, we have

T ≤ (32m²/ε²) V_π̂(x(0), 0) + 1.

Proof: From relation (12) with τ = T−1, we obtain

V_π̂(x(T), T) = V_π̂(x(0), 0) − Σ_{t=0}^{T−1} Σ_{i<j} H_ij(t) (x_i(t) − x_j(t))².

Since H(t) is symmetric, we have Σ_{i<j} H_ij(t)(x_i(t) − x_j(t))² = (1/2) Σ_{i,j} H_ij(t)(x_i(t) − x_j(t))², so by Lemma 4 we further obtain

V_π̂(x(T), T) ≤ V_π̂(x(0), 0) − (1/(8m)) Σ_{t=0}^{T−1} Σ_{l=1}^m π̂_l(t+1) d_l(t)².

Finally, we use Lemma 5, which applies for t = 0, ..., T−2, to arrive at

V_π̂(x(T), T) ≤ V_π̂(x(0), 0) − (T−1) ε²/(32m²).

Since V_π̂(x(T), T) ≥ 0, it follows that

(T−1) ε²/(32m²) ≤ V_π̂(x(0), 0),

and the desired relation follows.

The bound of Theorem 2 shows that the convergence time is of the order of m², provided that the value V_π̂(x(0), 0)

does not depend on m. One may further analyze the bound by taking V_π̂(x(0), 0) into account. We do so by considering the worst case. Bounding V_π̂(x(0), 0) in the worst case, we have

V_π̂(x(0), 0) = Σ_{i=1}^m π̂_i(0) (x_i(0) − π̂'(0)x(0))² ≤ Σ_{i=1}^m π̂_i(0) (x_i(0) − min_{j ∈ [m]} x_j(0))² ≤ d(x(0))² Σ_{i=1}^m π̂_i(0),

where d(x(0)) = max_{i ∈ [m]} x_i(0) − min_{j ∈ [m]} x_j(0). Since π̂(0) is a stochastic vector, we obtain

V_π̂(x(0), 0) ≤ d(x(0))² ≤ m²ε².

(Here d(x(0)) ≤ mε may be assumed without loss of generality: if the initial spread exceeded mε, there would be a gap larger than ε between two consecutive ordered opinions, and the dynamics would decompose into independent subgroups.) Thus, as the worst-case bound, we have the following result:

T ≤ 32m⁴ + 1.

Thus, 32m⁴ + 1 is an upper estimate for the termination time of the Hegselmann-Krause opinion dynamics. The bound is of the order of m⁴, which is better by a factor of m than the previously known bound [3].

V. DISCUSSION

In this paper, we have considered the Hegselmann-Krause model for opinion dynamics. By choosing an appropriate adjoint dynamics and a Lyapunov comparison function, we have established an upper bound for the termination time of the dynamics. For a collection of m agents, our bound is of the order of m² under some assumptions, while it may be of the order of m⁴ in the worst case. In the worst case, our bound improves the previously known bound by a factor of m. It remains, however, to explore the tightness of these bounds.

REFERENCES

[1] V.D. Blondel, J.M. Hendrickx, and J.N. Tsitsiklis, On Krause's multi-agent consensus model with state-dependent connectivity, IEEE Transactions on Automatic Control 54 (2009), no. 11, 2586-2597.
[2] ——, Continuous-time average-preserving opinion dynamics with opinion-dependent communications, SIAM Journal on Control and Optimization 48 (2010), no. 8, 5214-5240.
[3] F. Bullo, J. Cortés, and S. Martínez, Distributed control of robotic networks, Applied Mathematics Series, Princeton University Press, 2009.
[4] C. Castellano, S. Fortunato, and V. Loreto, Statistical physics of social dynamics, Reviews of Modern Physics 81 (2009), no. 2, 591-646.
[5] G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch, Mixing beliefs among interacting agents, Advances in Complex Systems 3 (2000), 87-98.
[6] R. Hegselmann, Opinion dynamics: insights by radically simplifying models, Laws and Models in Science (2004).
[7] R. Hegselmann and U. Krause, Opinion dynamics and bounded confidence: models, analysis, and simulation, Journal of Artificial Societies and Social Simulation 5 (2002), no. 3.
[8] L. Li, A. Scaglione, A. Swami, and Q. Zhao, Opinion dynamics under the generalized Deffuant-Weisbuch model, preprint submitted to IEEE CDC, 2011.
[9] J. Lorenz, A stabilization theorem for dynamics of continuous opinions, Physica A: Statistical Mechanics and its Applications 355 (2005), no. 1, 217-223.
[10] ——, Repeated averaging and bounded confidence: modeling, analysis and simulation of continuous opinion dynamics, Ph.D. thesis, Universität Bremen, 2007.
[11] L. Moreau, Stability of multiagent systems with time-dependent communication links, IEEE Transactions on Automatic Control 50 (2005), no. 2, 169-182.
[12] B. Touri, Product of random stochastic matrices and distributed averaging, Ph.D. thesis, University of Illinois at Urbana-Champaign, 2011; to appear in the Springer Theses series.
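As a rough numerical illustration of the worst-case estimate of Section IV, the dynamics can be run from an equally spaced initial profile, a configuration commonly used as a slow instance. The sketch below is ours (the helper names are not from the paper; termination is tested with a floating-point tolerance), and it merely checks that the observed termination time respects the bound 32m⁴ + 1:

```python
import numpy as np

def hk_step(x, eps):
    # One application of (1): each agent averages the opinions within eps of its own.
    return np.array([x[np.abs(x - xi) <= eps].mean() for xi in x])

def termination_time(x0, eps, max_steps=100_000, tol=1e-12):
    # Number of steps until the profile stops changing (up to tolerance).
    x = np.asarray(x0, dtype=float)
    for t in range(max_steps):
        x_next = hk_step(x, eps)
        if np.max(np.abs(x_next - x)) <= tol:
            return t
        x = x_next
    raise RuntimeError("no termination within max_steps")

m, eps = 10, 1.0
T = termination_time(np.arange(m) * eps, eps)   # equally spaced initial opinions
assert 1 <= T <= 32 * m**4 + 1                  # worst-case estimate of Section IV
```

For m = 10, the observed termination time is far below 32m⁴ + 1 = 320001, consistent with the remark in Section V that the tightness of the bounds remains to be explored.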