Optimal consensus and opinion dynamics


DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2016

Optimal consensus and opinion dynamics

OTHMANE MAZHAR

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES


Optimal consensus and opinion dynamics

OTHMANE MAZHAR

Master's Thesis in Optimization and Systems Theory (30 ECTS credits)
Master Programme in Applied and Computational Mathematics (120 credits)
Royal Institute of Technology, year 2016
Supervisor at KTH: Xiaoming Hu
Examiner: Xiaoming Hu

TRITA-MAT-E 2016:67
ISRN-KTH/MAT/E--16/67--SE

Royal Institute of Technology
SCI School of Engineering Sciences
KTH SCI
SE Stockholm, Sweden


Abstract

In this thesis we study the influence of the communication graph on the behavior of multi-agent systems. Specifically, we investigate two issues: the first concerns the existence of a consensus control for linear dynamics, and the second is a study of the behavior of a nonlinear dynamical system related to opinion dynamics. For the finite-time optimal consensus problem for a multi-agent system, we formulate the problem as an optimization problem on a Hilbert space in order to model the graph neighborhood constraints. We then show that completeness of the graph is a necessary and sufficient condition for the existence of a linear control that guarantees consensus in finite time. As an extension of this result, we show that the optimal control we obtain is also optimal within the larger class of nonlinear controls, and that it can be implemented as an optimal control for connected but not complete graphs if we replace the neighborhood restriction by a feedback control that uses the information of all edges of the graph. The second part is a study of a modified version of the continuous opinion dynamics model introduced by Hegselmann and Krause. To modify the model we introduce stubborn agents, agents whose opinions do not change over time. Specifically, we introduce two types of such agents: one that can influence the whole distribution at once, which we call of positive influence, and another with a bounded influence, which we call of non negative influence. For each type introduced we study the topological properties of the distribution and the clustering phenomena observed, as well as the statistical properties, and we do so in the presence of one or two stubborn agents. We end this part with two possible applications of stubborn agents for reaching consensus or tracking trajectories.


Abstract

In this thesis we study the influence of the communication graph on the behavior of multi-agent systems. We investigate in particular two questions: the first concerns the existence of a consensus control law for linear dynamics, and the second studies the behavior of nonlinear dynamics related to opinion dynamics. The finite-time optimal consensus problem for a multi-agent system is formulated as an optimization problem in a Hilbert space in order to model the neighborhood constraints of the graph. We show that completeness of the graph is a necessary and sufficient condition for the existence of a linear control law that guarantees consensus in finite time. As an extension of this result we show that the optimal control law is also optimal within a larger class of nonlinear controls, and that it can be implemented as optimal for connected but not complete graphs if we replace the neighborhood constraints with a feedback control law that uses the information of all edges of the graph. The second part of this study is a modified version of the continuous opinion dynamics model introduced by Hegselmann and Krause. To modify the model we introduce stubborn agents, agents whose opinions do not change over time; more specifically, we introduce two types of agents, one that can influence the whole distribution at once, which we call of positive influence, and another with a bounded influence, which we call of non negative influence. For each introduced type we study the topological properties of the distribution and the clustering phenomena observed, but also the statistical properties, and we do so in the presence of one or two stubborn agents. We end this part with two possible applications of stubborn agents for reaching consensus or following trajectories.


Contents

1 General introduction
2 Optimal consensus for a linear multi-agent system in finite time
  2.1 Preliminaries
    2.1.1 Graph theory notations
    2.1.2 Elements of optimization in Hilbert spaces
    2.1.3 Projection matrices
  2.2 Problem statement
    2.2.1 The consensus problem model
    2.2.2 The consensus problem in a Hilbert space
    2.2.3 A linear optimal control problem
  2.3 Solution of the problem
  2.4 Extensions
  2.5 Conclusion
3 Opinion dynamics in the presence of stubborn agents
  3.1 Model presentation
  3.2 The behavior with a positive influence
    3.2.1 The effect of one stubborn static agent
    3.2.2 The effect of two stubborn static agents
  3.3 The behavior with a non negative influence
    3.3.1 The effect of one stubborn static agent
    3.3.2 The effect of two stubborn static agents
  3.4 Application: HK consensus control by a stubborn agent
  3.5 Conclusion


Chapter 1

General introduction

This master thesis consists of a study of multi-agent systems, in which we study conditions under which a consensus control can be found for linear dynamics, but also the behavior of simple nonlinear dynamics such as the so-called Hegselmann-Krause bounded confidence model.

In the first part we address the finite-time optimal consensus problem for a linear time invariant multi-agent system with graph topology constraints. Work in this area has been done in [3, 10] for a homogeneous system of agents with linear dynamics, both in finite time, where consensus is required at a fixed time T, and over an infinite horizon, where an asymptotic consensus is sought. In their papers [1, 2] Cao, Y. and Ren, W. addressed the optimal consensus problem for systems of mobile agents with single-integrator dynamics. In this setting, the authors constrain the agents to use only relative information in their controllers, and they show that the graph Laplacian matrix used in the optimal controller for the system corresponds to a complete directed graph. Another line of research on the optimal consensus problem has been taken by Semsar-Kazerooni, E. and Khorasani, K. in [5], in which the consensus requirement is imposed by the cost function. However, with such a formulation the optimal controller in general cannot be implemented with relative state information only. Important results for the finite time case, also known as the rendezvous problem, have been provided by Thunberg, J. and Hu, X. in [8], where it is shown that for a homogeneous system of agents with linear dynamics no linear, time-invariant feedback control law based on relative state information can guarantee consensus in finite time. By relative information, we mean using only the pairwise differences between the states of pairs of agents that communicate with each other. They also show that a time-varying optimal output feedback control using relative information only exists when the communication graph is complete, and that this control can be obtained from the solution of the problem without topology constraints.

The general difficulty in finding a consensus reaching control that depends only on relative information while respecting the graph topology is that there is no general characterization of the set of such controllers. To avoid this issue we formulate the consensus problem as an optimal control problem and use functional analysis techniques to impose such constraints, thus formulating the problem as a minimum norm problem in a Hilbert space, and we show that for this kind of problem the Lagrange multiplier conditions are necessary and sufficient to guarantee optimality. Then, by restricting ourselves to the class of linear time varying controls, we notice that the graph topology constraint can be fulfilled

by imposing a structural restriction on the derivative of the control, which can be taken into account using Lagrange multipliers. Solving the Lagrange multiplier conditions leaves us with two results: on the one hand we get a general formula for the consensus reaching optimal control, and on the other hand we get an algebraic condition on the graph topology that we use to determine for which graphs a consensus reaching control can be found.

In the second part we look at variants of the Hegselmann and Krause opinion dynamics with bounded confidence. The opinion dynamics model presented here is about opinion compromise between different agents. These kinds of models were introduced by Hegselmann and Krause in their original study [1] to capture the interactions that arise between agents. The general difficulty in analyzing these models comes from the state dependent topology. In their work [7, 8], V.D. Blondel, J.M. Hendrickx and J.N. Tsitsiklis address these difficulties by showing convergence properties for both the discrete and the continuous time bounded influence HK model. Here we also assume continuous opinions and that all agents have bounded confidence, in the sense that they only take into account opinions that are close to their own. This model leads to clustering of opinions, and a natural question that we investigate is how agents that decide not to change their opinions influence the entire dynamics. In this thesis we consider the model of Hegselmann and Krause with the introduction of various types of so-called stubborn agents.

Previous studies of the HK opinion dynamics model show that, even though we get a clustering phenomenon, not all initial positions that start with a connected interaction graph result in an asymptotic consensus, see for instance [2, 3, 5]. In fact, discontinuities in the graph topology take place that cannot be reversed during the interaction, and the graph becomes disconnected; this is one of the challenges created by a dynamics that depends on the state differences. The loss of connectivity can yield several clusters of agent opinions at different positions, and if a stubborn agent wants to steer the distribution as a whole to some consensus opinion, then he has to influence the different clusters. However, even under these simplistic hypotheses of one-dimensional continuous time opinions, few theoretical results exist that relate the initial distribution to the final clusters and their sizes, or that allow the final result to be controlled. Although the HK opinion dynamics model can easily be extended to higher dimensional spaces, and some of our results are easily verified in this more general setting, in this part we are mainly interested in the one dimensional continuous case. On the other hand, we try to make the model richer by introducing new agents, called stubborn, whose opinions are not influenced by the interaction. The modified model that we provide in this study guarantees opinion consensus and steering to some common value for almost any initial opinion distribution, provided that the region of influence of the stubborn agent is wide enough; it gives a way to model more social phenomena, like the existence of social lobbies and the partition of the population into a left, a right and a middle; and it finally provides a device for trajectory tracking in this simple model.

The remainder of this study is structured as follows: we start by recalling relevant results on the standard HK opinion dynamics model. We then introduce two new types of stubborn agents, one type having the ability to influence the whole distribution and the other having a bounded influence. For each of these we study different scenarios with one or two stubborn agents and with static and dynamic opinions. Finally, we show how these agents make trajectory tracking possible as an application.

Chapter 2

Optimal consensus for a linear multi-agent system in finite time

In this part we address the finite-time optimal consensus problem for a linear time invariant multi-agent system with graph topology constraints. The general difficulty in finding a consensus reaching control for this kind of system that depends only on relative information is that there is no general characterization of the set of such controls. To avoid this we start by formulating the consensus problem as an optimal control problem and we transform the graph topology constraint into an analytic constraint, thus formulating the problem as a minimum norm problem in a Hilbert space. We show that for this kind of problem the Lagrange multiplier conditions are a necessary and sufficient characterization of the optimal solution. By restricting ourselves to the class of linear time varying controls, we notice that the graph topology constraint can be fulfilled by imposing a certain structure on the gradient of the control, which is the only requirement for using Lagrange multipliers. The solution of the Lagrange multiplier system of equations gives us, on the one hand, a general formula for the consensus reaching optimal control, and on the other hand, an algebraic condition to be fulfilled by the graph topology if one hopes to find a consensus reaching control.

2.1 Preliminaries

We first start by recalling some useful definitions and results from graph theory, functional analysis and linear algebra, and establish some properties that will be used later on.

2.1.1 Graph theory notations

An undirected graph $G$ consists of a set of vertices, or nodes, denoted $V$, and a set of edges $\Lambda \subseteq V^2$, where an edge $a = (v, w) = (w, v) \in \Lambda$ connects the vertices $v, w \in V$. If every possible edge exists, the graph is said to be complete or totally connected. A path on $G$ of length $N$ from $v_0$ to $v_N$ is an ordered set of distinct vertices $\{v_0, v_1, \dots, v_N\}$ such that $(v_{i-1}, v_i) \in \Lambda$ for all $i \in [1, N]$. If a path exists from every $v_i$ to every $v_j$ the graph is said to be connected, otherwise it is disconnected.

The adjacency matrix $A$ of a graph $G$ is a square matrix of size $|V|$, the number of vertices, defined by $A_{ij} = 1$ if $(v_i, v_j) \in \Lambda$, and zero otherwise. Note that $A$ is uniquely defined by the graph up to a permutation similarity depending on the enumeration of the vertices. From the adjacency matrix $A$ we define the Laplacian of the graph $L$ as follows. Let $D$ be the matrix with the out-degree of each vertex along the diagonal. The Laplacian of the graph is defined as $L = D - A$ and the normalized Laplacian as $\mathcal{L} = D^{-1}(D - A)$, where $D^{-1}$ is the diagonal matrix of the inverses of the out-degrees, with a zero for each node with out-degree zero.

Figure 2.1: A drawing of three labeled undirected graphs G1, G2 and G3.

Example. The drawing of Figure 2.1 shows three undirected graphs G1, G2 and G3. The graph G1 is complete since the number of edges is maximal. The graphs G2 and G3 are connected but not complete. Let $G2 = (V_2, \Gamma_2)$ where $V_2 = \{1, 2, 3, 4\}$ is the set of vertices of G2 and $\Gamma_2 = \{(1, 2), (1, 4), (2, 4), (2, 3), (3, 4)\}$ is the set of edges of G2. Then the Laplacian $L_2$ and the normalized Laplacian $M_2$ of G2 are:
$$L_2 = \begin{pmatrix} 2 & -1 & 0 & -1 \\ -1 & 3 & -1 & -1 \\ 0 & -1 & 2 & -1 \\ -1 & -1 & -1 & 3 \end{pmatrix} \quad\text{and}\quad M_2 = \begin{pmatrix} 1 & -1/2 & 0 & -1/2 \\ -1/3 & 1 & -1/3 & -1/3 \\ 0 & -1/2 & 1 & -1/2 \\ -1/3 & -1/3 & -1/3 & 1 \end{pmatrix}.$$
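As a quick sanity check on this example, the following MATLAB sketch (not part of the thesis; the edge list and variable names are only illustrations) builds the adjacency matrix, the Laplacian and the normalized Laplacian of G2 directly from its edge set.

    % Laplacian and normalized Laplacian of the example graph G2,
    % whose undirected edge set is {(1,2),(1,4),(2,4),(2,3),(3,4)}.
    edges = [1 2; 1 4; 2 4; 2 3; 3 4];
    n = 4;
    A = zeros(n);
    for k = 1:size(edges, 1)
        i = edges(k, 1); j = edges(k, 2);
        A(i, j) = 1; A(j, i) = 1;        % symmetric adjacency matrix
    end
    D = diag(sum(A, 2));                 % diagonal degree matrix
    L = D - A;                           % graph Laplacian L = D - A
    M = D \ L;                           % normalized Laplacian D^{-1}(D - A)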

2.1.2 Elements of optimization in Hilbert spaces

Of interest to us are minimum norm problems with respect to a linear variety in Hilbert spaces.

Definition 1. A set $H$ is called a real Hilbert space if $H$ is an $\mathbb{R}$ vector space, together with a real functional $(\cdot,\cdot)$ on $H^2$ with the following properties:
- $(x, x) \geq 0$
- $(x, x) = 0$ if and only if $x = 0$
- $(x + y, z) = (x, z) + (y, z)$ for all $x, y, z \in H$
- $(\lambda x, y) = \lambda (x, y)$ for all $\lambda \in \mathbb{R}$ and $x, y \in H$
- $(x, y) = (y, x)$ for all $x, y \in H$
- $H$ is complete.

Proposition. The Hilbert space $H$ is a Banach space with norm $\|x\| = \sqrt{(x, x)}$.

Example. Two classical Hilbert spaces that we work with here are $L^2(\mathbb{R}^d, \lambda)$, the set of square integrable functions on $\mathbb{R}^d$ with the Lebesgue measure, and $(\mathbb{R}^l, \|\cdot\|_2)$, the $l$ dimensional Euclidean space.

One theorem in Hilbert spaces that will be important for us is the projection theorem.

Theorem (The projection theorem). Let $M$ be a closed convex set in a Hilbert space $H$. For every $x_0 \in H$ there exists a unique point $y_0 \in M$ such that
$$\|x_0 - y_0\| = \inf_{y \in M} \|x_0 - y\|.$$
Furthermore, a necessary and sufficient condition for $y_0$ to be the unique minimizing vector is that $(x_0 - y_0, y - y_0) \leq 0$ for all $y \in M$.

We will use this result in the case where $V$ is a closed linear variety.

Corollary. Let $V$ be a closed linear variety in a Hilbert space $H$ such that $V = x_0 + M$, where $M$ is a closed subspace of $H$. Then there is a unique $y_0 \in V$ of minimum norm. Furthermore, a necessary and sufficient condition for $y_0$ to be the unique minimizing vector is that $(y, y_0) = 0$ for all $y \in M$.

Another important result in optimization theory, although it establishes only necessary conditions for optimality, is the Lagrange multiplier theorem.

Definition 2. Let $F$ be a continuously differentiable function from an open set $D$ in a Banach space $X$ into a Banach space $Y$. If $x_0 \in D$ is such that $F'(x_0)$ maps $X$ onto $Y$, the point $x_0$ is said to be a regular point of the function $F$.

Theorem (Lagrange multiplier). Suppose the continuously differentiable functional $f$ has a local extremum under the constraint $H(x) = 0$ at the regular point $x_0$, and define the Lagrangian
$$L(x, z_0) = f(x) + z_0 H(x).$$
Then there exists an element $z_0 \in Z^*$ such that $x_0$ is a stationary point of $L$, i.e. $\nabla_x L(x_0, z_0) = 0$. Here $Z^*$ is the dual space of $Z$ and is identified with $Z$ in the case of a Hilbert space.

For minimum norm problems we can establish an equivalence between Lagrange duality and the projection theorem.

Theorem. Let $A$ be a linear operator from $X$ to $Y$, both Hilbert spaces, and $b \in Y$. Define $f(x) = \|x\|^2$, $H(x) = Ax - b$ and $L(x, z) = f(x) + zH(x)$, and assume $b \neq 0$ and $\{x : Ax - b = 0\} \neq \emptyset$. Then the minimum norm problem has a unique solution; moreover, $x_0$ solves the minimum norm problem
$$(P): \quad \text{minimize } \{\|x\|^2 : Ax - b = 0\}$$
if and only if $x_0 \in V$ and there exists $z_0$ such that $\nabla_x L(x_0, z_0) = 0$.

Proof. Let $M = \ker(A)$; then $M$ is a closed subspace: closed since $M$ is the inverse image of $\{0\}$ and $A$ is continuous, and a subspace since $A$ is linear. Let $y_0 \in \{x : Ax - b = 0\}$; $y_0$ is then different from $0$ since $b \neq 0$, hence $V = y_0 + M$ is a closed linear variety. By the projection theorem, there is a unique $x_0 \in V$ of minimum norm.

The only if part follows directly from the Lagrange multiplier theorem: since the minimum norm problem has a solution $x_0 \neq 0$ as $b \neq 0$, and $\nabla_x (x, x) h = 2(x, h)$ is onto except at $0$, the point $x_0$ is a regular point. By the Lagrange multiplier theorem there exists $z_0$ such that $L(x_0, z_0)$ is stationary.

The if part: suppose $x_0 \in V$ is a stationary point of the Lagrangian for some $z_0$. Then
$$\nabla_x L(x_0, z_0) = 0 \iff 2(x_0, h) + (z_0, Ah) = 0 \quad \forall h \in X$$
$$\iff 2(x_0, h) + (A^* z_0, h) = 0 \quad \forall h \in X, \text{ where } A^* \text{ is the adjoint operator of } A$$
$$\iff (2x_0 + A^* z_0, h) = 0 \quad \forall h \in X \iff 2x_0 + A^* z_0 = 0 \iff x_0 = -\tfrac{1}{2} A^* z_0.$$
Then for all $x \in M = \ker(A)$ we get
$$(x_0, x) = (-\tfrac{1}{2} A^* z_0, x) = (-\tfrac{1}{2} z_0, Ax) = 0.$$
Hence $x_0$ is orthogonal to $M$. By the projection theorem, $x_0$ is the optimal solution.
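The closed form appearing in this proof can be made concrete in the finite dimensional case: for a surjective matrix $A$, the minimum norm solution of $Ax = b$ is $x_0 = A^T(AA^T)^{-1}b$. The following MATLAB sketch is a hypothetical numerical check of this (the matrix and vector are arbitrary illustrations, not taken from the thesis).

    % Minimum norm solution of an underdetermined system Ax = b and a check
    % that it is orthogonal to ker(A), as the projection theorem predicts.
    A  = [1 2 0; 0 1 1];          % a surjective 2-by-3 operator
    b  = [1; 3];
    x0 = A' * ((A * A') \ b);     % x0 = A'(AA')^{-1} b, the minimum norm solution
    N  = null(A);                 % orthonormal basis of M = ker(A)
    disp(norm(A * x0 - b))        % feasibility: approximately 0
    disp(norm(N' * x0))           % orthogonality to ker(A): approximately 0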

Lemma. If $\alpha(t)$ and $\beta(t)$ are continuous in $[t_0, T]$ and
$$\int_{t_0}^{T} [\alpha(t) h(t) + \beta(t) \dot h(t)]\, dt = 0$$
for every $h \in C^1[t_0, T]$ with $h(t_0) = h(T) = 0$, then $\beta$ is differentiable and $\dot\beta(t) = -\alpha(t)$ in $[t_0, T]$.

2.1.3 Projection matrices

An application of the minimum norm theory developed previously, in the Euclidean vector space $\mathbb{R}^n$, is the minimum norm vector to a subspace. Here we illustrate this problem, present the solution and establish its equivalence with the solution of the orthogonal projection problem, which leads us to introduce the concept of projection matrices.

Problem 2.1. Let $b \in \mathbb{R}^n$ and $a_1, a_2, \dots, a_k$ be independent vectors of $\mathbb{R}^n$. We want to find $p \in \mathrm{span}\{a_1, a_2, \dots, a_k\}$ such that $b - p$ is orthogonal to $\mathrm{span}\{a_1, a_2, \dots, a_k\}$.

$p$ is a solution to this problem if for all $i \in 1:k$, $a_i^T (b - p) = 0$, which is equivalent to $A^T(b - p) = 0$ where $A = [a_1\ a_2 \cdots a_k]$. But since $p \in \mathrm{span}\{a_1, a_2, \dots, a_k\}$ and $a_1, a_2, \dots, a_k$ are independent, there are unique $x_1, x_2, \dots, x_k$ such that $p = x_1 a_1 + x_2 a_2 + \cdots + x_k a_k = Ax$ where $x = [x_1, x_2, \dots, x_k]^T$. Then we get
$$A^T(b - p) = 0 \iff A^T A x = A^T b \iff x = (A^T A)^{-1} A^T b \iff p = A (A^T A)^{-1} A^T b.$$
$(A^T A)^{-1}$ exists since $a_1, a_2, \dots, a_k$ are independent, so $A$ and $A^T A$ have full column rank. We call $P = A (A^T A)^{-1} A^T$ the projection matrix; hence the solution of our problem is $p = P b$.

Next we look at a related result that will be shown to be equivalent: the solution of the error minimization problem in $\mathbb{R}^n$, also known as the least squares problem. In this problem we are interested in finding $p \in \mathrm{range}(A)$ that is closest to $b$.

Problem 2.2. The least squares problem with respect to the range of $A$ is defined by
$$\text{minimize } \|b - Ax\|^2, \qquad p = Ax.$$
A direct computation shows that
$$\frac{d\, \|b - Ax\|^2}{dx} = -2(A^T b - A^T A x) = 0,$$
so $x = (A^T A)^{-1} A^T b$ and $p = P b = A (A^T A)^{-1} A^T b$.

The following proposition summarizes the properties of projection matrices.
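Before stating it, here is a small MATLAB illustration (a hedged sketch with arbitrary vectors, not taken from the thesis) of the two equivalent formulations above; it also checks two of the properties listed in the proposition below.

    % Orthogonal projection onto span{a1, a2} in R^4 via P = A (A'A)^{-1} A'.
    a1 = [1; 0; 1; 0];  a2 = [0; 1; 1; 1];
    A  = [a1, a2];
    P  = A * ((A' * A) \ A');          % projection matrix onto range(A)
    b  = [1; 2; 3; 4];
    p  = P * b;                        % projection of b, also the least squares fit
    disp(norm(A' * (b - p)))           % b - p is orthogonal to the span: approximately 0
    disp(norm(P - P'))                 % P is symmetric
    disp(norm(P^2 - P))                % P is idempotent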

Proposition. Let $P$ be a projection matrix onto $\mathrm{span}\{a_1, a_2, \dots, a_k\}$. Then for all $x \in \mathbb{R}^n$ and $v \in \mathrm{span}\{a_1, a_2, \dots, a_k\}$ we have the following:
1. $\mathrm{range}(P) = \mathrm{span}\{a_1, a_2, \dots, a_k\}$
2. $P^T = P$
3. $P^2 - P = 0$
4. $\|Px\| \leq \|x\|$
5. $\|v - Px\| \leq \|v - x\|$

2.2 Problem statement

In this section we consider the problem of finding a consensus reaching optimal control for a linear multi-agent system in finite time.

2.2.1 The consensus problem model

We consider $l$ agents modeled by linear time invariant systems with states $\{x_1, x_2, \dots, x_l\}$ and corresponding controls $\{u_1, u_2, \dots, u_l\}$, such that
$$\dot x_i = A x_i + B u_i \quad \text{for all } i \in 1:l, \qquad x_i(t_0) = x_i^0,$$
where $x_i^0$ is the initial position of agent $i$ at time $t_0$. We say that a consensus is reached in finite time $T$ if at $t = T$ we have
$$x_1(T) = x_2(T) = \cdots = x_l(T).$$
The relative information at node $i$ with respect to a neighboring node $j$ is $z_{ij} = x_i - x_j$. The control is said to be of relative information if for every agent $x_i$ the control $u_i$ is only a function of $z_{ij}$ for all $j \in N_i$, where $N_i$ is the set of neighboring agents in the communication graph $G$, and we write $u_i((z_{ij})_{j \in N_i})$ or $u_i((x_i - x_j)_{j \in N_i})$. The control energy of one of the agents is given by $\int_{t_0}^{T} \|u_i\|^2 dt$ and for the whole system it is $\sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$.

Problem 2.3. Our consensus reaching optimal control problem is then modeled by the following set of equations:
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to
$$\dot x_i = A x_i + B u_i, \quad i \in 1:l \qquad (2.2.1)$$
$$x_1(T) = x_2(T) = \cdots = x_l(T) \qquad (2.2.2)$$
$$x_i(t_0) = x_i^0, \quad i \in 1:l$$
$$u_i = u_i((z_{ij})_{j \in N_i}), \quad i \in 1:l \qquad (2.2.3)$$

2.2.2 The consensus problem in a Hilbert space

We define the state variable $x$ and the control $u$ as
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_l \end{pmatrix} \quad \text{and} \quad u = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_l \end{pmatrix}.$$
This problem can be stated as a minimum norm problem in a Hilbert space.

Proposition. $H_1$, the set of all controls $u$, and $H_2$, the set of controls $u$ depending only on relative information, $u_i = u_i((z_{ij})_{j \in N_i})$, are Hilbert spaces.

Proof. $H_1$ is a Hilbert space since it can be identified with $((L^2(\mathbb{R}^d, \lambda))^l, \|\cdot\|_2)$, the composition of the two Hilbert spaces $L^2(\mathbb{R}^d, \lambda)$ and $(\mathbb{R}^l, \|\cdot\|_2)$. $H_2$ is a Hilbert space as a closed subspace of $H_1$.

Equation (2.2.1) can be rewritten more compactly as $\dot x = (I \otimes A)x + (I \otimes B)u$; in integral form this equation becomes
$$x(t) - x_0 - \int_{t_0}^{t} (I \otimes A)x + (I \otimes B)u \, dt = 0,$$
while equation (2.2.2) can be rewritten as a linear system of equations $(D \otimes I)x(T) = 0$ with $\ker(D) = \mathrm{span}\{\mathbf{1}\}$, where $\mathbf{1} = (1, \dots, 1)^T$. Hence we get the following minimum norm problem in a Hilbert space.

Problem 2.4.
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to
$$(D \otimes I)x(T) = 0$$
$$x(t_0) = x_0$$
$$x(t) - x_0 - \int_{t_0}^{t} (I \otimes A)x + (I \otimes B)u \, dt = 0$$
$$u \in H_2$$

2.2.3 A linear optimal control problem

A standard solution to this problem would be to use the projection theorem, but this approach meets some difficulties due to the absence of an explicit characterization of the elements of $H_2$. Instead we start by making the following observation: if an optimal control $u$ that depends only on relative information is to be found, then
$$du_i = \sum_{j \in N_i} \nabla_{z_{ij}} u_i \, dz_{ij} = \sum_{j \in N_i} \nabla_{x_i} u_i \, (dx_i - dx_j).$$
Denote $\nabla_{x_i} u_i = K_i$, $n_i = \mathrm{card}(N_i)$ and $K = \mathrm{diag}(K_1, K_2, \dots, K_l)$. Then the block matrix whose $i$-th diagonal block is $n_i K_i$ and whose $(i,j)$ off-diagonal block is $-K_i$ when $j \in N_i$ (and zero otherwise) equals $K(L \otimes I)$, with $L$ the Laplacian of the graph:
$$\begin{pmatrix} n_1 K_1 & \cdots & -K_1 \\ \vdots & \ddots & \vdots \\ -K_l & \cdots & n_l K_l \end{pmatrix} = K(L \otimes I),$$
hence $du = K(L \otimes I)dx$. We restrict ourselves to the space of linear controls $u = K(L \otimes I)x$, a closed subspace of $H_2$, and define our problem in this space as follows.

Problem 2.5.
$$\text{minimize } \int_{t_0}^{T} x^T (L^T \otimes I) K^T K (L \otimes I) x \, dt$$
subject to
$$(D \otimes I)x(T) = 0$$
$$x(t_0) = x_0$$
$$x(t) - x_0 - \int_{t_0}^{t} (I \otimes A + (I \otimes B)K(L \otimes I)) x \, dt = 0$$
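Before turning to the solution, the constraint matrix $D$ can be made concrete. The following MATLAB sketch (the particular choice of $D$ is only an illustration, not prescribed by the thesis) builds one matrix $D$ with $\ker(D) = \mathrm{span}\{\mathbf{1}\}$ and checks numerically that $D^T(DD^T)^{-1}D$ is the orthogonal projection onto $\mathrm{span}\{\mathbf{1}\}^{\perp}$, a fact that reappears in the solution below.

    % One concrete consensus constraint matrix: rows encode x_1 - x_j = 0, j = 2..l.
    l = 5;                                  % number of agents
    D = [ones(l-1, 1), -eye(l-1)];          % ker(D) = span{1}
    P = D' * ((D * D') \ D);                % D'(DD')^{-1} D
    disp(norm(P - (eye(l) - ones(l)/l)))    % equals I - (1/l)*ones(l,l): approximately 0
    disp(norm(P * ones(l, 1)))              % the consensus direction 1 is in its kernel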

2.3 Solution of the problem

Since we have established that the Lagrange multiplier conditions are equivalent to the projection theorem for minimum norm problems in Hilbert spaces, we will use them to find necessary and sufficient conditions for optimality. We start by defining the following functions and their derivatives. Denote
$$l(x, K) = x^T (L^T \otimes I) K^T K (L \otimes I) x,$$
$$f(x, K) = (I \otimes A + (I \otimes B) K (L \otimes I)) x,$$
$$G(x) = (D \otimes I) x.$$

Proposition. For all $h$ and $v$ we have:
$$\nabla_x l(x, K) h = 2 x^T (L^T \otimes I) K^T K (L \otimes I) h \qquad (2.3.1)$$
$$\nabla_K l(x, K) v = 2 x^T (L^T \otimes I) K^T v (L \otimes I) x \qquad (2.3.2)$$
$$\nabla_x f(x, K) h = (I \otimes A + (I \otimes B) K (L \otimes I)) h$$
$$\nabla_K f(x, K) v = (I \otimes B) v (L \otimes I) x$$
$$\nabla_x G(x) h = (D \otimes I) h$$

Proof. We prove (2.3.1) and (2.3.2); the others can be obtained in a similar way. For all $h$ we have
$$l(x + h, K) - l(x, K) = (x + h)^T (L^T \otimes I) K^T K (L \otimes I)(x + h) - x^T (L^T \otimes I) K^T K (L \otimes I) x$$
$$= h^T (L^T \otimes I) K^T K (L \otimes I) x + x^T (L^T \otimes I) K^T K (L \otimes I) h + h^T (L^T \otimes I) K^T K (L \otimes I) h$$
$$= 2 x^T (L^T \otimes I) K^T K (L \otimes I) h + o(\|h\|^2),$$
hence $\nabla_x l(x, K) h = 2 x^T (L^T \otimes I) K^T K (L \otimes I) h$. Similarly,
$$l(x, K + v) - l(x, K) = x^T (L^T \otimes I)(K + v)^T (K + v)(L \otimes I) x - x^T (L^T \otimes I) K^T K (L \otimes I) x$$
$$= x^T (L^T \otimes I) K^T v (L \otimes I) x + x^T (L^T \otimes I) v^T K (L \otimes I) x + x^T (L^T \otimes I) v^T v (L \otimes I) x$$
$$= 2 x^T (L^T \otimes I) K^T v (L \otimes I) x + o(\|v\|^2),$$
hence $\nabla_K l(x, K) v = 2 x^T (L^T \otimes I) K^T v (L \otimes I) x$.

Next we use the Lagrange multiplier theorem to move from an optimization problem to a system of differential equations.

Lemma. $K$ solves Problem 2.5 if and only if the following set of differential equations has a solution:
$$\dot\lambda = -(I \otimes A^T + (L^T \otimes I) K^T (I \otimes B^T)) \lambda - 2 (L^T \otimes I) K^T K (L \otimes I) x \qquad (2.3.3)$$
$$\lambda(T) = (D^T \otimes I) \mu \qquad (2.3.4)$$
$$(I \otimes B^T) \lambda = -2 K (L \otimes I) x \qquad (2.3.5)$$
$$\dot x = (I \otimes A + (I \otimes B) K (L \otimes I)) x \qquad (2.3.6)$$
$$x(t_0) = x_0 \qquad (2.3.7)$$
$$(D \otimes I) x(T) = 0 \qquad (2.3.8)$$

Proof. Applying the Lagrange multiplier theorem to Problem 2.5, there exist $\lambda \in BV^{dl}[t_0, T]$ and $\mu \in \mathbb{R}^{dl}$ such that for all $h$ and $v$:
$$\int_{t_0}^{T} \nabla_x l\, h\, dt + \int_{t_0}^{T} d\lambda^T \Big[h - \int_{t_0}^{t} \nabla_x f\, h\, d\tau\Big] + \mu^T \nabla_x G\, h(T) = 0 \qquad (2.3.9)$$
$$\int_{t_0}^{T} \nabla_K l\, v\, dt - \int_{t_0}^{T} d\lambda^T \int_{t_0}^{t} \nabla_K f\, v\, d\tau = 0 \qquad (2.3.10)$$
We start with (2.3.9):
$$\int_{t_0}^{T} 2x^T(L^T\otimes I)K^TK(L\otimes I)h\, dt + \int_{t_0}^{T} d\lambda^T\Big[h - \int_{t_0}^{t}(I\otimes A + (I\otimes B)K(L\otimes I))h\, d\tau\Big] + \mu^T(D\otimes I)h(T) = 0.$$
Without loss of generality we may take $\lambda(T) = 0$; integrating the third term by parts we get
$$\int_{t_0}^{T} 2x^T(L^T\otimes I)K^TK(L\otimes I)h\, dt + \int_{t_0}^{T} d\lambda^T h + \int_{t_0}^{T} \lambda^T(I\otimes A + (I\otimes B)K(L\otimes I))h\, dt + \mu^T(D\otimes I)h(T) = 0.$$
$\lambda$ cannot have jumps on $[t_0, T)$, since otherwise one can choose $h$ making the second term larger than all the others. To account for the last term there must be a jump at $T$, hence
$$\lambda(T) = (D^T\otimes I)\mu.$$
Integrating the second term by parts we get
$$\int_{t_0}^{T} \Big[2x^T(L^T\otimes I)K^TK(L\otimes I)h - \lambda^T\dot h + \lambda^T(I\otimes A + (I\otimes B)K(L\otimes I))h\Big] dt = 0,$$
and since $h$ is arbitrary we have
$$\dot\lambda = -(I\otimes A^T + (L^T\otimes I)K^T(I\otimes B^T))\lambda - 2(L^T\otimes I)K^TK(L\otimes I)x, \qquad \lambda(T) = (D^T\otimes I)\mu.$$
Equation (2.3.10), on the other hand, gives
$$\int_{t_0}^{T} 2x^T(L^T\otimes I)K^Tv(L\otimes I)x\, dt - \int_{t_0}^{T} d\lambda^T \int_{t_0}^{t} (I\otimes B)v(L\otimes I)x\, d\tau = 0;$$
integrating the second term by parts we get
$$\int_{t_0}^{T} \Big[2x^T(L^T\otimes I)K^Tv(L\otimes I)x + \lambda^T(I\otimes B)v(L\otimes I)x\Big] dt = 0,$$
and since this must be satisfied for all $v$ we must have
$$(I\otimes B^T)\lambda = -2K(L\otimes I)x.$$
Hence we get the following three Lagrange multiplier conditions:
$$\dot\lambda = -(I\otimes A^T + (L^T\otimes I)K^T(I\otimes B^T))\lambda - 2(L^T\otimes I)K^TK(L\otimes I)x$$
$$\lambda(T) = (D^T\otimes I)\mu$$
$$(I\otimes B^T)\lambda = -2K(L\otimes I)x$$

To get necessary and sufficient optimality conditions we must add the three constraints of the problem, thus proving the lemma:
$$\dot x = (I\otimes A + (I\otimes B)K(L\otimes I))x, \qquad x(t_0) = x_0, \qquad (D\otimes I)x(T) = 0.$$

Lemma. $K$ solves Problem 2.5 for some connected graph $G$ with Laplacian matrix $L$ if and only if there is a matrix $D$ with $\ker(D) = \mathrm{span}\{\mathbf{1}\}$ such that $L = D^T(DD^T)^{-1}D$, and in that case
$$K_1 = K_2 = \cdots = K_l = -B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}.$$

Proof. Starting from (2.3.3) and (2.3.5) we get $\dot\lambda = -(I\otimes A^T)\lambda$, hence
$$\lambda(t) = (I\otimes e^{A^T(T-t)})\lambda(T).$$
From (2.3.4) we have
$$\lambda(t) = (D^T\otimes e^{A^T(T-t)})\mu.$$
Going back to (2.3.5) we get the following identity:
$$K(L\otimes I)x = -\tfrac{1}{2}(D^T\otimes B^T e^{A^T(T-t)})\mu,$$
and (2.3.6) becomes
$$\dot x - (I\otimes A)x = -\tfrac{1}{2}(D^T\otimes BB^T e^{A^T(T-t)})\mu.$$
Multiplying both sides by $(I\otimes e^{A(T-t)})$ and integrating from $t_0$ to $T$:
$$x(T) - (I\otimes e^{A(T-t_0)})x(t_0) = -\tfrac{1}{2}\Big(D^T\otimes \int_{t_0}^{T} e^{A(T-t)}BB^T e^{A^T(T-t)}dt\Big)\mu.$$
We define the reachability Gramian
$$W(t_0, T) = \int_{t_0}^{T} e^{A(T-t)}BB^T e^{A^T(T-t)}dt$$
to get
$$x(T) - (I\otimes e^{A(T-t_0)})x(t_0) = -\tfrac{1}{2}(D^T\otimes W(t_0, T))\mu.$$
Multiplying by $(D\otimes I)$ and using (2.3.8) we have
$$(D\otimes e^{A(T-t_0)})x(t_0) = \tfrac{1}{2}(DD^T\otimes W(t_0, T))\mu.$$

Solving for $\mu$ we get
$$\mu = 2\big((DD^T)^{-1}D \otimes W^{-1}(t_0, T)e^{A(T-t_0)}\big)x(t_0),$$
and then
$$u(t_0) = K(t_0)(L\otimes I)x(t_0) = -\tfrac{1}{2}(D^T\otimes B^T e^{A^T(T-t_0)})\mu = -\big(D^T(DD^T)^{-1}D \otimes B^T e^{A^T(T-t_0)}W^{-1}(t_0, T)e^{A(T-t_0)}\big)x(t_0).$$
By the dynamic programming principle we must have, for all $t$,
$$u = K(L\otimes I)x = -\big(D^T(DD^T)^{-1}D \otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}\big)x = -\big(I\otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}\big)\big(D^T(DD^T)^{-1}D \otimes I\big)x,$$
so we must have
$$K_1 = K_2 = \cdots = K_l = -B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}, \qquad L = D^T(DD^T)^{-1}D.$$

Remark. To connect this result to the linear quadratic control problem and to get an interpretation of why the reachability Gramian $W$ appears in the control, we notice that (2.3.5) would be satisfied if there were a matrix $g$ such that
$$K = -\tfrac{1}{2}(I\otimes B^T)(I\otimes g) = -\tfrac{1}{2}(I\otimes B^T g), \qquad \lambda = (I\otimes g)(L\otimes I)x = (L\otimes g)x.$$
From (2.3.3) and (2.3.5) we would get $\dot\lambda = -(I\otimes A^T)\lambda$, hence
$$(L\otimes \dot g)x + (L\otimes g)\dot x = -(L\otimes A^T g)x,$$
$$(L\otimes \dot g)x + (L\otimes g)(I\otimes A + (I\otimes B)K(L\otimes I))x + (L\otimes A^T g)x = 0,$$
$$(L\otimes \dot g)x + (L\otimes gA)x - \tfrac{1}{2}(LL\otimes gBB^T g)x + (L\otimes A^T g)x = 0.$$
If $L$ is chosen to be the normalized Laplacian of the complete graph, then $LL = L$, and since the last equality must hold for all $x$, $g$ must solve the following Riccati equation:
$$\dot g + gA + A^T g - \tfrac{1}{2} g BB^T g = 0.$$

If we take $g = \tfrac{1}{2} e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}$ then we have
$$\dot g = -A^T g - gA - \tfrac{1}{2} e^{A^T(T-t)} W^{-1} \dot W W^{-1}(t, T) e^{A(T-t)}$$
$$= -A^T g - gA + \tfrac{1}{2} e^{A^T(T-t)} W^{-1} e^{A(T-t)} BB^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}$$
$$= -A^T g - gA + 2 g BB^T g,$$
so $g$ solves the equation and the optimal control is
$$u = -\big(L\otimes B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}\big)x.$$
We show next that this case is indeed the only one to consider.

Lemma. Let $D_1$ and $D_2$ be such that $\ker(D_1) = \ker(D_2) = \mathrm{span}\{\mathbf{1}\}$. Then
$$D_1^T(D_1 D_1^T)^{-1} D_1 = D_2^T(D_2 D_2^T)^{-1} D_2.$$

Proof. Denote $e = (1, 1, \dots, 1)^T$. Since
$$D^T(DD^T)^{-1} D\, D^T(DD^T)^{-1} D = D^T(DD^T)^{-1}(DD^T)(DD^T)^{-1} D = D^T(DD^T)^{-1} D,$$
$P(X) = X^2 - X = X(X - 1)$ is the minimal polynomial of $D^T(DD^T)^{-1}D$, and $D^T(DD^T)^{-1}D$ has only $0$ and $1$ as eigenvalues. Since $\ker(D) = \mathrm{span}\{e\}$ and $D$ is an $(l-1) \times l$ matrix, $D$ has independent rows and $D^T(DD^T)^{-1}D$ is an $l \times l$ matrix with one eigenvalue $0$ and $l-1$ eigenvalues equal to $1$. $D^T(DD^T)^{-1}D$ is also the projection matrix onto $V = \{e\}^{\perp}$, the orthogonal complement of the subspace generated by $e$. Let $e, v_2, v_3, \dots, v_l$ be eigenvectors of $D_1^T(D_1D_1^T)^{-1}D_1$ and $e, w_2, w_3, \dots, w_l$ be eigenvectors of $D_2^T(D_2D_2^T)^{-1}D_2$, where $e$ corresponds to the zero eigenvalue. Let $x \in \mathbb{R}^l$ with
$$x = a_v e + \sum_{i=2}^{l} \alpha_i v_i = a_w e + \sum_{i=2}^{l} \beta_i w_i.$$
Notice that $e$ is orthogonal to $v_i$ and $w_i$ for all $i$; indeed, for all $i$ we have
$$(e, v_i) = (e, D_1^T(D_1D_1^T)^{-1}D_1 v_i) = (D_1^T(D_1D_1^T)^{-1}D_1 e, v_i) = 0,$$
and the same holds for $w_i$. Thus $(x, e) = a_v(e, e) = a_w(e, e)$ and $a_v = a_w = a$. Therefore
$$\big(D_1^T(D_1D_1^T)^{-1}D_1 - D_2^T(D_2D_2^T)^{-1}D_2\big)x = \sum_{i=2}^{l} \alpha_i v_i - \sum_{i=2}^{l} \beta_i w_i = (x - ae) - (x - ae) = 0,$$
hence $D_1^T(D_1D_1^T)^{-1}D_1 = D_2^T(D_2D_2^T)^{-1}D_2$.

We now summarize the results of the last three lemmas in a theorem.

Theorem. The finite time linear optimal consensus problem has a solution for some connected graph $G$ if and only if $G$ is complete. In this case the consensus reaching optimal control is given by
$$u = -\big(L\otimes B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}\big)x.$$

Proof. Follows directly from the last three lemmas.

Corollary. There exists a consensus reaching linear feedback control in finite time for a multi-agent system if and only if the communication graph is complete.

Proof. Follows directly from the last theorem.

2.4 Extensions

We provide here two extensions that highlight other properties of the solution presented previously. For the first extension we consider the same problem as before but without the graph topology constraint, and we derive an optimal feedback control for the complete graph as was done in [8]. In the absence of the neighborhood constraints we can easily derive a solution using the projection theorem.

Problem 2.6. Our consensus reaching optimal control problem for a topology free system is modeled by the following set of equations:
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to
$$\dot x_i = A x_i + B u_i, \quad i \in 1:l$$
$$x_1(T) = x_2(T) = \cdots = x_l(T)$$
$$x_i(t_0) = x_i^0, \quad i \in 1:l$$

The next theorem shows that, for the complete graph case, the linear time varying control obtained previously is optimal even among nonlinear controls.

Theorem. For any finite time $T$ and any initial condition $x_0$, the topology free optimal consensus problem has the following solution:
$$u(t) = -\big(L\otimes B^T e^{A^T(T-t)} W^{-1}(t_0, T) e^{A(T-t_0)}\big)x_0.$$
This can be rewritten as the following optimal feedback:
$$u(x, t) = -\big(L\otimes B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}\big)x,$$
or, for each agent,
$$u_i(x, t) = \frac{1}{N} B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)} \sum_{k=1}^{N}(x_k - x_i).$$

Proof. In integral form the topology free optimal consensus problem is stated as follows:
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to
$$x_i(T) = e^{A(T-t_0)}x_i^0 + \int_{t_0}^{T} e^{A(T-t)}B u_i\, dt, \quad i \in 1:l$$
$$(D\otimes I)x(T) = 0$$
$$x(t_0) = x_0$$
The first constraint can be rewritten as
$$\int_{t_0}^{T} e^{A(T-t)}B u_i\, dt + e^{A(T-t_0)}x_i^0 = x_i(T);$$
using the second constraint we then get
$$(D\otimes I)\int_{t_0}^{T} e^{A(T-t)}B u\, dt + (D\otimes e^{A(T-t_0)})x_0 = (D\otimes I)x(T) = 0,$$
that is,
$$(D\otimes e^{A(T-t_0)})x_0 = -(D\otimes I)\int_{t_0}^{T} e^{A(T-t)}B u\, dt.$$
This problem falls into the category of minimum norm problems with respect to a linear variety; using the projection theorem we get the existence of $\mu$ such that
$$u = (D^T\otimes B^T e^{A^T(T-t)})\mu.$$
This amounts to
$$(D\otimes e^{A(T-t_0)})x_0 = -\Big(DD^T\otimes \int_{t_0}^{T} e^{A(T-t)}BB^T e^{A^T(T-t)}dt\Big)\mu = -(DD^T\otimes W(t_0, T))\mu,$$
so
$$\mu = -\big((DD^T)^{-1}D\otimes W^{-1}(t_0, T)e^{A(T-t_0)}\big)x_0.$$
Substituting into the formula for $u$ we get
$$u(t) = -\big(D^T(DD^T)^{-1}D\otimes B^T e^{A^T(T-t)}W^{-1}(t_0, T)e^{A(T-t_0)}\big)x_0 = -\big(L\otimes B^T e^{A^T(T-t)}W^{-1}(t_0, T)e^{A(T-t_0)}\big)x_0.$$
The dynamic programming principle gives
$$u(x, t) = -\big(L\otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}\big)x,$$
$$u_i(x, t) = \frac{1}{N} B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}\sum_{k=1}^{N}(x_k - x_i).$$

Our previous analysis shows that we do not have a linear consensus reaching control under the graph neighborhood constraints. In the next extension we will show that if one relaxes these constraints to a control depending only on the information coming from

the graph, but where each agent is allowed to use the information of every edge of the graph (as opposed to the previous case, where it only uses edges that have its own node as one extremity), then we still have a linear optimal control; moreover, this control remains optimal for a time varying graph as long as the graph stays connected.

Let $G(t)$ be a time varying graph, $(i \to j)(t)$ a directed path from $i$ to $j$ at time $t$, for example the one corresponding to the shortest path, and $\Lambda_{(i \to j)(t)}$ the set of edges in this path. We define the path information
$$z_{(i \to j)(t)} = \sum_{(l, k) \in \Lambda_{(i \to j)(t)}} (x_k - x_l).$$
We define the consensus reaching optimal control problem in a time varying connected topology as follows.

Problem 2.7. Our optimal control problem for a time varying connected topology is modeled by the following set of equations:
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to
$$\dot x_i = A x_i + B u_i, \quad i \in 1:l$$
$$x_1(T) = x_2(T) = \cdots = x_l(T)$$
$$x_i(t_0) = x_i^0, \quad i \in 1:l$$
$$u_i(x, t) = u_i\big((z_{(i, j)})_{(i, j) \in \Lambda(t)}, t\big)$$
Here $u_i(x, t) = u_i((z_{(i, j)})_{(i, j) \in \Lambda(t)}, t)$ means that agent $i$ uses for its feedback control $u_i$ at time $t$ the information $z_{(i, j)}$ available from all edges of the graph $G(t)$ at time $t$, i.e. $(i, j) \in \Lambda(t)$.

A solution to this problem can be obtained by rewriting the previous solution in a way that takes the graph topology into account, as stated in the next theorem.

Theorem. For any finite time $T$ and any initial condition $x_0$, the time varying connected topology consensus problem has the following optimal feedback solution for each agent:
$$u_i(x, t) = \frac{1}{N} B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)} \sum_{j=1}^{N} \sum_{(k, l) \in \Lambda_{(i \to j)(t)}} (x_l - x_k).$$

Proof. Let $M(t)$ be the set of $u(t) = [u_1(t), u_2(t), \dots, u_N(t)]$ that satisfy the constraints of Problem 2.7 and $M_{tot}$ the set of $u(t)$ that satisfy the constraints of Problem 2.6. It is easy to see that
$$u_i^*(x, t) = \frac{1}{N} B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)} \sum_{j=1}^{N} \sum_{(k, l) \in \Lambda_{(i \to j)(t)}} (x_l - x_k) = \frac{1}{N} B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)} \sum_{k=1}^{N} (x_k - x_i),$$
hence $u^* = [u_1^*, u_2^*, \dots, u_N^*]$ solves $\min\{\int_{t_0}^{T} u^T u \, dt :\ u \in M_{tot}\}$. But since $M(t) \subseteq M_{tot}$ we get
$$\min\Big\{\int_{t_0}^{T} u^T u\, dt :\ u \in M_{tot}\Big\} \leq \min\Big\{\int_{t_0}^{T} u^T u\, dt :\ u \in M(t)\Big\},$$
and $u^*$ is feasible in both problems with the same objective value, hence $u^*$ is optimal for Problem 2.7.
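To make the formulas above concrete, here is a hedged MATLAB sketch (the double integrator dynamics, the horizon and all variable names are illustrative assumptions, not prescribed by the thesis) that evaluates the reachability Gramian numerically and simulates the optimal feedback $u = -(L \otimes B^T e^{A^T(T-t)} W^{-1}(t,T) e^{A(T-t)})x$ for a complete graph.

    % Optimal finite time consensus feedback for N identical double integrators
    % over the complete graph, using the reachability Gramian W(t, T).
    A  = [0 1; 0 0];  B = [0; 1];           % agent dynamics
    N  = 4;  T = 1;  d = size(A, 1);
    Lc = eye(N) - ones(N)/N;                % normalized Laplacian of the complete graph
    W  = @(t) integral(@(s) expm(A*(T-s))*(B*B')*expm(A'*(T-s)), t, T, ...
                       'ArrayValued', true);
    K  = @(t) B' * expm(A'*(T-t)) / W(t) * expm(A*(T-t));   % common gain K_1 = ... = K_N
    f  = @(t, x) kron(eye(N), A)*x - kron(Lc, B*K(t))*x;    % closed loop dynamics
    x0 = randn(N*d, 1);                                     % random initial states
    [t, x] = ode45(f, [0, 0.99*T], x0);     % stop just before T, where W(t, T) is singular
    plot(t, x(:, 1:d:end))                  % the first state of each agent approaches a common value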

2.5 Conclusion

In this part we studied the finite time optimal consensus problem for a linear time invariant multi-agent system with graph topology constraints. The general issue in finding such a consensus reaching control, depending on relative information only and respecting the topology of the graph, was that there is no clear analytic characterization of the set of such controllers. To avoid this issue we formulated the consensus problem as an optimal control problem, which provides us with general tools from functional analysis to address such constraints. Thus we started by defining the general framework of Hilbert spaces that we work in; this framework provides a powerful characterization of the solutions of optimization problems, namely the projection theorem. Still, for our optimal consensus problem the elements of the convex set satisfying the graph topology constraints, even though it is a linear variety, cannot be described explicitly. That is why we established a statement equivalent to the projection theorem in the case of a minimum norm problem with respect to a linear variety, namely that a solution to the problem exists if and only if it satisfies the Lagrange multiplier conditions. After establishing this, we restricted our analysis to the set of linear, possibly time varying, solutions. In this set we noticed that the gradient of the solution, if it is to be found, has the structure of the Laplacian of the communication graph. On the other hand, the Lagrange multiplier conditions are stated only in terms of the gradient of the solution in this case; hence, upon solving them, we get a clear characterization of the general solution. These conditions translate into two integral equations that we proved to be equivalent to a system of differential equations. Reducing the system gives us two results: on the one hand we get a general formula for the consensus reaching optimal control, and on the other hand we get an algebraic condition on the graph topology that we use to determine for which graphs a consensus reaching control can be found. From there we proved that the only graph satisfying this condition is the complete graph, thus establishing that for the complete graph the problem has a unique solution and that for every other graph there is no solution linear in the state.

Bibliography

[1] Cao, Y. and Ren, W. (2009), LQR-based optimal linear consensus algorithms, in American Control Conference, IEEE.
[2] Cao, Y. and Ren, W. (2010), Optimal linear-consensus algorithms: an LQR perspective, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 40(3).
[3] Kim, H., Shim, H. and Seo, J. (2011), Output consensus of heterogeneous uncertain linear multi-agent systems, IEEE Transactions on Automatic Control 56(1).
[4] Luenberger, D. (1997), Optimization by Vector Space Methods, John Wiley and Sons, Inc.
[5] Semsar-Kazerooni, E. and Khorasani, K. (2008), Optimal consensus algorithms for cooperative team of agents subject to partial information, Automatica 44(11).
[6] Sundaram, S. and Hadjicostis, C. (2007), Finite-time distributed consensus in graphs with time-invariant topologies, American Control Conference.
[7] Thunberg, J. (2014), Consensus and Pursuit-Evasion in Nonlinear Multi-Agent Systems, PhD thesis, KTH Royal Institute of Technology.
[8] Thunberg, J. and Hu, X. (2015), Optimal output consensus for linear systems: a topology free approach, arXiv preprint.
[9] Wang, L. and Xiao, F. (2010), Finite-time consensus problems for networks of dynamic agents, IEEE Transactions on Automatic Control.
[10] Xi, J., Shi, Z. and Zhong, Y. (2012a), Output consensus analysis and design for high-order linear swarm systems: partial stability method, Automatica 48(9).
[11] Xi, J., Shi, Z. and Zhong, Y. (2012b), Output consensus for high-order linear time-invariant swarm systems, International Journal of Control 85(4).

Chapter 3

Opinion dynamics in the presence of stubborn agents

In this part we study variants of the Hegselmann and Krause opinion dynamics model in continuous time with bounded confidence. We are mainly interested in the one dimensional continuous case, although the HK opinion dynamics model can easily be extended to higher dimensional spaces and some of our results are easily verified in this more general setting. On the other hand, we try to extend this model by introducing new agents, called stubborn, whose opinions are not influenced by the interaction. The modified models that we introduce in this section guarantee opinion consensus and steering to some common value for almost any initial distribution of the opinions, provided that the region of influence of the stubborn agent is sufficiently large.

This study is structured as follows: we start by recalling relevant results on the standard HK opinion dynamics model. We introduce two new types of stubborn agents, the first type having an infinite radius of influence and thus the ability to influence the whole distribution, and the second having a bounded influence, so that its action on the distribution is local. For each of these types of agents we study different models with one or two stubborn agents and with static and dynamic behavior. Finally, we show how these agents make trajectory tracking possible as an application.

3.1 Model presentation

In this part we introduce the continuous time Hegselmann and Krause bounded confidence model for opinion dynamics in the presence of stubborn agents. The Hegselmann and Krause opinion dynamics model is a well known simple model within the field of opinion dynamics, in which every agent is willing to compromise and changes his opinion according to the average opinion of the agents whose opinions are sufficiently close to his own. Later on we will introduce into the model other agents, said to be stubborn. The stubborn agents can be viewed as agents unwilling to compromise, thus keeping a constant opinion over time, or changing their opinions according to their own agenda while disregarding the others. They can also be viewed as a control signal used to influence the behavior of the system. We study here the effects of the introduction of stubborn agents, driven by both constant

and time varying controls, on the asymptotic behavior of the initial distribution of opinions. Then we study the possibility of controlling the whole distribution to obtain a certain behavior.

Formally speaking, recall first the initial continuous time Hegselmann and Krause model of bounded confidence. Consider $N$ agents $i = 1, 2, \dots, N$, each having its own opinion represented by its state $x_i(t)$, where $i \in 1:N$ and $t \in [0, \infty)$. The interaction between one agent and the others is described by the following average preserving dynamics: for all $i \in 1:N$ we have
$$\dot x_i = \sum_{j \in \{j : |x_i - x_j| \leq R\}} (x_j - x_i),$$
where $R$ is the radius of the interaction. Denoting the set $N_i(t) = \{j : |x_i(t) - x_j(t)| \leq R\}$, we get in integral form
$$x_i(t) = x_i^0 + \int_{0}^{t} \sum_{j \in N_i(t)} (x_j - x_i)\, dt.$$
Numerical simulations such as Figure 3.1 show that the system converges to clusters inside which all agents share a common opinion. Different clusters lie at a distance of at least $R$ from each other, and often approximately $2R$, referred to as the 2R conjecture in [11]. This model has been much studied due to its simple formulation and due to the peculiar behaviors that it exhibits. For most of our simulations we have used variants of the following MATLAB code.

Matlab Code

    clear all, close all, clc

    tend = 1;                % simulation end time
    tspan = [0 tend];        % simulation time interval
    L = 8;                   % range of opinions
    d = 0.9;                 % radius of interaction

    [t, x] = normal_HK(tspan, d, L, tend);
    plot(t, x)               % plot the results

    function [t, x] = normal_HK(tspan, d, L, tend)
    x0 = 0:.1:L;             % equally spaced opinions
    [t, x] = ode45(@normal_HK_rhs, tspan, x0, [], d, L, tend);
    end

    function x = normal_HK_rhs(t, x, dummy, d, L, tend)
    y = x;                   % copy of the opinions
    ly = length(y);          % number of opinions
    for i = 1:ly
        z = y - kron(ones(ly, 1), y(i));   % relative differences
        l = find(abs(z) < d);              % find the neighbors
        t = length(l);                     % number of neighbors
        if t == 0

            x(i) = 0;                      % update if no neighbor
        else
            x(i) = sum(z(l));              % update with the neighbors
        end
    end
    end

The uncontrolled dynamics of the standard HK model is governed by local interactions that lead to the formation of clusters as an asymptotic behavior. From the mathematical point of view, clusters are a stable configuration for the system; one effect of adding stubborn agents will be to influence the position of these clusters and possibly to steer all agents to a unique opinion, reaching a consensus. The following definitions will be used throughout this part to describe the asymptotic behavior of the system.

Definition 3. Let $x(t)$ be a solution of the HK dynamics of $N$ agents. We have the following definitions:
1. $x^*$ is a stable equilibrium of the system $x(t)$ governed by the HK dynamics if for all $i \neq j$ either $x_i^* = x_j^*$ or $|x_i^* - x_j^*| > R$.
2. $F$ is the set of possible equilibria if $F$ is a subset of $\mathbb{R}^N$ and, for all $y \in \mathbb{R}^N$, $y \in F$ implies that $y$ is a stable equilibrium.
3. We call opinion, or stable opinion, $x_i^* = \lim_{t \to \infty} x_i(t)$ if the limit exists.
4. We call a cluster the set of agents sharing the same opinion.
5. A configuration $x^*$ is a consensus if $x^*$ is a stable equilibrium and $x_1^* = x_2^* = \cdots = x_N^*$.

The standard HK model will serve as a benchmark for studying the new models introducing stubborn agents, and thus it is interesting to recall some of its properties. The average preserving property of the model can be seen by computing the mean:
$$\bar x = \frac{1}{N}\sum_{i=1}^{N} x_i = \frac{1}{N}\sum_{i=1}^{N} x_i^0 + \frac{1}{N}\int_{0}^{t}\sum_{i=1}^{N}\sum_{j \in N_i(t)}(x_j - x_i)\, dt = \bar x^0,$$
since $\sum_{i=1}^{N}\sum_{j \in N_i(t)}(x_j - x_i) = 0$, because if $j \in N_i$ then $i \in N_j$. Hence the average preserving property of this continuous time opinion dynamics model. The variance of this model, on the other hand, as shown in Figure 3.2, can be shown to be a decreasing function of time.

Figure 3.1: Simulation of the standard HK model with equally spaced agents with an inter-distance of 0.1 and a radius of interaction of 0.9; the range of the distribution is 8 and the simulation time is 1. Asymptotically we observe a clustering phenomenon; the inter-cluster distance is roughly 2R.

Indeed,
$$\frac{d\,\mathrm{Var}(x)}{dt} = \frac{1}{N}\frac{d}{dt}\sum_{i=1}^{N}(x_i - \bar x)^2 = \frac{2}{N}\sum_{i=1}^{N}\dot x_i(x_i - \bar x) = \frac{2}{N}\sum_{i=1}^{N}\dot x_i x_i$$
$$= \frac{1}{N}\sum_{i=1}^{N}\sum_{j \in N_i}\big(x_i(x_j - x_i) + x_j(x_i - x_j)\big) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j \in N_i}(x_i - x_j)^2 \leq 0.$$
Let $F$ be the subset of $\mathbb{R}^N$ such that if $x \in F$ then for all $i \neq j$ either $x_i = x_j$ or $|x_i - x_j| > 1$. Then, if $x(t) \in F$, $\mathrm{Var}(x)$ is stationary; otherwise it is decreasing. The following theorem summarizes some of the properties of the standard HK model; the proof can be seen in [8].

Figure 3.2: Variance of the distribution in the standard HK model with equally spaced agents with an inter-distance of 0.3 and a radius of interaction of 0.9; the range of the distribution is 6 and the simulation time is 2. The variance is a non increasing function of the simulation time that converges to a value of approximately 2.

Theorem. Let $x(t)$ be a solution to the standard HK opinion dynamics model. Then we have the following properties:
1. The order between the agents is preserved.
2. The opinion of the first agent is always non decreasing and that of the last agent is non increasing.
3. If at some point in time the distance between the opinions of two consecutive agents is larger than 1, it remains so forever.
4. The average opinion is preserved and the variance is monotonically non increasing and converges to a constant.
5. $x(t)$ converges to an element $x^* \in F$.

Definition 4. We say that a function $a(r)$ is an influence function if $a(r)$ is non increasing, non negative and bounded by 1 over $[0, \infty)$ with $\lim_{r\to\infty} a(r) = 0$.

We modify the model by introducing new agents, said to be stubborn, indexed $i = 0$ or $N+1$ or both, with initial positions $x_0^0$ and $x_{N+1}^0$, and with the corresponding controls $u_0(t)$ and $u_{N+1}(t)$. These agents can influence the rest of the distribution as follows:
$$\dot x_0 = u_0$$
$$\dot x_{N+1} = u_{N+1}$$
$$\dot x_i = \sum_{j \in N_i}(x_j - x_i) + a(r_{0,i})(x_0 - x_i) + b(r_{N+1,i})(x_{N+1} - x_i) \quad \text{for all } i \in [1, N],$$
where $r_{0,i} = |x_0 - x_i|$ and $r_{N+1,i} = |x_{N+1} - x_i|$ for all $i \in 1:N$, and $a(r)$ is a non negative, non increasing, continuous function. In this study we will investigate the effect of having additional agents with various choices of the control functions $u_0$ and $u_{N+1}$ and of the influence functions $a(r)$ and $b(r)$.

3.2 The behavior with a positive influence

Definition 5. We say that an influence function $a(r)$ is positive if $a(r)$ is positive, non increasing and continuous with $a(0) = 1$ and $\lim_{r\to\infty} a(r) = 0$.

The positive influence model is a model in which the influence function is positive. An example of such a function, and the one we shall use in our simulations and in some parts of this study, is the exponential influence function
$$a_1(r) = e^{-r}.$$
Considering the same opinion dynamics model, we start by introducing stubborn agents with positive influence.

3.2.1 The effect of one stubborn static agent

Here we introduce only one stubborn static agent $i = 0$, static in the sense that it does not change its opinion, so we take $u_0 = 0$. The system then becomes:

Model 1.
$$\dot x_0 = 0$$
$$\dot x_i = \sum_{j \in N_i}(x_j - x_i) + a(r_{0,i})(x_0 - x_i) \quad \text{for all } i \in [1, N]$$

For simplicity we take $x_0^0 = 0$ and, for all the other agents, $x_i^0 > 0$. For this model, numerical simulations such as Figure 3.3 all show the same behavior regardless of the initial conditions or the influence function used, as long as the influence function is positive: in all cases we see convergence to one cluster having the opinion of the stubborn agent. This observation leads us to the statement of the following theorem, which explains this convergence phenomenon and exhibits an exponential rate for it.
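A minimal simulation sketch of Model 1 (not from the thesis; the initial distribution, the simulation horizon and the function name hk_stubborn_rhs are illustrative assumptions) that reproduces this behavior with the exponential influence function $a_1(r) = e^{-r}$ and a stubborn agent fixed at opinion 0:

    % Standard HK dynamics plus one static stubborn agent at opinion 0
    % with positive influence a(r) = exp(-r).
    R  = 0.9;                                   % radius of interaction
    x0 = (0.1:0.1:8)';                          % initial opinions of the N regular agents
    a  = @(r) exp(-r);                          % positive influence function
    [t, x] = ode45(@(t, x) hk_stubborn_rhs(x, R, a), [0 20], x0);
    plot(t, x)                                  % all opinions are pulled towards 0

    function dx = hk_stubborn_rhs(x, R, a)
    dx = zeros(size(x));
    for i = 1:numel(x)
        z = x - x(i);                           % relative opinions
        dx(i) = sum(z(abs(z) < R)) ...          % standard HK term over the neighbors
              + a(abs(x(i))) * (0 - x(i));      % pull from the stubborn agent at 0
    end
    end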


More information

On Krause s Multi-Agent Consensus Model With State-Dependent Connectivity

On Krause s Multi-Agent Consensus Model With State-Dependent Connectivity 2586 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 11, NOVEMBER 2009 On Krause s Multi-Agent Consensus Model With State-Dependent Connectivity Vincent D. Blondel, Julien M. Hendrickx, and John N.

More information

Obtaining Consensus of Multi-agent Linear Dynamic Systems

Obtaining Consensus of Multi-agent Linear Dynamic Systems Obtaining Consensus of Multi-agent Linear Dynamic Systems M I GRCÍ-PLNS Universitat Politècnica de Catalunya Departament de Matemàtica plicada Mineria 1, 08038 arcelona SPIN mariaisabelgarcia@upcedu bstract:

More information

Consensus Stabilizability and Exact Consensus Controllability of Multi-agent Linear Systems

Consensus Stabilizability and Exact Consensus Controllability of Multi-agent Linear Systems Consensus Stabilizability and Exact Consensus Controllability of Multi-agent Linear Systems M. ISABEL GARCÍA-PLANAS Universitat Politècnica de Catalunya Departament de Matèmatiques Minería 1, Esc. C, 1-3,

More information

Q1 Q2 Q3 Q4 Tot Letr Xtra

Q1 Q2 Q3 Q4 Tot Letr Xtra Mathematics 54.1 Final Exam, 12 May 2011 180 minutes, 90 points NAME: ID: GSI: INSTRUCTIONS: You must justify your answers, except when told otherwise. All the work for a question should be on the respective

More information

Alternative Characterization of Ergodicity for Doubly Stochastic Chains

Alternative Characterization of Ergodicity for Doubly Stochastic Chains Alternative Characterization of Ergodicity for Doubly Stochastic Chains Behrouz Touri and Angelia Nedić Abstract In this paper we discuss the ergodicity of stochastic and doubly stochastic chains. We define

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

Graph and Controller Design for Disturbance Attenuation in Consensus Networks

Graph and Controller Design for Disturbance Attenuation in Consensus Networks 203 3th International Conference on Control, Automation and Systems (ICCAS 203) Oct. 20-23, 203 in Kimdaejung Convention Center, Gwangju, Korea Graph and Controller Design for Disturbance Attenuation in

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Theory and Applications of Matrix-Weighted Consensus

Theory and Applications of Matrix-Weighted Consensus TECHNICAL REPORT 1 Theory and Applications of Matrix-Weighted Consensus Minh Hoang Trinh and Hyo-Sung Ahn arxiv:1703.00129v3 [math.oc] 6 Jan 2018 Abstract This paper proposes the matrix-weighted consensus

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013

Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Abstract As in optimal control theory, linear quadratic (LQ) differential games (DG) can be solved, even in high dimension,

More information

Average-Consensus of Multi-Agent Systems with Direct Topology Based on Event-Triggered Control

Average-Consensus of Multi-Agent Systems with Direct Topology Based on Event-Triggered Control Outline Background Preliminaries Consensus Numerical simulations Conclusions Average-Consensus of Multi-Agent Systems with Direct Topology Based on Event-Triggered Control Email: lzhx@nankai.edu.cn, chenzq@nankai.edu.cn

More information

Convergence Rate of Nonlinear Switched Systems

Convergence Rate of Nonlinear Switched Systems Convergence Rate of Nonlinear Switched Systems Philippe JOUAN and Saïd NACIRI arxiv:1511.01737v1 [math.oc] 5 Nov 2015 January 23, 2018 Abstract This paper is concerned with the convergence rate of the

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

A Graph-Theoretic Characterization of Structural Controllability for Multi-Agent System with Switching Topology

A Graph-Theoretic Characterization of Structural Controllability for Multi-Agent System with Switching Topology Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference Shanghai, P.R. China, December 16-18, 29 FrAIn2.3 A Graph-Theoretic Characterization of Structural Controllability

More information

Lagrange Multipliers

Lagrange Multipliers Optimization with Constraints As long as algebra and geometry have been separated, their progress have been slow and their uses limited; but when these two sciences have been united, they have lent each

More information

Exact Consensus Controllability of Multi-agent Linear Systems

Exact Consensus Controllability of Multi-agent Linear Systems Exact Consensus Controllability of Multi-agent Linear Systems M. ISAEL GARCÍA-PLANAS Universitat Politècnica de Catalunya Departament de Matèmatiques Minería 1, Esc. C, 1-3, 08038 arcelona SPAIN maria.isabel.garcia@upc.edu

More information

Stabilization and Passivity-Based Control

Stabilization and Passivity-Based Control DISC Systems and Control Theory of Nonlinear Systems, 2010 1 Stabilization and Passivity-Based Control Lecture 8 Nonlinear Dynamical Control Systems, Chapter 10, plus handout from R. Sepulchre, Constructive

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues

More information

Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity

Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity Vincent D. Blondel, Julien M. Hendricx and John N. Tsitsilis July 24, 2009 Abstract

More information

Review of Controllability Results of Dynamical System

Review of Controllability Results of Dynamical System IOSR Journal of Mathematics (IOSR-JM) e-issn: 2278-5728, p-issn: 2319-765X. Volume 13, Issue 4 Ver. II (Jul. Aug. 2017), PP 01-05 www.iosrjournals.org Review of Controllability Results of Dynamical System

More information

Lecture 12: Introduction to Spectral Graph Theory, Cheeger s inequality

Lecture 12: Introduction to Spectral Graph Theory, Cheeger s inequality CSE 521: Design and Analysis of Algorithms I Spring 2016 Lecture 12: Introduction to Spectral Graph Theory, Cheeger s inequality Lecturer: Shayan Oveis Gharan May 4th Scribe: Gabriel Cadamuro Disclaimer:

More information

Clustering and asymptotic behavior in opinion formation

Clustering and asymptotic behavior in opinion formation Clustering and asymptotic behavior in opinion formation Pierre-Emmanuel Jabin, Sebastien Motsch December, 3 Contents Introduction Cluster formation 6. Convexity................................ 6. The Lyapunov

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Pattern Recognition Feature Extraction Hamid R. Rabiee Jafar Muhammadi, Alireza Ghasemi, Payam Siyari Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Dimensionality Reduction

More information

Automatic Control 2. Nonlinear systems. Prof. Alberto Bemporad. University of Trento. Academic year

Automatic Control 2. Nonlinear systems. Prof. Alberto Bemporad. University of Trento. Academic year Automatic Control 2 Nonlinear systems Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011 1 / 18

More information

Network Flows that Solve Linear Equations

Network Flows that Solve Linear Equations Network Flows that Solve Linear Equations Guodong Shi, Brian D. O. Anderson and Uwe Helmke Abstract We study distributed network flows as solvers in continuous time for the linear algebraic equation arxiv:1510.05176v3

More information

A Graph-Theoretic Characterization of Controllability for Multi-agent Systems

A Graph-Theoretic Characterization of Controllability for Multi-agent Systems A Graph-Theoretic Characterization of Controllability for Multi-agent Systems Meng Ji and Magnus Egerstedt Abstract In this paper we continue our pursuit of conditions that render a multi-agent networked

More information

Control, Stabilization and Numerics for Partial Differential Equations

Control, Stabilization and Numerics for Partial Differential Equations Paris-Sud, Orsay, December 06 Control, Stabilization and Numerics for Partial Differential Equations Enrique Zuazua Universidad Autónoma 28049 Madrid, Spain enrique.zuazua@uam.es http://www.uam.es/enrique.zuazua

More information

The servo problem for piecewise linear systems

The servo problem for piecewise linear systems The servo problem for piecewise linear systems Stefan Solyom and Anders Rantzer Department of Automatic Control Lund Institute of Technology Box 8, S-22 Lund Sweden {stefan rantzer}@control.lth.se Abstract

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

Convergence Rate for Consensus with Delays

Convergence Rate for Consensus with Delays Convergence Rate for Consensus with Delays Angelia Nedić and Asuman Ozdaglar October 8, 2007 Abstract We study the problem of reaching a consensus in the values of a distributed system of agents with time-varying

More information

Multiscale timestepping technique for MD RAJIBUL ISLAM

Multiscale timestepping technique for MD RAJIBUL ISLAM Multiscale timestepping technique for ODEs and PDEs MD RAJIBUL ISLAM Master of Science Thesis Stockholm, Sweden 2014 Multiscale timestepping technique for ODEs and PDEs MD RAJIBUL ISLAM Master s Thesis

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Constrained controllability of semilinear systems with delayed controls

Constrained controllability of semilinear systems with delayed controls BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 56, No. 4, 28 Constrained controllability of semilinear systems with delayed controls J. KLAMKA Institute of Control Engineering, Silesian

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

OUTPUT CONSENSUS OF HETEROGENEOUS LINEAR MULTI-AGENT SYSTEMS BY EVENT-TRIGGERED CONTROL

OUTPUT CONSENSUS OF HETEROGENEOUS LINEAR MULTI-AGENT SYSTEMS BY EVENT-TRIGGERED CONTROL OUTPUT CONSENSUS OF HETEROGENEOUS LINEAR MULTI-AGENT SYSTEMS BY EVENT-TRIGGERED CONTROL Gang FENG Department of Mechanical and Biomedical Engineering City University of Hong Kong July 25, 2014 Department

More information

The norms can also be characterized in terms of Riccati inequalities.

The norms can also be characterized in terms of Riccati inequalities. 9 Analysis of stability and H norms Consider the causal, linear, time-invariant system ẋ(t = Ax(t + Bu(t y(t = Cx(t Denote the transfer function G(s := C (si A 1 B. Theorem 85 The following statements

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

EE363 homework 2 solutions

EE363 homework 2 solutions EE363 Prof. S. Boyd EE363 homework 2 solutions. Derivative of matrix inverse. Suppose that X : R R n n, and that X(t is invertible. Show that ( d d dt X(t = X(t dt X(t X(t. Hint: differentiate X(tX(t =

More information

Distributed Coordinated Tracking With Reduced Interaction via a Variable Structure Approach Yongcan Cao, Member, IEEE, and Wei Ren, Member, IEEE

Distributed Coordinated Tracking With Reduced Interaction via a Variable Structure Approach Yongcan Cao, Member, IEEE, and Wei Ren, Member, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 57, NO. 1, JANUARY 2012 33 Distributed Coordinated Tracking With Reduced Interaction via a Variable Structure Approach Yongcan Cao, Member, IEEE, and Wei Ren,

More information

Controlling and Stabilizing a Rigid Formation using a few agents

Controlling and Stabilizing a Rigid Formation using a few agents Controlling and Stabilizing a Rigid Formation using a few agents arxiv:1704.06356v1 [math.ds] 20 Apr 2017 Abstract Xudong Chen, M.-A. Belabbas, Tamer Başar We show in this paper that a small subset of

More information

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3 MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications

More information

Differential Topology Final Exam With Solutions

Differential Topology Final Exam With Solutions Differential Topology Final Exam With Solutions Instructor: W. D. Gillam Date: Friday, May 20, 2016, 13:00 (1) Let X be a subset of R n, Y a subset of R m. Give the definitions of... (a) smooth function

More information

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Jan van den Heuvel and Snežana Pejić Department of Mathematics London School of Economics Houghton Street,

More information

Gramians based model reduction for hybrid switched systems

Gramians based model reduction for hybrid switched systems Gramians based model reduction for hybrid switched systems Y. Chahlaoui Younes.Chahlaoui@manchester.ac.uk Centre for Interdisciplinary Computational and Dynamical Analysis (CICADA) School of Mathematics

More information

16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

More information

A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Math Advanced Calculus II

Math Advanced Calculus II Math 452 - Advanced Calculus II Manifolds and Lagrange Multipliers In this section, we will investigate the structure of critical points of differentiable functions. In practice, one often is trying to

More information

Lecture 10: Dimension Reduction Techniques

Lecture 10: Dimension Reduction Techniques Lecture 10: Dimension Reduction Techniques Radu Balan Department of Mathematics, AMSC, CSCAMM and NWC University of Maryland, College Park, MD April 17, 2018 Input Data It is assumed that there is a set

More information

Complex Laplacians and Applications in Multi-Agent Systems

Complex Laplacians and Applications in Multi-Agent Systems 1 Complex Laplacians and Applications in Multi-Agent Systems Jiu-Gang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complex-valued Laplacians have been shown to be powerful

More information

Optimal Linear Feedback Control for Incompressible Fluid Flow

Optimal Linear Feedback Control for Incompressible Fluid Flow Optimal Linear Feedback Control for Incompressible Fluid Flow Miroslav K. Stoyanov Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

Eigenvectors Via Graph Theory

Eigenvectors Via Graph Theory Eigenvectors Via Graph Theory Jennifer Harris Advisor: Dr. David Garth October 3, 2009 Introduction There is no problem in all mathematics that cannot be solved by direct counting. -Ernst Mach The goal

More information

Distinct distances between points and lines in F 2 q

Distinct distances between points and lines in F 2 q Distinct distances between points and lines in F 2 q Thang Pham Nguyen Duy Phuong Nguyen Minh Sang Claudiu Valculescu Le Anh Vinh Abstract In this paper we give a result on the number of distinct distances

More information

Second Order Optimality Conditions for Constrained Nonlinear Programming

Second Order Optimality Conditions for Constrained Nonlinear Programming Second Order Optimality Conditions for Constrained Nonlinear Programming Lecture 10, Continuous Optimisation Oxford University Computing Laboratory, HT 2006 Notes by Dr Raphael Hauser (hauser@comlab.ox.ac.uk)

More information

Semidefinite Programming Duality and Linear Time-invariant Systems

Semidefinite Programming Duality and Linear Time-invariant Systems Semidefinite Programming Duality and Linear Time-invariant Systems Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 2 July 2004 Workshop on Linear Matrix Inequalities in Control LAAS-CNRS,

More information

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits Ran Raz Amir Shpilka Amir Yehudayoff Abstract We construct an explicit polynomial f(x 1,..., x n ), with coefficients in {0,

More information

On the Scalability in Cooperative Control. Zhongkui Li. Peking University

On the Scalability in Cooperative Control. Zhongkui Li. Peking University On the Scalability in Cooperative Control Zhongkui Li Email: zhongkli@pku.edu.cn Peking University June 25, 2016 Zhongkui Li (PKU) Scalability June 25, 2016 1 / 28 Background Cooperative control is to

More information

On the convergence to saddle points of concave-convex functions, the gradient method and emergence of oscillations

On the convergence to saddle points of concave-convex functions, the gradient method and emergence of oscillations 53rd IEEE Conference on Decision and Control December 15-17, 214. Los Angeles, California, USA On the convergence to saddle points of concave-convex functions, the gradient method and emergence of oscillations

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information