Optimal consensus and opinion dynamics


DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2016

Optimal consensus and opinion dynamics

OTHMANE MAZHAR

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES


Optimal consensus and opinion dynamics

OTHMANE MAZHAR

Master's Thesis in Optimization and Systems Theory (30 ECTS credits)
Master Programme in Applied and Computational Mathematics (120 credits)
Royal Institute of Technology, year 2016
Supervisor at KTH: Xiaoming Hu
Examiner: Xiaoming Hu

TRITA-MAT-E 2016:67
ISRN-KTH/MAT/E--16/67--SE

Royal Institute of Technology
SCI School of Engineering Sciences
KTH SCI
SE Stockholm, Sweden


Abstract

In this thesis we study the influence of the communication graph on the behavior of multi-agent systems. Specifically, we investigate two issues: the first concerns the existence of a consensus control for linear dynamics, and the second is a study of the behavior of a nonlinear dynamical system related to opinion dynamics. For the finite-time optimal consensus problem of a multi-agent system, we formulate the problem as an optimization problem on a Hilbert space in order to model the graph neighborhood constraints. We then show that completeness of the graph is a necessary and sufficient condition for the existence of a linear control that guarantees consensus in finite time. As an extension of this result, we show that the optimal control we obtain is also optimal among the larger class of nonlinear controls, and that it can be implemented as an optimal control for connected but incomplete graphs if we replace the neighborhood restriction by a feedback control using the information from all edges of the graph. The second part is a study of a modified version of the continuous opinion dynamics model introduced by Hegselmann and Krause. To modify the model we introduce stubborn agents, agents whose opinions do not change over time. Specifically, we introduce two types of agents: one that can influence the whole distribution at once, which we call of positive influence, and one with a bounded influence, which we call of non-negative influence. For each introduced type we study the topological properties of the distribution and the clustering phenomena observed, as well as the statistical properties, in the presence of one or two stubborn agents. We end this part with two possible applications of stubborn agents: reaching consensus and tracking trajectories.


Sammanfattning

In this thesis we study the influence of the communication graph on the behavior of multi-agent systems. We investigate in particular two questions: the first concerns the existence of a consensus control law for linear dynamics, and the second studies the behavior of nonlinear dynamics related to opinion dynamics. The finite-time optimal consensus problem for a multi-agent system is formulated as an optimization problem in a Hilbert space in order to model the graph's neighborhood constraints. We show that completeness of the graph is a necessary and sufficient condition for the existence of a linear control law that guarantees consensus in finite time. As an extension of this result we show that the optimal control law is also optimal within a larger class of nonlinear controllers, and that it can be implemented as optimal for connected but incomplete graphs if we replace the neighborhood constraints with a feedback law using the information from all edges of the graph. The second part of this study concerns a modified version of the continuous opinion dynamics model introduced by Hegselmann and Krause. To modify the model we introduce stubborn agents, agents whose opinions do not change over time; more specifically, we introduce two types of agents, one that can influence the whole distribution at once, which we call of positive influence, and one with a bounded influence, which we call of non-negative influence. For each introduced type we study the topological properties of the distribution and the observed clustering phenomena, as well as the statistical properties, in the presence of one or two stubborn agents. We end this part with two possible applications of stubborn agents: reaching consensus and following trajectories.


Contents

1 General introduction

2 Optimal consensus for a linear multi-agent system in finite time
  2.1 Preliminary
    2.1.1 Graph theory notations
    2.1.2 Elements of optimization in Hilbert spaces
    2.1.3 Projection matrices
  2.2 Problem statement
    2.2.1 The consensus problem model
    2.2.2 The consensus problem in a Hilbert space
    2.2.3 A linear optimal control problem
  2.3 Solution of the problem
  2.4 Extensions
  2.5 Conclusion

3 Opinion dynamics in the presence of stubborn agents
  3.1 Model presentation
  3.2 The behavior with a positive influence
    3.2.1 The effect of one stubborn static agent
    3.2.2 The effect of two stubborn static agents
  3.3 The behavior with a non-negative influence
    3.3.1 The effect of one stubborn static agent
    3.3.2 The effect of two stubborn static agents
  3.4 Application: HK consensus control by a stubborn agent
  3.5 Conclusion


Chapter 1

General introduction

This master's thesis consists of a study of multi-agent systems, in which we study conditions under which a consensus control can be found for linear dynamics, as well as the behavior of simple nonlinear dynamics such as the so-called Hegselmann-Krause bounded confidence model. In the first part we address the finite-time optimal consensus problem of a linear time-invariant multi-agent system with graph topology constraints. Work in this area has been done in [3, 10] for a homogeneous system of agents with linear dynamics, both in finite time, where we require consensus at a fixed time T, and in infinite time, where we want asymptotic consensus. In their papers [1, 2], Cao, Y. and Ren, W. addressed the optimal consensus problem for systems of mobile agents with single-integrator dynamics. In this setting, the authors constrain the agents to use only relative information in their controllers, and they show that the graph Laplacian matrix used in the optimal controller for the system corresponds to a complete directed graph. Another line of research on the optimal consensus problem has been taken by Semsar-Kazerooni, E. and Khorasani, K. in [5], in which the consensus requirement is imposed by the cost function. However, with such a formulation the optimal controller in general cannot be implemented with relative state information only. Important results for the finite-time case, also known as the rendezvous problem, have been provided by Thunberg, J. and Hu, X. in [8], where it is shown that for a homogeneous system of agents with linear dynamics no linear, time-invariant feedback control law based on relative state information can guarantee consensus in finite time. By relative information, we mean using only the pairwise differences between the states of pairs of agents that communicate.
They also show that a time-varying optimal output feedback control using relative information only exists when the communication graph is complete, and that this control can be obtained from the solution of the problem with no topology constraint. The general difficulty in finding a consensus-reaching control that depends only on relative information respecting the graph topology is that there is no general characterization of the set of such controllers. To avoid this issue we formulate the consensus problem as an optimal control problem and use functional analysis techniques to impose such constraints, thus formulating the problem as a minimum norm problem in a Hilbert space; we show that for this kind of problem the Lagrange multiplier conditions are necessary and sufficient to guarantee optimality. Then, by restricting ourselves to the class of linear time-varying controls, we notice that the graph topology constraint can be fulfilled

by imposing a structural restriction on the differential of the control, which can be taken into account with Lagrange multipliers. Solving the Lagrange multiplier conditions leaves us with two results: on the one hand, we get a general formula for the consensus-reaching optimal control; on the other hand, we get an algebraic condition on the graph topology, which we use to determine for which graphs a consensus-reaching control can be found. In the second part we look at variants of the Hegselmann and Krause opinion dynamics with bounded confidence. The opinion dynamics model presented here is about opinion compromise between different agents. These kinds of models were introduced by Hegselmann and Krause in their original study [1] to capture the interactions that arise between agents. The general difficulty in analyzing these models comes from the state-dependent topology. In their work [7, 8], V.D. Blondel, J.M. Hendrickx and J.N. Tsitsiklis address these difficulties by showing convergence properties for both the discrete- and continuous-time bounded influence HK model. Here we also assume continuous opinions, and that all agents have bounded confidence, in the sense that in their dynamics they only take into account opinions that are close to their own. This model leads to clustering of opinions, and a natural question that we investigate is how agents that decide not to change their opinions will influence the entire dynamics. We will consider the model of Hegselmann and Krause with the introduction of various types of so-called stubborn agents. Previous studies of the HK opinion dynamics model show that even though we get a clustering phenomenon, not all initial positions that start with a connected interaction graph result in an asymptotic consensus, as shown in [2, 3, 5].
In fact, discontinuities in the graph topology take place and cannot be reversed during the interaction, and the graph becomes disconnected; this is one of the challenges posed by dynamics that depend on state differences. The loss of connectivity can yield several clusters of agent opinions at different positions, and if a stubborn agent wants to steer the distribution as a whole to some consensus opinion, then he should influence the different clusters. However, even under these simplistic hypotheses of one-dimensional continuous-time opinions, few theoretical results exist that relate the initial distribution to the final clusters and their sizes, or that allow one to control the final result. Although the HK opinion dynamics model can easily be extended to higher-dimensional spaces, and some of our results are easily verified in this more general setting, in this part we are mainly interested in the one-dimensional continuous case. On the other hand, we try to make the model richer by introducing new agents whose opinions are not influenced by the interaction, called stubborn agents. The modified model that we provide in this study guarantees opinion consensus and steering to some common value for almost any initial opinion distribution, provided that the region of influence of the stubborn agent is wide enough; it gives a way to model more social phenomena, such as the existence of social lobbies and the partition of the population into a left, a right and a middle; and it finally provides a device for trajectory tracking in this simple model. The remainder of this study is structured as follows: we start by recalling relevant results on the standard HK opinion dynamics model. We then introduce two new types of stubborn agents, one type having the ability to influence the whole distribution and the other having bounded influence. For each of these we study different scenarios, with one or two stubborn agents and with static and dynamic opinions. Finally, we show how these agents make trajectory tracking possible as an application.
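The bounded-confidence update described above is easy to sketch in code. The following is a minimal discrete-time HK simulation with an optional set of stubborn agents; all function names, parameter values and the confidence radius `r` are our own illustrative choices, not taken from the thesis.

```python
def hk_step(x, r, stubborn=frozenset()):
    """One synchronous Hegselmann-Krause update: each non-stubborn agent
    moves to the average of all opinions within confidence radius r."""
    new = []
    for i, xi in enumerate(x):
        if i in stubborn:
            new.append(xi)                      # stubborn agents never move
        else:
            nbrs = [xj for xj in x if abs(xj - xi) <= r]
            new.append(sum(nbrs) / len(nbrs))
    return new

def simulate(x0, r, steps, stubborn=frozenset()):
    x = list(x0)
    for _ in range(steps):
        x = hk_step(x, r, stubborn)
    return x

# Two well-separated groups with a small radius freeze into two clusters:
clusters = simulate([0.0, 0.1, 0.9, 1.0], 0.2, 10)       # -> two clusters

# A stubborn agent (index 2, opinion 0.5) with a wide enough region of
# influence pulls the whole population to its opinion:
steered = simulate([0.0, 1.0, 0.5], 0.6, 5, stubborn={2})
```

With the small radius the interaction graph disconnects and the two groups settle at 0.05 and 0.95; with the wide radius all opinions reach 0.5, illustrating the consensus-steering role of a stubborn agent of positive influence.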

Chapter 2

Optimal consensus for a linear multi-agent system in finite time

In this part we address the finite-time optimal consensus problem of a linear time-invariant multi-agent system with graph topology constraints. The general difficulty in finding a consensus-reaching control for this kind of system that depends only on relative information is that there is no general characterization of the set of such controls. To avoid this, we start by formulating the consensus problem as an optimal control problem and transform the graph topology constraint into an analytic constraint, thus formulating the problem as a minimum norm problem in a Hilbert space. We show that for this kind of problem the Lagrange multiplier conditions are a necessary and sufficient characterization of the optimal solution. By restricting ourselves to the class of linear time-varying controls, we notice that the graph topology constraint can be fulfilled by imposing a certain structure on the differential of the control, which is the only requirement for using Lagrange multipliers. The solution to the Lagrange multiplier system of equations gives us, on the one hand, a general formula for the consensus-reaching optimal control, and on the other, an algebraic condition to be fulfilled by the graph topology if one hopes to find a consensus-reaching control.

2.1 Preliminary

We first start by recalling some useful definitions and results from graph theory, functional analysis and linear algebra, and establish some properties that will be used later on.

2.1.1 Graph theory notations

An undirected graph $G$ consists of a set of vertices, or nodes, denoted $V$, and a set of edges $\Lambda \subseteq V^2$, where $a = (v, w) = (w, v) \in \Lambda$ and $v, w \in V$. If every possible edge exists, the graph is said to be complete or totally connected. A path on $G$ of length $N$ from $v_0$ to $v_N$ is an ordered set of distinct vertices $\{v_0, v_1, \dots, v_N\}$ such that $(v_{i-1}, v_i) \in \Lambda$ for all $i \in 1:N$. If a path exists from every $v_i$ to every $v_j$, the graph is said to be connected; otherwise it is disconnected.

The adjacency matrix $A$ of a graph $G$ is a square matrix of size $|V|$, the number of vertices, defined by $A_{ij} = 1$ if $(v_i, v_j) \in \Lambda$, and zero otherwise. Note that $A$ is uniquely defined by the graph up to a permutation similarity depending on the enumeration of the vertices. From the adjacency matrix $A$ we define the Laplacian of the graph $L$ as follows. Let $D$ be the matrix with the out-degree of each vertex along the diagonal. The Laplacian of the graph is defined as $L = D - A$ and the normalized Laplacian as $\bar{L} = D^{-1}(D - A)$, where $D^{-1}$ is the diagonal matrix of inverses of out-degrees, with a zero for each node with out-degree zero.

Figure 2.1: A drawing of three labeled undirected graphs G1, G2 and G3.

Example. The drawing of Figure 2.1 shows three undirected graphs G1, G2 and G3. The graph G1 is complete since the number of edges is maximal. The graphs G2 and G3 are connected but not complete. Let $G2 = (V_2, \Gamma_2)$, where $V_2 = \{1, 2, 3, 4\}$ is the set of vertices of G2 and $\Gamma_2 = \{(1, 2), (1, 4), (2, 4), (2, 3), (3, 4)\}$ is the set of edges of G2; then the Laplacian $L_2$ and normalized Laplacian $M_2$ of G2 are:
$$L_2 = \begin{pmatrix} 2 & -1 & 0 & -1 \\ -1 & 3 & -1 & -1 \\ 0 & -1 & 2 & -1 \\ -1 & -1 & -1 & 3 \end{pmatrix}, \qquad M_2 = \begin{pmatrix} 1 & -1/2 & 0 & -1/2 \\ -1/3 & 1 & -1/3 & -1/3 \\ 0 & -1/2 & 1 & -1/2 \\ -1/3 & -1/3 & -1/3 & 1 \end{pmatrix}.$$
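The example's matrices follow mechanically from the definitions $L = D - A$ and $\bar{L} = D^{-1}(D - A)$. A small sketch (the function name and 1-based edge convention are ours):

```python
def laplacians(n, edges):
    """Adjacency, Laplacian and normalized Laplacian of an undirected
    graph on vertices 1..n given as a list of edges (i, j)."""
    A = [[0] * n for _ in range(n)]
    for (i, j) in edges:
        A[i - 1][j - 1] = A[j - 1][i - 1] = 1
    deg = [sum(row) for row in A]                     # degree of each vertex
    L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)]
         for i in range(n)]                           # L = D - A
    # normalized Laplacian D^{-1}(D - A); rows of isolated nodes stay zero
    M = [[L[i][j] / deg[i] if deg[i] else 0.0 for j in range(n)]
         for i in range(n)]
    return A, L, M

# The graph G2 of Figure 2.1:
A2, L2, M2 = laplacians(4, [(1, 2), (1, 4), (2, 4), (2, 3), (3, 4)])
```

Every row of both Laplacians sums to zero, reflecting that the all-ones vector lies in their kernel, a fact used repeatedly below.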

2.1.2 Elements of optimization in Hilbert spaces

Of interest to us are minimum norm problems with respect to a linear variety in Hilbert spaces.

Definition 1. A set $H$ is called a real Hilbert space if $H$ is an $\mathbb{R}$-vector space together with a real functional $(\cdot,\cdot)$ on $H^2$ with the following properties:
- $(x, x) \geq 0$, and $(x, x) = 0$ if and only if $x = 0$
- $(x + y, z) = (x, z) + (y, z)$ for all $x, y, z \in H$
- $(\lambda x, y) = \lambda (x, y)$ for all $\lambda \in \mathbb{R}$ and $x, y \in H$
- $(x, y) = (y, x)$ for all $x, y \in H$
- $H$ is complete

Proposition. The Hilbert space $H$ is a Banach space with norm $\|x\| = \sqrt{(x, x)}$.

Example. Two classical Hilbert spaces that we work with here are $L^2(\mathbb{R}^d, \lambda)$, the set of square integrable functions on $\mathbb{R}^d$ with the Lebesgue measure, and $(\mathbb{R}^l, \|\cdot\|_2)$, the $l$-dimensional Euclidean space.

One important theorem in Hilbert spaces, which will be important for us, is the projection theorem.

Theorem (The projection theorem). Let $M$ be a closed convex set in a Hilbert space $H$. For every $x_0 \in H$ there exists a unique point $y_0 \in M$ such that $\|x_0 - y_0\| = \inf_{y \in M} \|x_0 - y\|$. Furthermore, a necessary and sufficient condition for $y_0$ to be the unique minimizing vector is that $(x_0 - y_0, y - y_0) \leq 0$ for all $y \in M$.

We will use this result for the case where $V$ is a closed linear variety.

Corollary. Let $V$ be a closed linear variety in a Hilbert space $H$ such that $V = x_0 + M$, where $M$ is a closed subspace of $H$. Then there is a unique $y_0 \in V$ of minimum norm. Furthermore, a necessary and sufficient condition for $y_0$ to be the unique minimizing vector is that $(y, y_0) = 0$ for all $y \in M$.

Another important result in optimization theory, although it establishes only necessary conditions for optimality, is the Lagrange multiplier theorem.

Definition 2. Let $F$ be a continuously differentiable function from an open set $D$ in a Banach space $X$ into a Banach space $Y$. If $x_0 \in D$ is such that $F'(x_0)$ maps $X$ onto $Y$, the point $x_0$ is said to be a regular point of the function $F$.

Theorem (Lagrange multiplier). Suppose the continuously differentiable functional $f$ has a local extremum under the constraint $H(x) = 0$ at the regular point $x_0$. Define the Lagrangian as $L(x, z) = f(x) + zH(x)$. Then there exists an element $z_0 \in Z^*$ such that $x_0$ is a stationary point of $L$, i.e. $\nabla_x L(x_0, z_0) = 0$. Here $Z^*$ is the dual space of $Z$, identified with $Z$ in the case of a Hilbert space.

For minimum norm problems we can establish an equivalence between Lagrange duality and the projection theorem.

Theorem. Let $A$ be a linear operator between Hilbert spaces $X$ and $Y$, and $b \in Y$. We define $f(x) = \|x\|^2$, $H(x) = Ax - b$ and $L(x, z) = f(x) + zH(x)$. Suppose $b \neq 0$ and $\{x : Ax - b = 0\} \neq \emptyset$. Then the minimum norm problem has a unique solution; moreover, $x_0$ solves the minimum norm problem
$$(P): \quad \text{minimize } \{\|x\|^2 : Ax - b = 0\}$$
if and only if $x_0 \in V$ and there exists $z_0$ such that $\nabla_x L(x_0, z_0) = 0$.

Proof. Let $M = \ker(A)$; then $M$ is a closed subspace: closed since $M$ is the inverse image of $\{0\}$ and $A$ is continuous, and a subspace since $A$ is linear. Let $y_0 \in \{x : Ax - b = 0\}$; $y_0$ is then different from $0$ since $b \neq 0$, hence $V = y_0 + M$ is a closed linear variety. By the projection theorem, there is a unique $x_0 \in V$ of minimum norm.

The only if part follows directly from the Lagrange multiplier theorem: since the minimum norm problem has a solution $x_0 \neq 0$ (as $b \neq 0$) and $h \mapsto \nabla_x (x, x)h = 2(x, h)$ is onto except at $0$, $x_0$ is a regular point. By the Lagrange multiplier theorem there exists $z_0$ such that $L(x_0, z_0)$ is stationary.

The if part: suppose $x_0 \in V$ is a stationary point of the Lagrangian for some $z_0$. Then:
$$\nabla_x L(x_0, z_0) = 0 \iff 2(x_0, h) + (z_0, Ah) = 0 \quad \forall h \in X$$
$$\iff 2(x_0, h) + (A^* z_0, h) = 0 \quad \forall h \in X, \text{ where } A^* \text{ is the adjoint operator of } A,$$
$$\iff 2x_0 + A^* z_0 = 0 \iff x_0 = -\tfrac{1}{2} A^* z_0.$$
Then for all $x \in M = \ker(A)$ we get:
$$(x_0, x) = (-\tfrac{1}{2} A^* z_0, x) = (-\tfrac{1}{2} z_0, Ax) = 0.$$
Hence $x_0$ is orthogonal to $M$. By the projection theorem, $x_0$ is the optimal solution.
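The stationarity condition $2x_0 + A^* z_0 = 0$ is easy to check concretely in $\mathbb{R}^3$ with a single linear constraint $(a, x) = b$, where the minimum norm point is $x_0 = (b/\|a\|^2)a$. The numerical values below are arbitrary test data of ours:

```python
# Minimum-norm point on the variety {x : (a, x) = b} in R^3: the Lagrange
# condition 2*x0 + z0*a = 0 forces x0 to be a multiple of a, and the
# constraint (a, x0) = b then gives x0 = (b / ||a||^2) a.
a = [1.0, 2.0, 2.0]
b = 3.0
s = sum(ai * ai for ai in a)            # ||a||^2 = 9
z0 = -2.0 * b / s                       # the Lagrange multiplier
x0 = [-0.5 * z0 * ai for ai in a]       # x0 = -(1/2) A* z0

# feasibility: (a, x0) = b
assert abs(sum(ai * xi for ai, xi in zip(a, x0)) - b) < 1e-12
# orthogonality to ker(A): (a, h) = 0 implies (x0, h) = 0
h = [2.0, -1.0, 0.0]
assert abs(sum(hi * xi for hi, xi in zip(h, x0))) < 1e-12
```

The orthogonality assertion is exactly the projection-theorem characterization of the corollary: the minimizer is the unique feasible point orthogonal to the kernel of the constraint operator.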

Lemma. If $\alpha(t)$ and $\beta(t)$ are continuous on $[t_0, T]$ and
$$\int_{t_0}^{T} [\alpha(t)h(t) + \beta(t)\dot{h}(t)]\,dt = 0$$
for every $h \in C^1[t_0, T]$ with $h(t_0) = h(T) = 0$, then $\beta$ is differentiable and $\dot{\beta}(t) = \alpha(t)$ on $[t_0, T]$.

2.1.3 Projection matrices

An application of the minimum norm theory developed previously, in the Euclidean vector space $\mathbb{R}^n$, is the minimum norm vector to a subspace. Here we illustrate this problem, present the solution and establish its equivalence to the solution of the orthogonal projection problem, which leads us to introduce the concept of projection matrices.

Problem 2.1. Let $b \in \mathbb{R}^n$ and $a_1, a_2, \dots, a_k$ be independent vectors of $\mathbb{R}^n$. We want to find $p \in \mathrm{span}\{a_1, a_2, \dots, a_k\}$ such that $b - p$ is orthogonal to $\mathrm{span}\{a_1, a_2, \dots, a_k\}$.

$p$ is a solution to this problem if for all $i \in 1:k$, $a_i^T(b - p) = 0$, which is equivalent to $A^T(b - p) = 0$, where $A = [a_1\ a_2\ \cdots\ a_k]$. But since $p \in \mathrm{span}\{a_1, a_2, \dots, a_k\}$ and $a_1, a_2, \dots, a_k$ are independent, there are unique $x_1, x_2, \dots, x_k$ such that $p = x_1 a_1 + x_2 a_2 + \cdots + x_k a_k = Ax$, where $x = [x_1, x_2, \dots, x_k]^T$. Then we get:
$$A^T(b - p) = 0 \iff A^T A x = A^T b \iff x = (A^T A)^{-1} A^T b \iff p = A(A^T A)^{-1} A^T b.$$
$(A^T A)^{-1}$ exists since $a_1, a_2, \dots, a_k$ are independent, so $A$ and $A^T A$ have full column rank. We call $P = A(A^T A)^{-1} A^T$ the projection matrix; hence the solution of our problem is $p = Pb$.

Next we look at a related result that will be shown to be equivalent: the error minimization problem in $\mathbb{R}^n$, also known as the least squares problem. In this problem we are interested in finding the $p \in \mathrm{range}(A)$ that is closest to $b$.

Problem 2.2. The least squares problem with respect to the range of $A$ is:
$$\text{minimize } \|b - Ax\|^2.$$
A direct computation shows that
$$\frac{d\|b - Ax\|^2}{dx} = -2(A^T b - A^T A x) = 0,$$
so $x = (A^T A)^{-1} A^T b$ and $p = Ax = Pb = A(A^T A)^{-1} A^T b$.

The following proposition summarizes properties of projection matrices.

Proposition. Let $P$ be a projection matrix onto $\mathrm{span}\{a_1, a_2, \dots, a_k\}$. Then for all $x \in \mathbb{R}^n$ and $v \in \mathrm{span}\{a_1, a_2, \dots, a_k\}$ we have the following:
1. $\mathrm{range}(P) = \mathrm{span}\{a_1, a_2, \dots, a_k\}$
2. $P^T = P$
3. $P^2 - P = 0$
4. $\|Px\| \leq \|x\|$
5. $\|v - Px\| \leq \|v - x\|$

2.2 Problem statement

In this section we consider the problem of finding a consensus-reaching optimal control for a linear multi-agent system in finite time.

2.2.1 The consensus problem model

We consider $l$ agents modeled by linear time-invariant systems with states $\{x_1, x_2, \dots, x_l\}$ and corresponding controls $\{u_1, u_2, \dots, u_l\}$, such that:
$$\dot{x}_i = Ax_i + Bu_i \quad \text{for all } i \in 1:l, \qquad x_i(t_0) = x_i^0,$$
where $x_i^0$ is the initial position of agent $i$ at time $t_0$. We say that a consensus is reached in finite time $T$ if at $t = T$ we have:
$$x_1(T) = x_2(T) = \cdots = x_l(T).$$
The relative information at node $i$ with respect to a neighboring node $j$ is $z_{ij} = x_i - x_j$. The control is said to be of relative information if for every agent $x_i$ the control $u_i$ is only a function of $z_{ij}$ for all $j \in N_i$, where $N_i$ is the set of neighboring agents in the communication graph $G$, and we write $u_i((z_{ij})_{j \in N_i})$ or $u_i((x_i - x_j)_{j \in N_i})$. The control energy for one agent is given by $\int_{t_0}^{T} \|u_i\|^2 dt$ and for the whole system by $\sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$.
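The projection-matrix properties above can be verified numerically. A small sketch for $P = A(A^T A)^{-1}A^T$ with two spanning vectors in $\mathbb{R}^3$ (the vectors and helper names are ours):

```python
# Build P = A (A^T A)^{-1} A^T for span{a1, a2} in R^3 and check the
# proposition's properties P^T = P and P^2 = P.
a1, a2 = [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]
A = [a1, a2]                       # rows = the spanning vectors

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(r) for r in zip(*A)]    # 3x2 matrix whose columns are a1, a2
G = mat_mul(A, At)                 # 2x2 Gram matrix, i.e. A^T A in the text
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]
P = mat_mul(mat_mul(At, Ginv), A)  # 3x3 projection matrix

# symmetry (property 2) and idempotence (property 3)
assert all(abs(P[i][j] - P[j][i]) < 1e-12 for i in range(3) for j in range(3))
P2 = mat_mul(P, P)
assert all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(3) for j in range(3))
```

Applying $P$ to any $b$ gives the least squares solution $p = Pb$ of Problem 2.2, and property 4 ($\|Pb\| \leq \|b\|$) follows from idempotence and symmetry.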

Problem 2.3. Our consensus-reaching optimal control problem is then modeled by the following set of equations:
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to:
$$\dot{x}_i = Ax_i + Bu_i, \quad i \in 1:l \qquad (2.2.1)$$
$$x_1(T) = x_2(T) = \cdots = x_l(T) \qquad (2.2.2)$$
$$x_i(t_0) = x_i^0, \quad i \in 1:l$$
$$u_i = u_i((z_{ij})_{j \in N_i}), \quad i \in 1:l \qquad (2.2.3)$$

2.2.2 The consensus problem in a Hilbert space

We define the state variable $x$ and the control $u$ as:
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_l \end{pmatrix} \quad \text{and} \quad u = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_l \end{pmatrix}.$$
This problem can be stated as a minimum norm problem in a Hilbert space.

Proposition. $H_1$, the set of all controls $u$, and $H_2$, the set of controls $u$ depending only on relative information, $u_i = u_i((z_{ij})_{j \in N_i})$, are Hilbert spaces.

Proof. $H_1$ is a Hilbert space since it can be identified with $((L^2(\mathbb{R}^d, \lambda))^l, \|\cdot\|_2)$, the composition of the two Hilbert spaces $L^2(\mathbb{R}^d, \lambda)$ and $(\mathbb{R}^l, \|\cdot\|_2)$. $H_2$ is a Hilbert space as a closed subspace of $H_1$.

Equation (2.2.1) can be rewritten more compactly as $\dot{x} = (I \otimes A)x + (I \otimes B)u$; in integral form this equation becomes
$$x(t) - x^0 - \int_{t_0}^{t} \big[(I \otimes A)x + (I \otimes B)u\big] dt = 0,$$
while equation (2.2.2) can be rewritten as a linear system of equations $(D \otimes I)x(T) = 0$ with $\ker(D) = \mathrm{span}\{\mathbf{1}\}$. Hence we get the following minimum norm problem in a Hilbert space.

Problem 2.4.
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to:
$$x(t) - x^0 - \int_{t_0}^{t} \big[(I \otimes A)x + (I \otimes B)u\big] dt = 0$$
$$(D \otimes I)x(T) = 0, \qquad x(t_0) = x^0, \qquad u \in H_2.$$

2.2.3 A linear optimal control problem

A standard solution to this problem would be to use the projection theorem, but this approach meets some difficulties due to the absence of an explicit characterization of the elements of $H_2$. Instead we start by making the following observation: if an optimal control $u$ that depends only on relative information is to be found, then:
$$du_i = \sum_{j \in N_i} \nabla_{z_{ij}} u_i \, dz_{ij} = \sum_{j \in N_i} \nabla_{x_i} u_i \, (dx_i - dx_j).$$
Denote $\nabla_{x_i} u_i = K_i$, $n_i = \mathrm{card}(N_i)$ and $K = \mathrm{diag}(K_1, K_2, \dots, K_l)$; then, with $L$ the Laplacian of the graph,
$$du = K(L \otimes I)dx.$$
We restrict ourselves to the space of linear controls $u = K(L \otimes I)x$, a closed subspace of $H_2$, and define our problem in this space as follows.

Problem 2.5.
$$\text{minimize } \int_{t_0}^{T} x^T (L^T \otimes I) K^T K (L \otimes I) x \, dt$$
subject to:
$$x(t) - x^0 - \int_{t_0}^{t} (I \otimes A + (I \otimes B)K(L \otimes I))x \, dt = 0$$
$$(D \otimes I)x(T) = 0, \qquad x(t_0) = x^0.$$

2.3 Solution of the problem

Since we established that the Lagrange multiplier conditions are equivalent to the projection theorem for minimum norm problems in Hilbert spaces, we will use them to find necessary and sufficient conditions for optimality. We start by defining the following functions and their derivatives. Denote:
$$l(x, K) = x^T (L^T \otimes I) K^T K (L \otimes I) x, \qquad f(x, K) = (I \otimes A + (I \otimes B)K(L \otimes I))x, \qquad G(x) = (D \otimes I)x.$$

Proposition. For all $h$ and $v$ we have:
$$\nabla_x l(x, K)h = 2x^T (L^T \otimes I) K^T K (L \otimes I) h \qquad (2.3.1)$$
$$\nabla_K l(x, K)v = 2x^T (L^T \otimes I) K^T v (L \otimes I) x \qquad (2.3.2)$$
$$\nabla_x f(x, K)h = (I \otimes A + (I \otimes B)K(L \otimes I))h$$
$$\nabla_K f(x, K)v = (I \otimes B)v(L \otimes I)x$$
$$\nabla_x G(x)h = (D \otimes I)h$$

Proof. We prove (2.3.1) and (2.3.2); the others can be obtained in a similar way. For all $h$ we have:
$$l(x + h, K) - l(x, K) = h^T (L^T \otimes I) K^T K (L \otimes I)x + x^T (L^T \otimes I) K^T K (L \otimes I)h + h^T (L^T \otimes I) K^T K (L \otimes I)h = 2x^T (L^T \otimes I) K^T K (L \otimes I)h + O(\|h\|^2),$$
hence $\nabla_x l(x, K)h = 2x^T(L^T \otimes I)K^T K(L \otimes I)h$. Similarly,
$$l(x, K + v) - l(x, K) = x^T (L^T \otimes I) K^T v (L \otimes I)x + x^T (L^T \otimes I) v^T K (L \otimes I)x + x^T (L^T \otimes I) v^T v (L \otimes I)x = 2x^T (L^T \otimes I) K^T v (L \otimes I)x + O(\|v\|^2),$$
hence $\nabla_K l(x, K)v = 2x^T(L^T \otimes I)K^T v(L \otimes I)x$.

Next we use the Lagrange multiplier theorem to move from an optimization problem to a system of differential equations.

Lemma. $K$ solves Problem 2.5 if and only if the following set of differential equations has a solution:
$$\dot{\lambda} = -(I \otimes A^T + (L^T \otimes I)K^T(I \otimes B^T))\lambda - 2(L^T \otimes I)K^T K(L \otimes I)x \qquad (2.3.3)$$
$$\lambda(T) = (D^T \otimes I)\mu \qquad (2.3.4)$$
$$(I \otimes B^T)\lambda = -2K(L \otimes I)x \qquad (2.3.5)$$
$$\dot{x} = (I \otimes A + (I \otimes B)K(L \otimes I))x \qquad (2.3.6)$$
$$x(t_0) = x^0 \qquad (2.3.7)$$
$$(D \otimes I)x(T) = 0 \qquad (2.3.8)$$

Proof. Using the Lagrange multiplier theorem on Problem 2.5, we get $\lambda \in BV^{dl}[t_0, T]$ and $\mu \in \mathbb{R}^{d(l-1)}$ such that for all $h$ and $v$:
$$\int_{t_0}^{T} \nabla_x l \, h \, dt + \int_{t_0}^{T} d\lambda^T \Big[h - \int_{t_0}^{t} \nabla_x f \, h \, d\tau\Big] + \mu^T \nabla_x G \, h(T) = 0 \qquad (2.3.9)$$
$$\int_{t_0}^{T} \nabla_K l \, v \, dt - \int_{t_0}^{T} d\lambda^T \int_{t_0}^{t} \nabla_K f \, v \, d\tau = 0 \qquad (2.3.10)$$
We start with (2.3.9):
$$\int_{t_0}^{T} 2x^T(L^T \otimes I)K^T K(L \otimes I)h \, dt + \int_{t_0}^{T} d\lambda^T \Big[h - \int_{t_0}^{t} (I \otimes A + (I \otimes B)K(L \otimes I))h \, d\tau\Big] + \mu^T (D \otimes I)h(T) = 0.$$
Without loss of generality we may take $\lambda(T) = 0$; integrating the third term by parts, we get:
$$\int_{t_0}^{T} 2x^T(L^T \otimes I)K^T K(L \otimes I)h \, dt + \int_{t_0}^{T} d\lambda^T h + \int_{t_0}^{T} \lambda^T (I \otimes A + (I \otimes B)K(L \otimes I))h \, dt + \mu^T (D \otimes I)h(T) = 0.$$
$\lambda$ cannot have jumps on $[t_0, T)$, since otherwise there exists $h$ that makes the second term larger than the rest. To account for the last term there must be a jump at $T$; hence
$$\lambda(T^-) = (D^T \otimes I)\mu.$$
Integrating the second term by parts, we get:
$$\int_{t_0}^{T} \big[2x^T(L^T \otimes I)K^T K(L \otimes I)h - \lambda^T \dot{h} + \lambda^T (I \otimes A + (I \otimes B)K(L \otimes I))h\big] dt = 0.$$
Since $h$ is arbitrary, the lemma of Section 2.1.2 gives:
$$\dot{\lambda} = -(I \otimes A^T + (L^T \otimes I)K^T(I \otimes B^T))\lambda - 2(L^T \otimes I)K^T K(L \otimes I)x, \qquad \lambda(T) = (D^T \otimes I)\mu.$$
Equation (2.3.10), on the other hand, gives:
$$\int_{t_0}^{T} 2x^T(L^T \otimes I)K^T v(L \otimes I)x \, dt - \int_{t_0}^{T} d\lambda^T \int_{t_0}^{t} (I \otimes B)v(L \otimes I)x \, d\tau = 0.$$
Integrating the second term by parts, we get:
$$\int_{t_0}^{T} \big[2x^T(L^T \otimes I)K^T v(L \otimes I)x + \lambda^T (I \otimes B)v(L \otimes I)x\big] dt = 0.$$
Since this must be satisfied for all $v$, we must have:
$$(I \otimes B^T)\lambda = -2K(L \otimes I)x.$$
Hence we get the three Lagrange multiplier conditions (2.3.3), (2.3.4) and (2.3.5).

To get necessary and sufficient optimality conditions we must add the three problem constraints, thus proving the lemma:
$$\dot{x} = (I \otimes A + (I \otimes B)K(L \otimes I))x, \qquad x(t_0) = x^0, \qquad (D \otimes I)x(T) = 0.$$

Lemma. $K$ solves Problem 2.5 for some connected graph $G$ with Laplacian matrix $L$ if and only if there is a $D$ with $\ker(D) = \mathrm{span}\{\mathbf{1}\}$ such that $L = D^T(DD^T)^{-1}D$, and we have:
$$K_1 = K_2 = \cdots = K_l = -B^T e^{A^T(T-t)} W^{-1}(t, T) e^{A(T-t)}.$$

Proof. Starting from (2.3.3) and (2.3.5) we get $\dot{\lambda} = -(I \otimes A^T)\lambda$, hence:
$$\lambda = (I \otimes e^{A^T(T-t)})\lambda(T).$$
From (2.3.4) we have:
$$\lambda = (D^T \otimes e^{A^T(T-t)})\mu.$$
Back in (2.3.5) we get the following identity:
$$K(L \otimes I)x = -\tfrac{1}{2}(D^T \otimes B^T e^{A^T(T-t)})\mu,$$
and (2.3.6) becomes:
$$\dot{x} - (I \otimes A)x = -\tfrac{1}{2}(D^T \otimes BB^T e^{A^T(T-t)})\mu.$$
Multiplying both sides by $I \otimes e^{A(T-t)}$ and integrating from $t_0$ to $T$:
$$x(T) - (I \otimes e^{A(T-t_0)})x(t_0) = -\tfrac{1}{2}\Big(D^T \otimes \int_{t_0}^{T} e^{A(T-t)}BB^T e^{A^T(T-t)}dt\Big)\mu.$$
We define the reachability Gramian
$$W(t_0, T) = \int_{t_0}^{T} e^{A(T-t)}BB^T e^{A^T(T-t)}dt$$
to get:
$$x(T) - (I \otimes e^{A(T-t_0)})x(t_0) = -\tfrac{1}{2}(D^T \otimes W(t_0, T))\mu.$$
Multiplying by $D \otimes I$ and using (2.3.8), we have:
$$-(D \otimes e^{A(T-t_0)})x(t_0) = -\tfrac{1}{2}(DD^T \otimes W(t_0, T))\mu.$$

Solving for $\mu$ we get:
$$\mu = 2((DD^T)^{-1}D \otimes W^{-1}(t_0, T)e^{A(T-t_0)})x(t_0).$$
Then we can get:
$$u(t_0) = K(t_0)(L \otimes I)x(t_0) = -\tfrac{1}{2}(D^T \otimes B^T e^{A^T(T-t_0)})\mu = -(D^T(DD^T)^{-1}D \otimes B^T e^{A^T(T-t_0)}W^{-1}(t_0, T)e^{A(T-t_0)})x(t_0).$$
By the dynamic programming principle we must have, for all $t$:
$$u = K(L \otimes I)x = -(D^T(DD^T)^{-1}D \otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)})x = (I \otimes (-B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}))(D^T(DD^T)^{-1}D \otimes I)x,$$
and therefore:
$$K_1 = K_2 = \cdots = K_l = -B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}, \qquad L = D^T(DD^T)^{-1}D.$$

Remark. To connect this result to the linear quadratic control problem and get an interpretation of why the reachability Gramian $W$ appears in the control, we notice that (2.3.5) would be satisfied if there were a matrix $g$ such that:
$$K = -\tfrac{1}{2}(I \otimes B^T)(I \otimes g) = -\tfrac{1}{2}(I \otimes B^T g), \qquad \lambda = (I \otimes g)(L \otimes I)x = (L \otimes g)x.$$
From (2.3.3) and (2.3.5) we would get $\dot{\lambda} = -(I \otimes A^T)\lambda$, i.e.
$$(L \otimes \dot{g})x + (L \otimes g)\dot{x} = -(L \otimes A^T g)x.$$
Using $\dot{x} = (I \otimes A - \tfrac{1}{2}(L \otimes BB^T g))x$, this becomes:
$$(L \otimes (\dot{g} + gA + A^T g))x - \tfrac{1}{2}(LL \otimes gBB^T g)x = 0.$$
If $L$ is chosen as $L = D^T(DD^T)^{-1}D$, the projection associated with the complete graph, then $LL = L$, and since the last equality must hold for all $x$, $g$ must solve the following Riccati equation:
$$\dot{g} + gA + A^T g - \tfrac{1}{2}gBB^T g = 0.$$
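The remark's Riccati equation can be checked numerically in the scalar case. With $A = a$, $B = 1$ the Gramian is $W(t,T) = (e^{2a(T-t)} - 1)/(2a)$, and the Gramian-based candidate $g(t) = 2e^{2a(T-t)}/W(t,T)$ should satisfy $\dot{g} + 2ag - \tfrac{1}{2}g^2 = 0$. The test values of $a$, $T$ and $t$ below are ours; the sign convention follows our reading of the (garbled in extraction) equations above:

```python
import math

# Scalar check that g(t) = 2 exp(2a(T-t)) / W(t,T) solves the Riccati
# equation g' + g*a + a*g - (1/2) g^2 = 0 with A = a, B = 1.
a, T = 0.3, 2.0

def g(t):
    E = math.exp(2 * a * (T - t))
    W = (E - 1.0) / (2 * a)          # scalar reachability Gramian W(t, T)
    return 2.0 * E / W

t, h = 0.7, 1e-6
gdot = (g(t + h) - g(t - h)) / (2 * h)   # central finite difference
residual = gdot + 2 * a * g(t) - 0.5 * g(t) ** 2
assert abs(residual) < 1e-4
```

The residual vanishes up to finite-difference error, which is the scalar analogue of the Gramian-based $g$ solving the Riccati equation; note that $g$ blows up as $t \to T$, which is what makes finite-time (rather than asymptotic) consensus possible.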

If we take $g = 2e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)}$, then we have:
$$\dot{g} = -A^T g - gA - 2e^{A^T(T-t)}W^{-1}\dot{W}W^{-1}e^{A(T-t)} = -A^T g - gA + 2e^{A^T(T-t)}W^{-1}e^{A(T-t)}BB^T e^{A^T(T-t)}W^{-1}e^{A(T-t)} = -A^T g - gA + \tfrac{1}{2}gBB^T g,$$
so $g$ solves the equation, and the optimal control is:
$$u = -(L \otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)})x.$$
We show next that this case is indeed the only one to consider.

Lemma. Let $D_1$ and $D_2$ be such that $\ker(D_1) = \ker(D_2) = \mathrm{span}\{\mathbf{1}\}$. Then:
$$D_1^T(D_1D_1^T)^{-1}D_1 = D_2^T(D_2D_2^T)^{-1}D_2.$$

Proof. Denote $e = \mathbf{1} = (1, 1, \dots, 1)^T$. Since we have
$$D^T(DD^T)^{-1}DD^T(DD^T)^{-1}D = D^T(DD^T)^{-1}(DD^T)(DD^T)^{-1}D = D^T(DD^T)^{-1}D,$$
$P(X) = X^2 - X = X(X - 1)$ is an annihilating polynomial of $D^T(DD^T)^{-1}D$, so $D^T(DD^T)^{-1}D$ has only $0$ and $1$ as eigenvalues. Since $\ker(D) = \mathrm{span}\{e\}$ and $D$ is an $(l-1) \times l$ matrix, $D$ has independent rows and $D^T(DD^T)^{-1}D$ is an $l \times l$ matrix with one eigenvalue $0$ and $l-1$ eigenvalues equal to $1$. $D^T(DD^T)^{-1}D$ is also the projection matrix onto $V = \{e\}^{\perp}$, the orthogonal complement of the subspace generated by $e$. Let $e, v_2, v_3, \dots, v_l$ be eigenvectors of $D_1^T(D_1D_1^T)^{-1}D_1$ and $e, w_2, w_3, \dots, w_l$ eigenvectors of $D_2^T(D_2D_2^T)^{-1}D_2$, where $e$ corresponds to the zero eigenvalue. Let $x \in \mathbb{R}^l$ with
$$x = a_v e + \sum_{i=2}^{l} \alpha_i v_i = a_w e + \sum_{i=2}^{l} \beta_i w_i.$$
Notice that $e$ is orthogonal to $v_i$ and $w_i$ for all $i$; indeed, for all $i$ we have:
$$(e, v_i) = (e, D_1^T(D_1D_1^T)^{-1}D_1 v_i) = (D_1^T(D_1D_1^T)^{-1}D_1 e, v_i) = 0,$$
and the same holds for $w_i$. Thus $(x, e) = a_v(e, e) = a_w(e, e)$ and $a_v = a_w = a$. Then:
$$(D_1^T(D_1D_1^T)^{-1}D_1 - D_2^T(D_2D_2^T)^{-1}D_2)x = \sum_{i=2}^{l} \alpha_i v_i - \sum_{i=2}^{l} \beta_i w_i = (x - ae) - (x - ae) = 0,$$
hence $D_1^T(D_1D_1^T)^{-1}D_1 = D_2^T(D_2D_2^T)^{-1}D_2$.
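The lemma says the control does not depend on the particular $D$ chosen, only on $\ker(D) = \mathrm{span}\{\mathbf{1}\}$: every such $D$ produces the same projection $I - \tfrac{1}{l}\mathbf{1}\mathbf{1}^T$. A numerical check for $l = 3$ with two different choices of $D$ (both matrices are ours):

```python
def proj(D):
    """P = D^T (D D^T)^{-1} D for a 2x3 matrix D with independent rows."""
    n, m = len(D[0]), len(D)
    G = [[sum(D[i][k] * D[j][k] for k in range(n)) for j in range(m)]
         for i in range(m)]                      # D D^T (2x2)
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Gi = [[G[1][1] / det, -G[0][1] / det],
          [-G[1][0] / det, G[0][0] / det]]       # (D D^T)^{-1}
    return [[sum(D[i][r] * Gi[i][c] * D[c][s]
                 for i in range(m) for c in range(m))
             for s in range(n)] for r in range(n)]

P1 = proj([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])  # ker = span{1}
P2 = proj([[1.0, 0.0, -1.0], [1.0, -2.0, 1.0]])  # ker = span{1}, different D
target = [[2/3, -1/3, -1/3], [-1/3, 2/3, -1/3], [-1/3, -1/3, 2/3]]

assert all(abs(P1[r][s] - P2[r][s]) < 1e-12 for r in range(3) for s in range(3))
assert all(abs(P1[r][s] - target[r][s]) < 1e-12 for r in range(3) for s in range(3))
```

Both choices return $I - \tfrac{1}{3}\mathbf{1}\mathbf{1}^T$, the projection onto $\{\mathbf{1}\}^{\perp}$, which is the $L$ the theorem below associates with the complete graph.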

Now we summarize the results of the last three lemmas in a theorem.

Theorem. The finite-time linear optimal consensus problem has a solution for some connected graph $G$ if and only if $G$ is complete. In this case the consensus-reaching optimal control is given by:
$$u = -(L \otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)})x.$$
Proof. Follows directly from the last three lemmas.

Corollary. There exists a consensus-reaching linear feedback control in finite time for a multi-agent system if and only if the communication graph is complete.
Proof. Follows directly from the last theorem.

2.4 Extensions

We provide here two extensions that highlight other properties of the solution presented previously. For the first extension we consider the same problem as before but without the graph topology constraint, and we derive an optimal feedback control for the complete graph as was done in [8]. In the absence of the neighborhood constraints we can easily derive a solution using the projection theorem.

Problem 2.6. Our consensus-reaching optimal control problem for a topology-free system is modeled by the following set of equations:
$$\text{minimize } \sum_{i=1}^{l} \int_{t_0}^{T} \|u_i\|^2 dt$$
subject to:
$$\dot{x}_i = Ax_i + Bu_i, \quad i \in 1:l$$
$$x_1(T) = x_2(T) = \cdots = x_l(T)$$
$$x_i(t_0) = x_i^0, \quad i \in 1:l$$

The next theorem shows that for the complete graph case the linear time-varying control obtained previously is optimal even among nonlinear controls.

Theorem. For any finite time $T$ and any initial condition $x^0$, the topology-free optimal consensus problem has the following solution:
$$u(t) = -(L \otimes B^T e^{A^T(T-t)}W^{-1}(t_0, T)e^{A(T-t_0)})x^0.$$
This can be rewritten as the following optimal feedback:
$$u(x, t) = -(L \otimes B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)})x,$$
or, for each agent:
$$u_i(x, t) = \frac{1}{N} B^T e^{A^T(T-t)}W^{-1}(t, T)e^{A(T-t)} \sum_{k=1}^{N} (x_k - x_i).$$
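The theorem's feedback can be exercised numerically for scalar agents. With $A = a$ and $B = 1$ the per-agent gain reduces to $G(t) = e^{2a(T-t)}/W(t,T) = 2a e^{2a(T-t)}/(e^{2a(T-t)} - 1)$, and the feedback becomes $u_i = -G(t)(x_i - \bar{x})$, with the stabilizing sign convention of our reconstruction. The values of $a$, $T$, the initial opinions and the step size are arbitrary test data; we stop the forward-Euler integration just before $T$ because $G(t)$ blows up there:

```python
import math

# Euler simulation of x_i' = a x_i - G(t) (x_i - mean(x)) up to T - 0.01.
a, T = 0.2, 1.0
x = [0.0, 1.0, 3.0]
dt, t = 1e-4, 0.0
while t < T - 0.01:
    E = math.exp(2 * a * (T - t))
    G = 2 * a * E / (E - 1.0)          # scalar W^{-1}-based gain
    m = sum(x) / len(x)
    x = [xi + dt * (a * xi - G * (xi - m)) for xi in x]
    t += dt

spread = max(x) - min(x)
assert spread < 0.1                    # near-consensus just before T
```

The disagreement shrinks from 3 to a few hundredths just before $T$ (it reaches exactly zero at $T$ in the continuous-time limit), while the average state simply follows the uncontrolled drift $\dot{\bar{x}} = a\bar{x}$, since the control terms sum to zero.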

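The proof below reduces the constraints to a linear variety and applies the projection theorem; in finite dimensions the same argument gives the minimum norm solution of $Au = b$ as $u^* = A^T(AA^T)^{-1}b$. The matrix $A$ and vector $b$ in this sketch are toy data.

```python
# Minimum-norm solution of A u = b via the projection theorem:
# u* = A^T (A A^T)^{-1} b lies in the row space of A, hence is
# orthogonal to ker(A) and has minimal Euclidean norm among all
# solutions of A u = b.  A and b are toy data.

A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
b = [1.0, 2.0]

# A A^T and its 2x2 inverse
AAt = [[sum(ai * aj for ai, aj in zip(r1, r2)) for r2 in A] for r1 in A]
det = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]
inv = [[AAt[1][1] / det, -AAt[0][1] / det],
       [-AAt[1][0] / det, AAt[0][0] / det]]

mu = [sum(inv[i][j] * b[j] for j in range(2)) for i in range(2)]      # (A A^T)^{-1} b
u_star = [sum(A[i][k] * mu[i] for i in range(2)) for k in range(3)]   # A^T mu

# u* solves A u = b and is orthogonal to the kernel direction (1, -1, 1)
residual = [sum(A[i][k] * u_star[k] for k in range(3)) - b[i] for i in range(2)]
kernel_dot = u_star[0] - u_star[1] + u_star[2]
print(u_star, residual, kernel_dot)
```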
Proof. In integral form the topology free optimal consensus problem is stated as follows:

$$\text{minimize} \quad \int_{t_0}^T \sum_{i=1}^l \|u_i\|^2\,dt$$
$$\text{subject to:} \quad \int_{t_0}^T \exp(A(T-t))Bu_i\,dt = x_i(T) - \exp(A(T-t_0))x_i^0$$
$$(D \otimes I)x(T) = 0$$
$$x(t_0) = x^0$$

The first constraint can be rewritten as:

$$\int_{t_0}^T \exp(A(T-t))Bu_i\,dt + \exp(A(T-t_0))x_i^0 = x_i(T).$$

Using the second constraint we then get:

$$\int_{t_0}^T (D \otimes \exp(A(T-t))B)u\,dt + (D \otimes \exp(A(T-t_0)))x^0 = (D \otimes I)x(T) = 0,$$

so

$$-(D \otimes \exp(A(T-t_0)))x^0 = \int_{t_0}^T (D \otimes \exp(A(T-t))B)u\,dt.$$

This problem falls into the category of minimum norm problems with respect to a linear variety; using the projection theorem we get the existence of $\mu$ such that:

$$u = (D^T \otimes B^T\exp(A^T(T-t)))\mu.$$

This amounts to:

$$-(D \otimes \exp(A(T-t_0)))x^0 = \int_{t_0}^T (DD^T \otimes \exp(A(T-t))BB^T\exp(A^T(T-t)))\mu\,dt = (DD^T \otimes W(t_0,T))\mu,$$

so

$$\mu = -\big((DD^T)^{-1}D \otimes W^{-1}(t_0,T)\exp(A(T-t_0))\big)x^0.$$

Substituting into the formula for $u$ we get:

$$u(t) = -\big(D^T(DD^T)^{-1}D \otimes B^T\exp(A^T(T-t))W^{-1}(t_0,T)\exp(A(T-t_0))\big)x^0$$
$$u(t) = -\big(L \otimes B^T\exp(A^T(T-t))W^{-1}(t_0,T)\exp(A(T-t_0))\big)x^0.$$

The dynamic programming principle gives:

$$u(x,t) = -\big(L \otimes B^T\exp(A^T(T-t))W^{-1}(t,T)\exp(A(T-t))\big)x$$
$$u_i(x,t) = \frac{1}{N} B^T\exp(A^T(T-t))W^{-1}(t,T)\exp(A(T-t))\sum_{k=1}^N (x_k - x_i).$$

Our previous analysis shows that we don't have a linear consensus reaching control under the graph neighborhood constraint. In this next extension we will show that if one relaxes these constraints to a control depending only on the information coming from

the graph, but where each agent is allowed to use the information of every edge of the graph, as opposed to the previous case where it only uses edges that have its node as one extremity, then we still have a linear optimal control; moreover this control is still optimal for a time varying graph as long as the graph stays connected.

Let $G(t)$ be a time varying graph, $(i \to j)(t)$ a directed path from $i$ to $j$ at time $t$, for example the one corresponding to the shortest path, and $\Lambda_{(i\to j)(t)}$ the set of edges in this path. We define the path information

$$z_{(i\to j)(t)} = \sum_{(k,l)\in\Lambda_{(i\to j)(t)}} (x_l - x_k).$$

We define the consensus reaching optimal control problem in time varying connected topology as follows:

Problem 2.7. Our optimal control problem for a time varying connected topology is then modeled by the following set of equations:

$$\text{minimize} \quad \int_{t_0}^T \sum_{i=1}^l \|u_i\|^2\,dt$$
$$\text{subject to:} \quad \dot x_i = Ax_i + Bu_i \quad i \in 1:l$$
$$x_1(T) = x_2(T) = \dots = x_l(T)$$
$$x_i(t_0) = x_i^0 \quad i \in 1:l$$
$$u_i(x,t) = u_i\big((z_{(i,j)})_{(i,j)\in\Lambda(t)}, t\big)$$

Here $u_i(x,t) = u_i((z_{(i,j)})_{(i,j)\in\Lambda(t)}, t)$ means that agent $i$ uses for its feedback control $u_i$ at time $t$ the information $z_{(i,j)}$ available from all edges of the graph $G(t)$ at time $t$, i.e. $(i,j) \in \Lambda(t)$. A solution to this problem can be obtained by rewriting the previous solution in a way that takes into account the graph topology, as stated in the next theorem.

Theorem. For any finite time $T$ and any initial condition $x^0$ the time varying connected topology consensus problem admits the following optimal feedback for each agent:

$$u_i(x,t) = \frac{1}{N} B^T\exp(A^T(T-t))W^{-1}(t,T)\exp(A(T-t))\sum_{j=1}^N \sum_{(k,l)\in\Lambda_{(i\to j)(t)}} (x_l - x_k).$$

Proof. Let $M(t)$ be the set of $u(t) = [u_1(t), u_2(t), \dots, u_N(t)]$ that satisfy the constraints of Problem 2.7 and $M_{tot}$ the set of $u(t)$ that satisfy the constraints of Problem 2.6.
It is easy to see that:

$$u_i^*(x,t) = \frac{1}{N} B^T\exp(A^T(T-t))W^{-1}(t,T)\exp(A(T-t))\sum_{j=1}^N \sum_{(k,l)\in\Lambda_{(i\to j)(t)}} (x_l - x_k)$$
$$= \frac{1}{N} B^T\exp(A^T(T-t))W^{-1}(t,T)\exp(A(T-t))\sum_{k=1}^N (x_k - x_i),$$

since each inner sum telescopes to $x_j - x_i$. Hence $u^* = [u_1^*, u_2^*, \dots, u_N^*]$ solves $\min\{\int_{t_0}^T u^Tu\,dt;\ u \in M_{tot}\}$. But since $M(t) \subset M_{tot}$ we get:

$$\min\Big\{\int_{t_0}^T u^Tu\,dt;\ u \in M_{tot}\Big\} \le \min\Big\{\int_{t_0}^T u^Tu\,dt;\ u \in M(t)\Big\},$$

and $u^*$ is feasible in both problems with the same objective value, hence $u^*$ is optimal for Problem 2.7.
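The telescoping step in this argument is easy to check on a toy line graph (the states and the path construction below are illustrative, not from the thesis): summing $x_l - x_k$ along the edges of a path from $i$ to $j$ gives $x_j - x_i$, so summing the path information over all $j$ recovers the relative information $\sum_k (x_k - x_i)$.

```python
# Toy check that z_{(i->j)} = sum over path edges (k,l) of (x_l - x_k)
# telescopes to x_j - x_i on the line graph 0 - 1 - 2 - 3.
x = [0.0, 2.0, 5.0, 9.0]     # toy opinions/states
i = 0

def path_edges(i, j):
    # edges of the unique path from i to j on the line graph
    step = 1 if j >= i else -1
    return [(k, k + step) for k in range(i, j, step)]

z = {j: sum(x[l] - x[k] for (k, l) in path_edges(i, j)) for j in range(4)}
total = sum(z.values())                    # sum over all j of z_{(i->j)}
relative = sum(xk - x[i] for xk in x)      # sum_k (x_k - x_i)
print(z, total, relative)
```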

2.5 Conclusion

In this part we studied the finite time optimal consensus problem of a linear time invariant multi-agent system with graph topology constraints. The general issue in finding such a consensus reaching control, depending on relative information only and respecting the topology of the graph, is that there is no clear analytic characterization of the set of such controllers. To avoid this issue we formulated the consensus problem as an optimal control problem, which provides us with general tools from functional analysis to address such constraints.

Thus we started by defining the general framework of Hilbert spaces that we work in; this framework provides a powerful characterization of solutions of optimization problems, namely the projection theorem. Still, for our optimal consensus problem the elements of the convex set satisfying the graph topology constraints, even though that set is a linear variety, cannot be described explicitly. That is why we established a statement equivalent to the projection theorem in the case of a minimum norm problem with respect to a linear variety, namely: a solution to the problem exists if and only if it satisfies the Lagrange multiplier conditions.

After establishing this, we restricted our analysis to the set of linear, possibly time varying, solutions. In this set we noticed that the gradient of the solution, if it were to be found, would have the structure of the Laplacian of the communication graph. On the other hand, the Lagrange multiplier conditions are stated only in terms of the gradient of the solution in this case; hence upon solving them we would get a clear characterization of the general solution. These conditions translate to two integral equations that we prove to be equivalent to a system of differential equations.

Reducing the system gives us two results: on the one hand we get a general formula for the consensus reaching optimal control; on the other hand we get an algebraic condition on the graph topology that we use to determine for which graphs a consensus reaching control can be found. From there we proved that the only graph satisfying this condition is the complete graph, thus establishing that for the complete graph the problem has a unique solution, while for every other graph there is no solution linear in the state.

Bibliography

[1] Cao, Y. and Ren, W. (2009), LQR-based optimal linear consensus algorithms, in American Control Conference, IEEE.
[2] Cao, Y. and Ren, W. (2010), Optimal linear-consensus algorithms: an LQR perspective, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 40(3).
[3] Kim, H., Shim, H. and Seo, J. (2011), Output consensus of heterogeneous uncertain linear multi-agent systems, IEEE Transactions on Automatic Control 56(1).
[4] Luenberger, D. (1997), Optimization by Vector Space Methods, John Wiley and Sons, Inc.
[5] Semsar-Kazerooni, E. and Khorasani, K. (2008), Optimal consensus algorithms for cooperative team of agents subject to partial information, Automatica 44(11).
[6] Sundaram, S. and Hadjicostis, C. (2007), Finite-time distributed consensus in graphs with time-invariant topologies, American Control Conference.
[7] Thunberg, J. (2014), Consensus and Pursuit-Evasion in Nonlinear Multi-Agent Systems, PhD thesis, KTH Royal Institute of Technology.
[8] Thunberg, J. and Hu, X. (2015), Optimal output consensus for linear systems: a topology free approach, arXiv preprint.
[9] Wang, L. and Xiao, F. (2010), Finite-time consensus problems for networks of dynamic agents, IEEE Transactions on Automatic Control.
[10] Xi, J., Shi, Z. and Zhong, Y. (2012a), Output consensus analysis and design for high-order linear swarm systems: partial stability method, Automatica 48(9).
[11] Xi, J., Shi, Z. and Zhong, Y. (2012b), Output consensus for high-order linear time-invariant swarm systems, International Journal of Control 85(4).

Chapter 3

Opinion dynamics in the presence of stubborn agents

In this part we study variants of the Hegselmann and Krause opinion dynamics model in continuous time with bounded confidence. We are mainly interested in the one dimensional continuous case, although the HK opinion dynamics model can easily be extended to higher dimensional spaces, and some of our results are easily verified in this more general setting. On the other hand, we try to extend this model by introducing new agents, called stubborn, whose opinions are not influenced by the interaction. The modified models that we introduce in this section guarantee opinion consensus and steering to some common value for almost any initial distribution of the opinions, provided that the region of influence of the stubborn agent is sufficiently large.

This study is structured as follows: we start by recalling relevant results on the standard HK opinion dynamics model. We then introduce two new types of stubborn agents, the first type having an infinite radius of influence and thus the ability to influence the whole distribution, and the second having a bounded influence, so that its action on the distribution is local. For each of these types of agents we study different models with one or two stubborn agents and with static and dynamic behavior. Finally, we show how these agents make trajectory tracking possible as an application.

3.1 Model presentation

In this part we introduce the continuous time Hegselmann and Krause bounded confidence model for opinion dynamics in the presence of stubborn agents. The Hegselmann and Krause opinion dynamics model is a well known simple model within the field of opinion dynamics where every agent is willing to compromise and changes its opinion according to the average opinion of the agents whose opinions are sufficiently close to its own. Later on we will introduce to the model other agents, called stubborn. The stubborn agents can be viewed as agents unwilling to compromise, thus keeping a constant opinion over time, or changing their opinions according to their own agenda while disregarding the others. They can also be viewed as a control signal used to influence the behavior of the system. We study here the effects of the introduction of stubborn agents driven by both constant

and time varying control on the asymptotic behavior of the initial distribution of opinions. Then we study the possibility of controlling the whole distribution to obtain a certain behavior.

Formally speaking, recall first the initial continuous time Hegselmann and Krause model of bounded confidence. Consider $N$ agents $i = 1, 2, \dots, N$, each having its own opinion represented by its state $x_i(t)$, where $i \in 1:N$ and $t \in [0, \infty)$. The interaction between one agent and the others is described by the following average preserving dynamics: for all $i \in 1:N$ we have:

$$\dot x_i = \sum_{j \in \{j :\, |x_i - x_j| \le R\}} (x_j - x_i),$$

where $R$ is the radius of the interaction. Writing $N_i(t) = \{j : |x_i(t) - x_j(t)| \le R\}$, we get in integral form:

$$x_i(t) = x_i^0 + \int_0^t \sum_{j \in N_i(s)} (x_j - x_i)\,ds.$$

Numerical simulations such as Figure 3.1 show that the system converges to clusters inside which all agents share a common opinion. Different clusters lie at a distance of at least $R$ from each other, and often approximately $2R$, referred to as the 2R conjecture in [11]. This model has been much studied due to its simple formulation and due to the peculiar behaviors that it exhibits. For most of our simulations we have used variants of the following Matlab code.

Matlab Code

clear all, close all, clc

tend = 1;             % simulation end time
tspan = [0 tend];     % simulation time interval
L = 8;                % range of opinions
d = .9;               % radius of interaction

[t, x] = normal_HK(tspan, d, L, tend);
plot(t, x)            % plot the results

function [t, x] = normal_HK(tspan, d, L, tend)
x0 = 0:.1:L;          % equally spaced opinions
[t, x] = ode45(@normal_HK_rhs, tspan, x0, [], d, L, tend);

function x = normal_HK_rhs(t, x, dummy, d, L, tend)
y = x;                % copy of the opinions
ly = length(y);       % number of opinions
for i = 1:ly
    z = y - kron(ones(ly, 1), y(i));  % relative differences
    l = find(abs(z) < d);             % find the neighbors
    n = length(l);                    % number of neighbors
    if n == 0
        x(i) = 0;                     % update if no neighbor
    else
        x(i) = sum(z(l));             % update with neighbors
    end
end

The uncontrolled dynamics of the standard HK model is governed by local interactions that lead to the formation of clusters as an asymptotic behavior. From the mathematical point of view, clusters are a stable configuration for the system; one effect of adding stubborn agents will be to influence the position of these clusters and possibly to steer all agents to a unique opinion, reaching a consensus. The following definitions will be used throughout this part to describe the asymptotic behavior of the system.

Definition 3. Let $x(t)$ be a solution of the HK dynamics of $N$ agents. We have the following definitions:

1. $\bar x$ is a stable equilibrium of the system $x(t)$ governed by the HK dynamics if for all $i \neq j$ either $\bar x_i = \bar x_j$ or $|\bar x_i - \bar x_j| > R$.
2. $F$ is the set of possible equilibria if $F$ is a subset of $\mathbb{R}^N$ and for all $y \in \mathbb{R}^N$, $y \in F$ implies $y$ is a stable equilibrium.
3. We call opinion or stable opinion $\bar x_i = \lim_{t\to\infty} x_i(t)$ if the limit exists.
4. We call a cluster a set of agents sharing the same opinion.
5. A configuration $\bar x$ is a consensus if $\bar x$ is a stable equilibrium and $\bar x_1 = \bar x_2 = \dots = \bar x_N$.

The standard HK model will serve as a benchmark for studying the new models introducing stubborn agents, and thus it is interesting to recall some of its properties. The average preserving property of the model can be seen by computing the mean:

$$\bar x = \frac{1}{N}\sum_{i=1}^N x_i = \frac{1}{N}\sum_{i=1}^N x_i^0 + \frac{1}{N}\int_0^t \sum_{i=1}^N \sum_{j \in N_i(s)} (x_j - x_i)\,ds = \bar x^0,$$

since $\sum_{i=1}^N \sum_{j\in N_i(t)} (x_j - x_i) = 0$, as $j \in N_i$ if and only if $i \in N_j$. Hence the average preserving property of this continuous time opinion dynamics model. The variance of this model, on the other hand, as shown in Figure 3.2, can be shown to be a decreasing function of

Figure 3.1: Simulation of the standard HK model with equally spaced agents with an inter-distance of 0.1, a radius of interaction of 0.9; the range of the distribution is 8 and the time of simulation is 1. Asymptotically we observe a clustering phenomenon; the inter-cluster distance is roughly 2R.

time. Indeed:

$$\frac{d\,\mathrm{Var}(x)}{dt} = \frac{1}{N}\frac{d}{dt}\sum_{i=1}^N (x_i - \bar x)^2 = \frac{2}{N}\sum_{i=1}^N \dot x_i(x_i - \bar x) = \frac{2}{N}\sum_{i=1}^N \dot x_i x_i - 2\dot{\bar x}\bar x = \frac{2}{N}\sum_{i=1}^N \dot x_i x_i,$$

and

$$2\sum_{i=1}^N \dot x_i x_i = \sum_i\sum_{j\in N_i} x_i(x_j - x_i) + \sum_j\sum_{i\in N_j} x_j(x_i - x_j) = -\sum_i\sum_{j\in N_i}(x_i - x_j)^2 \le 0.$$

Let $F$ be the subset of $\mathbb{R}^N$ such that if $\bar x \in F$ then for all $i \neq j$ either $\bar x_i = \bar x_j$ or $|\bar x_i - \bar x_j| > R$. Then if $x(t) \in F$, $\mathrm{Var}(x)$ is stationary; otherwise it is decreasing. The following theorem summarizes some of the properties of the standard HK model; the proof can be found in [8].
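These two facts (constant mean, non increasing variance) are easy to confirm numerically; the forward Euler sketch below uses illustrative parameters and is not the thesis's Matlab code.

```python
# Forward-Euler check of the standard HK model: the mean opinion is
# preserved and the variance is non-increasing.  Parameters are toy.
R, dt, steps = 0.9, 0.001, 1000
x = [0.1 * i for i in range(30)]           # equally spaced opinions

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

mean0, variances = mean(x), [var(x)]
for _ in range(steps):
    # xdot_i = sum over neighbors j (|x_j - x_i| <= R) of (x_j - x_i)
    rhs = [sum(xj - xi for xj in x if abs(xj - xi) <= R) for xi in x]
    x = [xi + dt * ri for xi, ri in zip(x, rhs)]
    variances.append(var(x))

drift = abs(mean(x) - mean0)
decreasing = all(b <= a + 1e-9 for a, b in zip(variances, variances[1:]))
print(drift, decreasing)
```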

Figure 3.2: Variance of the distribution in the standard HK model with equally spaced agents with an inter-distance of 0.3, a radius of interaction of 0.9; the range of the distribution is 6 and the time of simulation is 2. The variance is a non increasing function of the simulation time that converges to a value of approximately 2.

Theorem. Let $x(t)$ be a solution to the standard HK opinion dynamics model. Then we have the following properties:

1. The order between the agents is preserved.
2. The opinion of the first agent is always non decreasing and that of the last agent is non increasing.
3. If at some point in time the distance between the opinions of two consecutive agents is larger than $R$, it remains so forever.
4. The average opinion is preserved and the variance is monotonically non increasing, converging to a constant.
5. $x(t)$ converges to an element $\bar x \in F$.

Definition 4. We say that a function $a(r)$ is an influence function if $a(r)$ is non increasing, non negative and bounded by 1 on $[0, \infty)$ with $\lim_{r\to\infty} a(r) = 0$.

We modify the model by the introduction of new agents, called stubborn, indexed $i = 0$ or $i = N+1$ or both, with initial positions $x_0^0$ and $x_{N+1}^0$ and with the corresponding controls $u_0(t)$ and $u_{N+1}(t)$. These agents can influence the rest of the distribution as follows:

$$\dot x_0 = u_0$$
$$\dot x_{N+1} = u_{N+1}$$
$$\dot x_i = \sum_{j\in N_i} (x_j - x_i) + a(r_{0,i})(x_0 - x_i) + b(r_{N+1,i})(x_{N+1} - x_i) \quad \text{for all } i \in 1:N,$$

where $r_{0,i} = |x_0 - x_i|$ and $r_{N+1,i} = |x_{N+1} - x_i|$ for all $i \in 1:N$, and $a(r)$ is a non negative, non increasing continuous function. In this study we will investigate the effect of having additional agents with various choices of the control functions $u_0$ and $u_{N+1}$ and the influence functions $a(r)$ and $b(r)$.

3.2 The behavior with a positive influence

Definition 5. We say that an influence function $a(r)$ is positive if $a(r)$ is positive, non increasing and continuous with $a(0) = 1$ and $\lim_{r\to\infty} a(r) = 0$.

The positive influence model is a model in which the influence function is positive. An example of such a function, and the one we shall use in our simulations and some parts of this study, is the following exponential influence function:

$$a_1(r) = \exp(-r).$$

Considering the same opinion dynamics model, we start by introducing stubborn agents with positive influence.

3.2.1 The effect of one stubborn static agent

Here we introduce only one stubborn static agent $i = 0$, static in the sense that it does not change its opinion; for that we take $u_0 = 0$. The system then becomes:

Model 1.

$$\dot x_0 = 0$$
$$\dot x_i = \sum_{j\in N_i} (x_j - x_i) + a(r_{0,i})(x_0 - x_i) \quad \text{for all } i \in 1:N$$

For simplicity we take $x_0^0 = 0$ and for all the other agents $x_i^0 > 0$. For this model, numerical simulations such as the one in Figure 3.3 all show the same behavior regardless of the initial conditions or the influence function used, as long as the influence function is positive: in all cases we see convergence to one cluster having the opinion of the stubborn agent.
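A minimal forward Euler sketch of Model 1 with the exponential influence $a_1(r) = e^{-r}$ (toy parameters and initial data, not the thesis's simulation code) reproduces this behavior: every regular opinion is pulled toward the stubborn opinion $0$.

```python
# Forward-Euler sketch of Model 1: one static stubborn agent at opinion 0
# with positive influence a(r) = exp(-r).  All regular opinions (toy
# initial data) should converge toward the stubborn agent's opinion.
import math

R, dt, steps = 0.9, 0.01, 3000
x = [0.5 + 0.1 * i for i in range(10)]     # regular agents, all positive
start_max = max(x)

for _ in range(steps):
    rhs = [sum(xj - xi for xj in x if abs(xj - xi) <= R)   # HK interaction
           + math.exp(-abs(0.0 - xi)) * (0.0 - xi)         # stubborn pull
           for xi in x]
    x = [xi + dt * ri for xi, ri in zip(x, rhs)]

final_max = max(abs(xi) for xi in x)
print(start_max, final_max)
```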
This observation leads us to the following theorem, which explains this convergence phenomenon and exhibits an exponential rate for it.


More information

The Multi-Agent Rendezvous Problem - Part 1 The Synchronous Case

The Multi-Agent Rendezvous Problem - Part 1 The Synchronous Case The Multi-Agent Rendezvous Problem - Part 1 The Synchronous Case J. Lin 800 Phillips Road MS:0128-30E Webster, NY 14580-90701 jie.lin@xeroxlabs.com 585-422-4305 A. S. Morse PO Box 208267 Yale University

More information

Exponential stability of families of linear delay systems

Exponential stability of families of linear delay systems Exponential stability of families of linear delay systems F. Wirth Zentrum für Technomathematik Universität Bremen 28334 Bremen, Germany fabian@math.uni-bremen.de Keywords: Abstract Stability, delay systems,

More information

e j = Ad(f i ) 1 2a ij/a ii

e j = Ad(f i ) 1 2a ij/a ii A characterization of generalized Kac-Moody algebras. J. Algebra 174, 1073-1079 (1995). Richard E. Borcherds, D.P.M.M.S., 16 Mill Lane, Cambridge CB2 1SB, England. Generalized Kac-Moody algebras can be

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

A Distributed Newton Method for Network Utility Maximization, I: Algorithm

A Distributed Newton Method for Network Utility Maximization, I: Algorithm A Distributed Newton Method for Networ Utility Maximization, I: Algorithm Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract Most existing wors use dual decomposition and first-order

More information

INF-SUP CONDITION FOR OPERATOR EQUATIONS

INF-SUP CONDITION FOR OPERATOR EQUATIONS INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent

More information

2.2. OPERATOR ALGEBRA 19. If S is a subset of E, then the set

2.2. OPERATOR ALGEBRA 19. If S is a subset of E, then the set 2.2. OPERATOR ALGEBRA 19 2.2 Operator Algebra 2.2.1 Algebra of Operators on a Vector Space A linear operator on a vector space E is a mapping L : E E satisfying the condition u, v E, a R, L(u + v) = L(u)

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

Symmetric matrices and dot products

Symmetric matrices and dot products Symmetric matrices and dot products Proposition An n n matrix A is symmetric iff, for all x, y in R n, (Ax) y = x (Ay). Proof. If A is symmetric, then (Ax) y = x T A T y = x T Ay = x (Ay). If equality

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

Computational Aspects of Aggregation in Biological Systems

Computational Aspects of Aggregation in Biological Systems Computational Aspects of Aggregation in Biological Systems Vladik Kreinovich and Max Shpak University of Texas at El Paso, El Paso, TX 79968, USA vladik@utep.edu, mshpak@utep.edu Summary. Many biologically

More information

Relaxations and Randomized Methods for Nonconvex QCQPs

Relaxations and Randomized Methods for Nonconvex QCQPs Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be

More information

16.1 L.P. Duality Applied to the Minimax Theorem

16.1 L.P. Duality Applied to the Minimax Theorem CS787: Advanced Algorithms Scribe: David Malec and Xiaoyong Chai Lecturer: Shuchi Chawla Topic: Minimax Theorem and Semi-Definite Programming Date: October 22 2007 In this lecture, we first conclude our

More information

Formal Groups. Niki Myrto Mavraki

Formal Groups. Niki Myrto Mavraki Formal Groups Niki Myrto Mavraki Contents 1. Introduction 1 2. Some preliminaries 2 3. Formal Groups (1 dimensional) 2 4. Groups associated to formal groups 9 5. The Invariant Differential 11 6. The Formal

More information

2.152 Course Notes Contraction Analysis MIT, 2005

2.152 Course Notes Contraction Analysis MIT, 2005 2.152 Course Notes Contraction Analysis MIT, 2005 Jean-Jacques Slotine Contraction Theory ẋ = f(x, t) If Θ(x, t) such that, uniformly x, t 0, F = ( Θ + Θ f x )Θ 1 < 0 Θ(x, t) T Θ(x, t) > 0 then all solutions

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

On the exponential convergence of. the Kaczmarz algorithm

On the exponential convergence of. the Kaczmarz algorithm On the exponential convergence of the Kaczmarz algorithm Liang Dai and Thomas B. Schön Department of Information Technology, Uppsala University, arxiv:4.407v [cs.sy] 0 Mar 05 75 05 Uppsala, Sweden. E-mail:

More information

Welsh s problem on the number of bases of matroids

Welsh s problem on the number of bases of matroids Welsh s problem on the number of bases of matroids Edward S. T. Fan 1 and Tony W. H. Wong 2 1 Department of Mathematics, California Institute of Technology 2 Department of Mathematics, Kutztown University

More information

Stability, Pole Placement, Observers and Stabilization

Stability, Pole Placement, Observers and Stabilization Stability, Pole Placement, Observers and Stabilization 1 1, The Netherlands DISC Course Mathematical Models of Systems Outline 1 Stability of autonomous systems 2 The pole placement problem 3 Stabilization

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

The Multivariate Gaussian Distribution

The Multivariate Gaussian Distribution The Multivariate Gaussian Distribution Chuong B. Do October, 8 A vector-valued random variable X = T X X n is said to have a multivariate normal or Gaussian) distribution with mean µ R n and covariance

More information

Multi-Pursuer Single-Evader Differential Games with Limited Observations*

Multi-Pursuer Single-Evader Differential Games with Limited Observations* American Control Conference (ACC) Washington DC USA June 7-9 Multi-Pursuer Single- Differential Games with Limited Observations* Wei Lin Student Member IEEE Zhihua Qu Fellow IEEE and Marwan A. Simaan Life

More information

Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings

Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline

More information

Robust control and applications in economic theory

Robust control and applications in economic theory Robust control and applications in economic theory In honour of Professor Emeritus Grigoris Kalogeropoulos on the occasion of his retirement A. N. Yannacopoulos Department of Statistics AUEB 24 May 2013

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

Consensus Problems in Networks of Agents with Switching Topology and Time-Delays

Consensus Problems in Networks of Agents with Switching Topology and Time-Delays Consensus Problems in Networks of Agents with Switching Topology and Time-Delays Reza Olfati Saber Richard M. Murray Control and Dynamical Systems California Institute of Technology e-mails: {olfati,murray}@cds.caltech.edu

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the

More information

Functional Analysis Review

Functional Analysis Review Functional Analysis Review Lorenzo Rosasco slides courtesy of Andre Wibisono 9.520: Statistical Learning Theory and Applications September 9, 2013 1 2 3 4 Vector Space A vector space is a set V with binary

More information

OPERATOR THEORY ON HILBERT SPACE. Class notes. John Petrovic

OPERATOR THEORY ON HILBERT SPACE. Class notes. John Petrovic OPERATOR THEORY ON HILBERT SPACE Class notes John Petrovic Contents Chapter 1. Hilbert space 1 1.1. Definition and Properties 1 1.2. Orthogonality 3 1.3. Subspaces 7 1.4. Weak topology 9 Chapter 2. Operators

More information

FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland

FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland 4 May 2012 Because the presentation of this material

More information

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i MODULE 6 Topics: Gram-Schmidt orthogonalization process We begin by observing that if the vectors {x j } N are mutually orthogonal in an inner product space V then they are necessarily linearly independent.

More information

Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events

Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events Reaching a Consensus in a Dynamically Changing Environment - Convergence Rates, Measurement Delays and Asynchronous Events M. Cao Yale Univesity A. S. Morse Yale University B. D. O. Anderson Australia

More information

4 Derivations of the Discrete-Time Kalman Filter

4 Derivations of the Discrete-Time Kalman Filter Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof N Shimkin 4 Derivations of the Discrete-Time

More information

MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2

MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2 MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2 1 Ridge Regression Ridge regression and the Lasso are two forms of regularized

More information

Ordinary Differential Equations II

Ordinary Differential Equations II Ordinary Differential Equations II February 9 217 Linearization of an autonomous system We consider the system (1) x = f(x) near a fixed point x. As usual f C 1. Without loss of generality we assume x

More information

IN AN ALGEBRA OF OPERATORS

IN AN ALGEBRA OF OPERATORS Bull. Korean Math. Soc. 54 (2017), No. 2, pp. 443 454 https://doi.org/10.4134/bkms.b160011 pissn: 1015-8634 / eissn: 2234-3016 q-frequent HYPERCYCLICITY IN AN ALGEBRA OF OPERATORS Jaeseong Heo, Eunsang

More information

k-protected VERTICES IN BINARY SEARCH TREES

k-protected VERTICES IN BINARY SEARCH TREES k-protected VERTICES IN BINARY SEARCH TREES MIKLÓS BÓNA Abstract. We show that for every k, the probability that a randomly selected vertex of a random binary search tree on n nodes is at distance k from

More information

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that Problem 1A. Suppose that f is a continuous real function on [, 1]. Prove that lim α α + x α 1 f(x)dx = f(). Solution: This is obvious for f a constant, so by subtracting f() from both sides we can assume

More information

DLM: Decentralized Linearized Alternating Direction Method of Multipliers

DLM: Decentralized Linearized Alternating Direction Method of Multipliers 1 DLM: Decentralized Linearized Alternating Direction Method of Multipliers Qing Ling, Wei Shi, Gang Wu, and Alejandro Ribeiro Abstract This paper develops the Decentralized Linearized Alternating Direction

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Nonlinear Model Predictive Control Tools (NMPC Tools)

Nonlinear Model Predictive Control Tools (NMPC Tools) Nonlinear Model Predictive Control Tools (NMPC Tools) Rishi Amrit, James B. Rawlings April 5, 2008 1 Formulation We consider a control system composed of three parts([2]). Estimator Target calculator Regulator

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

Notes 10: Consequences of Eli Cartan s theorem.

Notes 10: Consequences of Eli Cartan s theorem. Notes 10: Consequences of Eli Cartan s theorem. Version 0.00 with misprints, The are a few obvious, but important consequences of the theorem of Eli Cartan on the maximal tori. The first one is the observation

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Available online at ScienceDirect. IFAC PapersOnLine 50-1 (2017)

Available online at  ScienceDirect. IFAC PapersOnLine 50-1 (2017) Available online at www.sciencedirect.com ScienceDirect IFAC PapersOnLine 50-1 (2017) 607 612 Distributed Endogenous Internal Model for Modal Consensus and Formation Control S. Galeani M. Sassano Dipartimento

More information

The Multi-Path Utility Maximization Problem

The Multi-Path Utility Maximization Problem The Multi-Path Utility Maximization Problem Xiaojun Lin and Ness B. Shroff School of Electrical and Computer Engineering Purdue University, West Lafayette, IN 47906 {linx,shroff}@ecn.purdue.edu Abstract

More information

Lecture 10. Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis

Lecture 10. Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis CS-621 Theory Gems October 18, 2012 Lecture 10 Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis 1 Introduction In this lecture, we will see how one can use random walks to

More information

Calculus I Review Solutions

Calculus I Review Solutions Calculus I Review Solutions. Compare and contrast the three Value Theorems of the course. When you would typically use each. The three value theorems are the Intermediate, Mean and Extreme value theorems.

More information

Problem Description The problem we consider is stabilization of a single-input multiple-state system with simultaneous magnitude and rate saturations,

Problem Description The problem we consider is stabilization of a single-input multiple-state system with simultaneous magnitude and rate saturations, SEMI-GLOBAL RESULTS ON STABILIZATION OF LINEAR SYSTEMS WITH INPUT RATE AND MAGNITUDE SATURATIONS Trygve Lauvdal and Thor I. Fossen y Norwegian University of Science and Technology, N-7 Trondheim, NORWAY.

More information