Proceedings of the European Control Conference 2007, Kos, Greece, July 2-5, 2007. TuC8.2

Sliding mode control for coordination in multi agent systems with directed communication graphs

Antonella Ferrara, Giancarlo Ferrari Trecate and Claudio Vecchio

Abstract. This paper focuses on the control of a team of agents designated either as leaders or followers and exchanging information over a directed communication network. The goal is to drive each follower to a target position that depends on its neighbors. To this purpose, we propose a decentralized control scheme based on sliding mode techniques and study the position error propagation within the network using the notion of Input-to-State Stability (ISS). In particular, we derive sufficient conditions on the control parameters guaranteeing that the error dynamics is ISS with respect to the leaders' velocities. Moreover, we show that, under suitable assumptions, the sliding mode part of the control law is capable of steering the position errors to zero in finite time. The theoretical results are backed up by numerical simulations.

I. INTRODUCTION

Over the last few years, the problem of designing decentralized control laws for multi-agent systems has received considerable attention, motivated by applications such as formation flight for unmanned aerial vehicles [6], cooperative control for swarms of robots [11], or automated highway systems [7]. In a typical scenario, agents are modeled as dynamical systems that can sense the state of a limited number of team members, hence giving rise to incomplete communication graphs. The main goal is then to control individual agents so as to guarantee the emergence of a global coordinated behaviour. As an example, in consensus problems agents must converge asymptotically to a common state without exploiting the knowledge of a common set point [4], [9], [10], [12].
Another form of consensus is leader following, where a leader moves independently of all other agents and the control strategy must ensure convergence of the followers to the leader position. Besides the asymptotic achievement of the coordination objective, it is also important to quantify how errors propagate through the network during transients, especially when the agents' closed loop dynamics are nonlinear. In [13] and [15] it has been shown that Input-to-State Stability (ISS) provides an appealing framework for studying the team performance, since error amplification can be captured by ISS gains. Results in [14] assume nonlinear agent dynamics and acyclic communication graphs, while [13] focuses on linear agent dynamics and graphs that can be decomposed into basic interconnection structures, including cycles.

A. Ferrara, G. Ferrari Trecate and C. Vecchio are with the Department of Computer Engineering and Systems Science, University of Pavia, Via Ferrata 1, 27100 Pavia, Italy. e-mail: {antonella.ferrara, giancarlo.ferrari, claudio.vecchio}@unipv.it. G. Ferrari Trecate is with INRIA, Domaine de Voluceau, Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France.

In this paper we consider agents labeled either as leaders or followers and study the problem of driving each follower towards a time-varying target location defined in terms of the positions of its neighboring agents. Our setup encompasses various coordination objectives, such as consensus, leader following and the achievement of a stable formation. Under the assumptions that followers obey linear, fully actuated, non-identical dynamics and that the communication graph is directed and time invariant, we propose a decentralized control scheme composed of a linear term and a sliding mode term.
Differently from [5], where sliding mode control is used to make agents minimize a potential function encoding the desired cooperation goals, here the sliding mode component is introduced to speed up the convergence of the position errors to zero. In order to analyze the error propagation within the network, as in [13], [15], we derive conditions guaranteeing that the error dynamics of a follower is ISS with respect to the velocities of its neighbors. As far as the whole network is considered, we also provide sufficient conditions on the control parameters guaranteeing ISS of the collective error with respect to the leaders' velocities. However, differently from the rationale used in [15], where ISS is proved through the composition of elementary ISS interconnections, we exploit recent results on ISS for networks of systems [2] in order to analyze the collective error at once and without assuming constraints on the structure of the communication graph. Finally, we also show that when bounds on the leader velocities are known, it is possible to tune the sliding mode component of the control input so as to steer the errors to zero in finite time. The paper is organized as follows. The control problem is described in Section II. In Section III the proposed sliding mode control scheme is introduced. In Section IV we give the conditions the control parameters must satisfy in order to guarantee that the position error of a follower is ISS with respect to the position errors of its neighboring followers and to the velocities of its neighboring leaders. Sufficient conditions for extending the ISS property to the whole team of agents are given in Section V. Section VI is devoted to the derivation of conditions for zeroing the errors of the follower agents in finite time. A brief discussion on how the control parameters can be chosen in order to fulfill the assumptions of the main theorems is given in Section VII.
Simulation results are reported in Section VIII, and final comments in Section IX conclude the paper. ISBN: 978-960-89028-5-5
Fig. 1. An example of communication topology with leaders L = {3,4} and followers F = {1,2}.

II. PROBLEM STATEMENT

Consider a multi agent system composed of N_L leader agents and N_F follower agents. Followers and leaders will be indexed by elements of the sets F and L, respectively defined as

F := {1, 2, ..., N_F}   (1)
L := {N_F + 1, N_F + 2, ..., N_F + N_L}   (2)

The total number of agents is N = N_L + N_F. Leaders are autonomously driven, while followers are controlled so as to maintain a desired relative displacement with respect to their neighboring agents. In order to capture the topology of the communication network among agents, leaders and followers are arranged into a graph structure. More precisely, we consider a directed graph with nodes N = L ∪ F and arcs E ⊆ N × N. Each node v ∈ N represents an agent, and an arc e = (k, i) from agent k to agent i means that agent i has access to the state of agent k. In particular, we are not interested in modelling communication channels among leaders and therefore assume that E ⊆ N × F. The set of agents neighboring agent i ∈ F is defined as N_i = {j : (j, i) ∈ E, j ∈ N}. Analogously, the neighboring leaders and followers are elements of the sets L_i = {j : (j, i) ∈ E, j ∈ L} and F_i = {j : (j, i) ∈ E, j ∈ F}, respectively. Obviously, N_i = L_i ∪ F_i. In order to avoid isolated followers, we assume that N_i ≠ ∅, ∀i ∈ F. Analogously, in order to exclude the case of isolated leaders, we assume that ∀k ∈ L, ∃i ∈ F : (k, i) ∈ E. In the sequel, the total number of agents neighboring i ∈ F, i.e., the number of elements in the set N_i, will be denoted by µ_i. An example of communication topology verifying the previous constraints is depicted in Fig. 1. We assume that all followers obey linear dynamics, i.e.,

ẋ_i = A_i x_i + B_i u_i,   i ∈ F   (3)

where x_i ∈ IR^n, u_i ∈ IR^n, A_i ∈ IR^{n×n}, and B_i ∈ IR^{n×n} is a full rank matrix. Note that system (3) is fully actuated and the invertibility of the matrices B_i implies that the system is controllable.
We moreover denote by x_i, i ∈ L, the state of the i-th leader, although we do not assume knowledge of the leaders' dynamics. The states x_i will also be called positions and the ẋ_i velocities. We associate with each arc e = (k, i) a vector d_ki ∈ IR^n representing a relative displacement between agent k and agent i (see, for instance, Fig. 1). Then, the target position of agent i, with i ∈ F, is defined as

x_i^d = Σ_{k ∈ L_i} α_ki (x_k − d_ki) + Σ_{j ∈ F_i} α_ji (x_j − d_ji)   (4)

where α_ki and α_ji are real coefficients chosen such that

Σ_{k ∈ L_i} α_ki + Σ_{j ∈ F_i} α_ji = 1,   ∀i ∈ F   (5)

Note that, for an agent i ∈ F, when d_ki = 0, ∀k ∈ N_i, the position x_i^d lies in the convex hull of the positions of its neighboring agents. Non-zero displacements d_ki, k ∈ N_i, represent additional degrees of freedom that allow one to modify the shape of the neighbor-dependent convex hull containing the desired position x_i^d. An example is shown in Fig. 2. In a leaderless system (or in the case of a fixed leader) with d_ki = 0, ∀i ∈ F, ∀k ∈ N_i, the asymptotic achievement of the target positions implies convergence to a consensus state (or convergence to the leader state). If the graph of the communication network is a tree, in which the leader corresponds to the root node and all edges are directed from parent to child nodes, then the achievement of the desired positions leads to a formation with a predefined shape, since each follower has only one agent to follow, i.e., µ_i = 1, ∀i ∈ F.

Fig. 2. Multi agent system with three leaders and one follower with positions in IR^2. Panel A: Communication network and displacements. Panel B: Leader-dependent polytope containing the desired follower positions.
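The target position (4) under the normalization constraint (5) can be sketched as follows; the function and variable names are illustrative choices of this example, not taken from the paper.

```python
import numpy as np

def target_position(weights, neighbor_positions, displacements):
    """Weighted combination of displaced neighbor positions, as in eq. (4)."""
    assert abs(sum(weights) - 1.0) < 1e-12  # constraint (5)
    return sum(a * (x - d) for a, x, d in
               zip(weights, neighbor_positions, displacements))

# Two neighbors in the plane, zero displacements: the target lies in the
# convex hull of the neighbors (here, the midpoint of the segment).
x2 = np.array([0.0, 0.0])
x3 = np.array([2.0, 4.0])
xd = target_position([0.5, 0.5], [x2, x3], [np.zeros(2), np.zeros(2)])
```

With non-zero displacements the same call shifts each neighbor before averaging, reshaping the polytope that contains the target, as illustrated in Fig. 2.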
The position error of agent i ∈ F is defined as

x̃_i = x_i^d − x_i   (6)

and the error dynamics is given by

x̃̇_i = Σ_{k ∈ L_i} α_ki ẋ_k + Σ_{j ∈ F_i} α_ji ẋ_j − ẋ_i   (7)

Remark 1: By assumption, the matrices B_i are invertible and therefore it is possible to perform the change of state x̂_i = B_i^{-1} x_i and choose a control law u_i = −B_i^{-1} A_i x_i + û_i in order to render the follower dynamics a single integrator, i.e., x̂̇_i = û_i. Moreover, the state transformation is different for
each agent, and the target position in the transformed state results in

x̂_i^d = Σ_{k ∈ L_i} α_ki B_i^{-1}(x_k − d_ki) + Σ_{j ∈ F_i} α_ji B_i^{-1}(B_j x̂_j − d_ji)   (8)

Since the matrices B_i^{-1} and B_j, j ∈ F_i, appear in (8), the use of a transformed state does not lead to any substantial simplification of the control scheme discussed in Section III. For this reason, in the sequel we will use the original state x_i instead of x̂_i.

Our first goal is to design a decentralized control law, i.e., u_i({x_j}_{j ∈ N_i}), i ∈ F, guaranteeing bounded position errors as long as the leaders' velocities ẋ_k, k ∈ L, are bounded. As in [13], [14], [15], this concept will be precisely captured by the notion of input-to-state stability [8] for the multi agent system. For a signal x(t), t ≥ 0, its restriction to the time interval [t_1, t_2] will be denoted by x_[t_1,t_2] and its supremum norm by ‖x‖_∞.

Definition 1: The position error x̃_i of a follower i ∈ F is input-to-state stable (ISS) with respect to x̃_j, j ∈ F_i, and ẋ_k, k ∈ L_i, if there exist a function β_i(·,·) of class KL and functions γ_ij(·), γ_ik(·) of class K (called ISS gain functions) such that, for any initial error x̃_i(0), the solution x̃_i(t) of (7) verifies, for all t ≥ 0,

‖x̃_i(t)‖ ≤ β_i(‖x̃_i(0)‖, t) + Σ_{j ∈ F_i} γ_ij(‖x̃_j[0,t]‖_∞) + Σ_{k ∈ L_i} γ_ik(‖ẋ_k[0,t]‖_∞).   (9)

Let us introduce the collective error x̃ as

x̃ = [x̃_1^T, x̃_2^T, ..., x̃_{N_F}^T]^T   (10)

Definition 2: The collective error x̃ is ISS with respect to ẋ_k, k ∈ L, if there exist a function β(·,·) of class KL and functions γ_k(·) of class K such that

‖x̃(t)‖ ≤ β(‖x̃(0)‖, t) + Σ_{k ∈ L} γ_k(‖ẋ_k[0,t]‖_∞)   (11)

for all t ≥ 0. Note that, in the leaderless case, i.e., L = ∅, Definition 2 implies that x̃ = 0 is a globally asymptotically stable equilibrium for the error dynamics (7).

III. THE PROPOSED CONTROL SCHEME

In this section we introduce a control scheme capable of guaranteeing the ISS property for the followers' errors and for the collective error.
Moreover, when bounds on the leaders' velocities and on the followers' position errors are known, in addition to the attainment of the ISS property, the proposed control scheme is also able to steer the position errors to zero in finite time.

Let e_i = B_i^{-1} A_i x_i^d, i ∈ F. The proposed control law for agent i is

u_i = K_i x̃_i + η_i x̃_i/‖x̃_i‖ − e_i   (12)

where K_i, η_i ∈ IR^{n×n}. The control law is characterised by two parameters, i.e., K_i and η_i. The matrix K_i is the feedback gain, while η_i is the gain of the unit vector x̃_i/‖x̃_i‖. The term K_i x̃_i is a classical linear state feedback, while the term η_i x̃_i/‖x̃_i‖ introduces a sliding mode control component. In particular, the sliding mode component will be used to enforce convergence to zero of the position errors in finite time, as discussed in Section VI. The closed loop follower dynamics can be obtained by substituting (12) into (3), thus obtaining

ẋ_i = A_i x_i + B_i K_i x̃_i + B_i η_i x̃_i/‖x̃_i‖ − A_i x_i^d = φ_i x̃_i + σ_i x̃_i/‖x̃_i‖   (13)

where φ_i = −(A_i − B_i K_i) and σ_i = B_i η_i. From (13) and (7), the closed loop error dynamics results in

x̃̇_i = −φ_i x̃_i − σ_i x̃_i/‖x̃_i‖ + Σ_{k ∈ L_i} α_ki ẋ_k + Σ_{j ∈ F_i} α_ji (φ_j x̃_j + σ_j x̃_j/‖x̃_j‖)   (14)

IV. THE ISS PROPERTY FOR THE FOLLOWERS' ERROR

In this section we discuss the conditions that the control parameters K_i and η_i in (12) must satisfy in order to guarantee that the position error of each agent is ISS. In this case, we also provide closed form expressions of the ISS gain functions. In the sequel, λ_min(·) and λ_max(·) will be used to denote the minimum and maximum eigenvalue of a positive definite matrix, respectively.

Theorem 1: Assume that the closed loop dynamics of follower i ∈ F is given by (13), where K_i is chosen such that φ_i is positive definite and η_i is chosen such that σ_i is positive semidefinite. If the matrices σ_i and σ_j, j ∈ F_i, verify

λ_min(σ_i) ≥ Σ_{j ∈ F_i} α_ji λ_max(σ_j)   (15)

then the position error of the i-th follower is ISS with respect to x̃_j, j ∈ F_i, and ẋ_k, k ∈ L_i.
Moreover, for all θ ∈ (0,1), the functions

γ_ij(r) = (µ_i α_ji/θ) (λ_max(φ_j)/λ_min(φ_i)) r,   j ∈ F_i   (16)
γ_ik(r) = (µ_i α_ki/θ) (1/λ_min(φ_i)) r,   k ∈ L_i   (17)

provide the ISS gains appearing in (9).

Proof: Consider the following candidate Lyapunov function for the error dynamics (14)

V_i(x̃_i) = (1/2) x̃_i^T x̃_i.   (18)
Then,

V̇_i(x̃_i) = x̃_i^T x̃̇_i = −x̃_i^T φ_i x̃_i − x̃_i^T σ_i x̃_i/‖x̃_i‖ + x̃_i^T Σ_{k ∈ L_i} α_ki ẋ_k + x̃_i^T Σ_{j ∈ F_i} α_ji σ_j x̃_j/‖x̃_j‖ + x̃_i^T Σ_{j ∈ F_i} α_ji φ_j x̃_j   (19)

If σ_i verifies inequality (15), one has

−x̃_i^T σ_i x̃_i/‖x̃_i‖ + x̃_i^T Σ_{j ∈ F_i} α_ji σ_j x̃_j/‖x̃_j‖ ≤ −λ_min(σ_i)‖x̃_i‖ + Σ_{j ∈ F_i} α_ji λ_max(σ_j)‖x̃_i‖ = −(λ_min(σ_i) − Σ_{j ∈ F_i} α_ji λ_max(σ_j))‖x̃_i‖ ≤ 0   (20)

Therefore, if (15) holds, it follows that

V̇_i(x̃_i) ≤ −x̃_i^T φ_i x̃_i + x̃_i^T Σ_{k ∈ L_i} α_ki ẋ_k + x̃_i^T Σ_{j ∈ F_i} α_ji φ_j x̃_j ≤ −λ_min(φ_i)‖x̃_i‖² + Σ_{k ∈ L_i} α_ki ‖ẋ_k‖‖x̃_i‖ + Σ_{j ∈ F_i} α_ji λ_max(φ_j)‖x̃_j‖‖x̃_i‖   (21)

From (21), it turns out that for all x̃_i such that

‖x̃_i‖ ≥ µ_i max_{j ∈ F_i, k ∈ L_i} { α_ki‖ẋ_k‖/(θ λ_min(φ_i)) ; α_ji λ_max(φ_j)‖x̃_j‖/(θ λ_min(φ_i)) }   (22)

where θ ∈ (0,1), the following inequality holds

V̇_i(x̃_i) ≤ −(1 − θ) λ_min(φ_i) ‖x̃_i‖²   (23)

Thus, by applying Theorem 10.4.1 in [8], from (22) and (23) it results that V_i(x̃_i) is an ISS Lyapunov function for (14), i.e., there exists β_i(·,·) of class KL such that

‖x̃_i(t)‖ ≤ β_i(‖x̃_i(0)‖, t) + Σ_{j ∈ F_i} γ_ij(‖x̃_j[0,t]‖_∞) + Σ_{k ∈ L_i} γ_ik(‖ẋ_k[0,t]‖_∞)   (24)

for all t ≥ 0, where the gain functions γ_ij(·) and γ_ik(·) are defined by (16) and (17).

V. ISS PROPERTY OF THE COLLECTIVE ERROR

It is known that, under suitable assumptions, the interconnection of ISS systems is ISS as well. In particular, if two ISS systems are arranged in a feedback loop, one can apply the small gain theorem [8], which states that if the composition of the gain functions γ_1(·), γ_2(·) of the ISS subsystems is small enough, then the whole system is ISS. In [2], the small gain theorem has been generalized to arbitrary interconnections of ISS systems. This is precisely the tool we will use for establishing the ISS property of the collective error (10) under the control law (12) and for general network topologies. In the sequel, ρ(A) will denote the spectral radius of a given matrix A. The results obtained in this section are mainly based on Corollary 7 in [2], which is reported here for the reader's convenience:

Corollary 1: [2] Consider n interconnected systems

ẋ_1 = f_1(x_1, ..., x_n, u)
...
ẋ_n = f_n(x_1, ..., x_n, u)   (25)

where x_i ∈ IR^{N_i}, u ∈ IR^L.
Assume that each subsystem i is ISS, i.e., there exist a function β_i(·,·) of class KL and functions γ_ij(·), γ(·) of class K such that the state x_i(t), with initial condition x_i(0), satisfies

‖x_i(t)‖ ≤ β_i(‖x_i(0)‖, t) + Σ_{j=1}^{n} γ_ij(‖x_j[0,t]‖_∞) + γ(‖u[0,t]‖_∞)   (26)

for all t ≥ 0. Introduce the gain matrix Γ : IR_+^n → IR_+^n defined as

Γ(s_1, ..., s_n)^T = ( Σ_{j=1}^{n} γ_1j(s_j), ..., Σ_{j=1}^{n} γ_nj(s_j) )^T   (27)

where s = (s_1, ..., s_n)^T ∈ IR_+^n. If the gain matrix Γ is a linear operator, i.e., Γ(s) = Γs, and its spectral radius fulfills

ρ(Γ) < 1   (28)

then system (25) is ISS.

We are now in a position to state the main result of this section.

Theorem 2: Assume that all followers verify the assumptions of Theorem 1. If, in addition, for all followers i ∈ F, the scalars α_ji and α_ki in (4) and K_i in (12) are chosen such that

ρ(Γ_1) < 1   (29)

where Γ_1 is the matrix with elements

(Γ_1)_ij = 0 if j ∉ F_i;   (Γ_1)_ij = µ_i α_ji λ_max(φ_j)/λ_min(φ_i) if j ∈ F_i   (30)

then the collective error is ISS with respect to ẋ_k, k ∈ L.

Proof: From Theorem 1, we have that, ∀i ∈ F and ∀t ≥ 0,

‖x̃_i(t)‖ ≤ β_i(‖x̃_i(0)‖, t) + Σ_{j ∈ F} γ_ij(‖x̃_j[0,t]‖_∞) + Σ_{k ∈ L} γ_ik(‖ẋ_k[0,t]‖_∞)   (31)

where, for notational simplicity, we have set

γ_ij(r) = 0 if j ∉ F_i,   γ_ik(r) = 0 if k ∉ L_i

The collective error x̃, defined as in (10), can be interpreted as the interconnection of the N_F ISS systems with states x̃_i, i ∈ F. The gain functions (16) of the position error of an agent i ∈ F are linear functions. Hence, the gain matrix Γ(s) is a linear operator and (27) can be written as

Γ(s) = Γs,   Γ = [ 0, γ_12, ..., γ_1N_F ; γ_21, 0, ..., γ_2N_F ; ... ; γ_N_F1, γ_N_F2, ..., 0 ]   (32)
where the expression of γ_ij is given in (16). The gain matrix (32) can be rewritten as

Γ = (1/θ) Γ_1   (33)

where θ ∈ (0,1). Thus, since ρ(Γ) = ρ(Γ_1)/θ, if (29) holds then there exists θ ∈ (0,1) such that ρ(Γ) < 1. The result follows from the application of Corollary 1.

VI. CONVERGENCE TO ZERO IN FINITE TIME OF THE POSITION ERROR

Under the assumptions of Theorem 2, the collective error is ISS with respect to ẋ_k, k ∈ L. Therefore, from (11) one has that if the leaders' velocities are bounded, then x̃ is also bounded. As a further result, in this section we show that the proposed control law is also able to steer the position errors of all followers to zero in finite time.

Theorem 3: Under the assumptions of Theorem 2, if all matrices η_i, i ∈ F, are chosen such that

λ_min(σ_i − Ω_i) ≥ ‖ −φ_i x̃_i + Σ_{k ∈ L_i} α_ki ẋ_k + Σ_{j ∈ F_i} α_ji (φ_j x̃_j + σ_j x̃_j/‖x̃_j‖) ‖   (34)

for some Ω_i = diag(ω_1, ..., ω_n) > 0, and

σ_i = B_i η_i = diag(ε_1, ..., ε_n) > 0   (35)

then the position errors are steered to zero in finite time.

Proof: On the basis of the bounds on ẋ_k, k ∈ L, and on x̃, the matrices η_i are chosen such that (34) is satisfied. The derivative of the Lyapunov function V_i(x̃_i) in (18) then satisfies

V̇_i(x̃_i) = −x̃_i^T φ_i x̃_i − x̃_i^T σ_i x̃_i/‖x̃_i‖ + x̃_i^T Σ_{k ∈ L_i} α_ki ẋ_k + x̃_i^T Σ_{j ∈ F_i} α_ji σ_j x̃_j/‖x̃_j‖ + x̃_i^T Σ_{j ∈ F_i} α_ji φ_j x̃_j ≤ −λ_min(Ω_i)‖x̃_i‖   (36)

Equation (36) can be rewritten as

V̇_i(x̃_i) ≤ −λ_min(Ω_i) √(2V_i(x̃_i))   (37)

By integrating (37), the time taken to reach x̃_i = 0, denoted by t_si, satisfies

t_si ≤ √(2V_i(x̃_i(0)))/λ_min(Ω_i)   (38)

and this implies that x̃_i is steered to zero in finite time.

Relying on Theorem 3, the quantity x̃_i can be regarded as a sliding variable S_i [3]. If (34) is verified, it turns out that

S_i^T Ṡ_i ≤ −λ_min(Ω_i)‖S_i‖   (39)

Equation (39) is the so-called reachability condition [3], implying that the proposed control law (12) enforces a sliding mode on the sliding manifold S_i = 0, i ∈ F, in finite time. This means that, after a finite time interval, the position errors are steered to zero, i.e., x̃_i = 0, ∀i ∈ F.

VII.
DISCUSSION ON THE CONTROL SYNTHESIS PROCEDURE

In this section we discuss how to choose the control parameters in order to verify all the assumptions of Theorems 1, 2 and 3.

As for Theorem 1, one has that:
- If K_i is chosen as K_i = B_i^{-1}(A_i + Λ_i), where Λ_i is a positive definite matrix, then it follows that φ_i = Λ_i is positive definite.
- If η_i is chosen as B_i^{-1} η̂_i, where η̂_i is a positive semidefinite matrix, then σ_i = B_i η_i = η̂_i is positive semidefinite.
- The inequality (15) can be fulfilled by choosing

σ_i = σ̂ I,  σ̂ ≥ 0,  ∀i ∈ F   (40)

Indeed, from (40) we have that (15) leads to the inequality

σ̂ ≥ σ̂ Σ_{j ∈ F_i} α_ji,  ∀i ∈ F   (41)

which, in view of (5), is always verified.

As for Theorem 2, the key difficulty is that the computation of matrices K_i verifying (29) for given scalars α_ji is a nonconvex optimization problem. However, if the gains φ_i = −(A_i − B_i K_i) are fixed, the problem of finding scalars α_ji that verify (29) is much easier, as shown in the sequel. This amounts to revising the control specifications provided by (4) for enforcing ISS of the collective error. If we choose the matrices φ_i such that

λ_max(φ_i) = β,  λ_min(φ_i) = α,  ∀i ∈ F   (42)

we have that ρ(Γ_1) = (β/α) ρ(Γ_2), where Γ_2 is the matrix with elements

(Γ_2)_ij = 0 if j ∉ F_i;   (Γ_2)_ij = µ_i α_ji if j ∈ F_i   (43)

If ρ(Γ_2) < 1, then it is always possible to find a ratio β/α ≥ 1 such that ρ(Γ_1) < 1. The problem of finding coefficients α_ji in Γ_2 guaranteeing that ρ(Γ_2) < 1 and verifying the constraints

Σ_{j ∈ F_i} α_ji < 1,  ∀i : L_i ≠ ∅   (44)
Σ_{j ∈ F_i} α_ji = 1,  ∀i : L_i = ∅   (45)

is a semidefinite programming problem [1] for which efficient solvers exist. Note that, if the graph of the communication network is a tree in which the leader corresponds to the root node and all edges are directed from parent to child nodes, then the matrix Γ_2 is strictly upper triangular. In this case, ρ(Γ_2) = 0 and condition (29) is satisfied.

As for Theorem 3, we have that:
- If the matrices η_i are chosen such that (40) holds, then condition (35) is satisfied.
- A suitable value of η_i for which condition (34) is verified can be found, for instance, by trial and error in simulation.
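The two synthesis steps for Theorems 1 and 2 can be sketched numerically. The matrices below, and the symbol Lambda for the design matrix, are illustrative assumptions of this example, not data from the paper.

```python
import numpy as np

# Step 1: choose K_i = B_i^{-1}(A_i + Lambda_i), so that
# phi_i = B_i K_i - A_i equals the prescribed positive definite Lambda_i.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative follower matrices
B = np.diag([2.0, 1.0])                   # full rank, hence invertible
Lam = np.diag([2.0, 3.0])                 # desired phi_i > 0
K = np.linalg.inv(B) @ (A + Lam)
phi = B @ K - A                           # recovers Lambda_i

# Step 2: for a directed tree of followers (a chain in which follower 2
# follows follower 1 and follower 3 follows follower 2, leader at the root),
# mu_i = 1 and Gamma_2 in (43) is strictly triangular, so rho(Gamma_2) = 0
# and condition (29) holds for any ratio beta/alpha.
Gamma2 = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],       # follower 2 follows follower 1
                   [0.0, 1.0, 0.0]])      # follower 3 follows follower 2
rho = max(abs(np.linalg.eigvals(Gamma2)))
```

The second step mirrors the tree-topology remark above: a strictly triangular nonnegative gain matrix is nilpotent, hence its spectral radius is zero.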
VIII. SIMULATION RESULTS

A. Case A

As an example, we consider the multi agent system represented in Fig. 3, composed of two followers and one leader, indexed by the sets F = {1, 2} and L = {3}, respectively. We assume that all agents i ∈ {1, 2, 3} obey the dynamics

ẋ_i = (ẋ_ix, ẋ_iy)^T = (u_ix, u_iy)^T = u_i   (46)

Fig. 3. The topology of the multi agent system considered in Section VIII, Case A.

Agent 1 is assigned to follow agent 2 and agent 3 with displacements d_21 = [1,1]^T and d_31 = [2,3]^T, respectively. Agent 2 follows agent 1 with displacement d_12 = [1,1]^T. Thus, L_1 = {3}, F_1 = {2}, L_2 = ∅, and F_2 = {1}. According to (6), the position error of agent 1 is defined as

x̃_1 = (1 − α)(x_2 − d_21) + α(x_3 − d_31) − x_1

with α ∈ (0,1). Similarly, the position error of agent 2 is

x̃_2 = x_1 − d_12 − x_2

Since A_i = 0 and B_i = I in (46), we have e_i = 0, φ_i = K_i and σ_i = η_i, so the control law (12) results in

u_i = K_i x̃_i + η_i x̃_i/‖x̃_i‖,   i ∈ {1,2}

In order to fulfill the assumptions of Theorem 1, K_1 = φ_1 and K_2 = φ_2 are chosen as positive definite matrices and η_1 = σ_1, η_2 = σ_2 are chosen as positive semidefinite matrices. In order to fulfill (15), η_1 is chosen such that

λ_min(η_1) ≥ (1 − α) λ_max(η_2)   (47)

Then, from Theorem 1, x̃_1 is ISS with respect to x̃_2 and ẋ_3, and, from (16) and (17), the ISS gain functions are

γ_12(r) = (2(1 − α) λ_max(K_2)/(θ λ_min(K_1))) r
γ_13(r) = (2α/(θ λ_min(K_1))) r

As for agent 2, η_2 is chosen such that (15) is satisfied, i.e.,

λ_min(η_2) ≥ λ_max(η_1)   (48)

Fig. 4. The evolution of ‖x̃_1(t)‖ and ‖x̃_2(t)‖ for Case A1 discussed in Section VIII.

Then, x̃_2 is ISS with respect to x̃_1, and, from (16), the ISS gain function is given by

γ_21(r) = (λ_max(K_1)/(θ λ_min(K_2))) r

In order to make the collective error x̃ = [x̃_1^T, x̃_2^T]^T ISS with respect to ẋ_3, the parameters α, K_1 and K_2 must be chosen so as to satisfy the assumptions of Theorem 2. The matrix Γ_1 for the considered multi agent system is given by

Γ_1 = [ 0, 2(1 − α) λ_max(K_2)/λ_min(K_1) ; λ_max(K_1)/λ_min(K_2), 0 ]   (49)

Note that, in this case, condition (29) is equivalent to the well-known small gain theorem [2].
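This equivalence can be checked numerically: for a 2×2 gain matrix with zero diagonal, the spectral radius is the square root of the product of the two off-diagonal gains, so (29) reduces to the classical small gain condition. The gain values below are illustrative, not those of the paper's simulations.

```python
import numpy as np

def rho_2x2(g12, g21):
    """Spectral radius of a 2x2 gain matrix with zero diagonal, as in (49)."""
    G = np.array([[0.0, g12], [g21, 0.0]])
    return max(abs(np.linalg.eigvals(G)))

# For nonnegative gains, the eigenvalues are +/- sqrt(g12 * g21), so
# rho < 1 is exactly the small gain condition g12 * g21 < 1.
g12, g21 = 0.5, 1.5
assert np.isclose(rho_2x2(g12, g21), np.sqrt(g12 * g21))
```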
From Theorem 2, the collective error is ISS if condition (29) holds, which here is equivalent to

2(1 − α) λ_max(K_1) λ_max(K_2)/(λ_min(K_1) λ_min(K_2)) < 1   (50)

By selecting

α = 2/3,  K_1 = 2I,  K_2 = 2I

condition (50) is satisfied, thus the collective error is ISS. In order to satisfy conditions (47) and (48), a possible choice for η_1 and η_2 is

η_1 = η_2 = η̂ I,  η̂ ≥ 0

1) Case A1: A first simulation is performed with u_3 = [0.5, 0.5]^T and η̂ = 0. This means that the sliding mode component of the control law is not used. Fig. 4 shows the evolution of ‖x̃_1‖ and ‖x̃_2‖. As one can note, both ‖x̃_1‖ and ‖x̃_2‖ are bounded but they do not reach zero. More specifically, ‖x̃_1(t)‖ ≤ 0.4 and ‖x̃_2(t)‖ ≤ 0.4 for all t ≥ 3.
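A minimal forward-Euler sketch of this two-follower scenario can reproduce the qualitative behaviour. The step size, horizon, initial positions and helper names are choices of this example, not taken from the paper; the parameters α = 2/3, K_i = 2I, η_i = I and the displacements follow the text, with the sliding gain active so that the errors vanish.

```python
import numpy as np

dt, steps = 1e-3, 5000                     # 5 s horizon, assumed step size
alpha, K, eta = 2.0 / 3.0, 2.0, 1.0        # K_i = 2I, eta_i = I
d21 = np.array([1.0, 1.0])
d31 = np.array([2.0, 3.0])
d12 = np.array([1.0, 1.0])
u3 = np.array([0.5, 0.5])                  # constant leader velocity

def err1(x1, x2, x3):                      # position error (6) of agent 1
    return (1 - alpha) * (x2 - d21) + alpha * (x3 - d31) - x1

def err2(x1, x2):                          # position error of agent 2
    return x1 - d12 - x2

def ctrl(e):                               # control law (12) with e_i = 0
    n = np.linalg.norm(e)
    return K * e + (eta * e / n if n > 1e-9 else np.zeros_like(e))

# Assumed initial positions for the illustration.
x1, x2, x3 = np.zeros(2), np.array([3.0, 0.0]), np.array([0.0, 3.0])
for _ in range(steps):
    e1, e2 = err1(x1, x2, x3), err2(x1, x2)
    x1 = x1 + dt * ctrl(e1)
    x2 = x2 + dt * ctrl(e2)
    x3 = x3 + dt * u3                      # leader moves autonomously
```

With η̂ = 0 instead, the same loop leaves a persistent residual error driven by the leader velocity, matching the bounded-but-nonzero behaviour of Case A1.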
2) Case A2: The second simulation case is performed with u 3 as in Case, but with ˆη =. Fig 5 shows the evolution in time of x, and x 2. Differently from Case A, now both x and x 2 go to zero in finite time. More specifically, x (t) = x 2 (t) = for all t 2.5. 6 5 4 3 2 x d x x 2 d x2.5.5 2 2.5 3 3.5 4 4.5 5 Time [s] Fig. 5. The evolution of x (t) and x 2 (t) for the Case A2 discussed in Section VIII. B. Case B We consider the multi agent system represented in Fig. 6, composed by three followers and two leaders, indexed by the sets F = {, 2,3}, and L = {4, 5}, respectively. As in the previous simulation example, we assume that all agents obey to the dynamics (46), for i {, 2, 3,4, 5}. The position errors for agent, 2, and 3 are given by x = α 2 (x 2 d 2 ) + α 3 (x 3 d 3 ) + α 4 (x 4 d 4 ) x x 2 = α 2 (x d 2 ) + α 52 (x 5 d 52 ) x 2 x 3 = α 3 (x d 3 ) + α 23 (x 2 d 23 ) x 3 where d 2 = [2,] T, d 3 = [, 2] T, d 4 = [, ] T, d 2 = [ 2,] T, d 52 = [,] T, d 3 = [,2] T, d 23 = [, 2] T, α 2 + α 3 + α 4 =, α 2 + α 52 =, and α 3 + α 23 =. The control law (2) is applied to the follower agents. In order to fulfill the assumptions of Theorem, K = φ, K 2 = φ 2, K 3 = φ 3 are chosen as positive definite matrices and η = σ, η 2 = σ 2, η 3 = σ 3 are chosen as positive semidefinite matrices. In order to fulfill (5), the matrices η, η 2, and η 3 are chosen as η = η 2 = η 3 = ˆηI, ˆη From (6), the gain functions γ ij are γ 2 = 3α 2λ max (K 2 ) θλ min (K ) γ 3 = 3α 3λ max (K 3 ) θλ min (K ) γ 2 = 2α 2λ max (K ) θλ min (K 2 ) γ 3 = 2α 3λ max (K ) θλ min (K 3 ) γ 32 = 2α 23λ max (K 2 ) θλ min (K 3 ) We choose K, K 2, and K 3 as K = K 2 = K 3 = ˆKI, ˆK > From (43), the matrix Γ 2 results in 3α 2 3α 3 Γ 2 = 2α 2 2α 3 2α 23 The characteristic polynomial of Γ 2 results in λ 3 (6α 2 α 2 + 6α 3 α 3 )λ 2α 2 α 23 α 3 = In order to fullfill condition (29), a possible choice of the parametes α ij is α 2 = /6 α 3 = /6 α 4 = 4/6 α 2 = /6 α 52 = 5/6 α 3 = /2 α 23 = /2 Fig. 6. 
The topology of the multi agent system considered in Section VIII, Case B.

1) Case B1: A first simulation is performed with u_4 = u_5 = [0.5, 0.5]^T, η̂ = 0, and K̂ = 1. Therefore, the sliding mode component of the control law is switched off. Fig. 7 shows the evolution in time of ‖x̃_1‖, ‖x̃_2‖ and ‖x̃_3‖. As one can note, all position errors are bounded but they do not reach zero, since ‖x̃_1(t)‖ ≤ 0.4, ‖x̃_2(t)‖ ≤ 0.4 and ‖x̃_3(t)‖ ≤ 0.4 for all t ≥ 2.5.

2) Case B2: The second simulation is performed with u_4, u_5 and K̂ as in Case B1, but with η̂ = 1. The evolution in time of ‖x̃_1‖, ‖x̃_2‖ and ‖x̃_3‖ is reported in Fig. 8. In this case, all position errors go to zero in finite time. More specifically, ‖x̃_1(t)‖ = ‖x̃_2(t)‖ = ‖x̃_3(t)‖ = 0 for all t ≥ 2.
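Condition (29) for the Case B weights can be verified numerically; this sketch only re-evaluates the matrix Γ_2 and the characteristic polynomial given above for the chosen coefficients.

```python
import numpy as np

# Gamma_2 of Section VIII-B with the weights chosen in the text.
a21, a31, a41 = 1 / 6, 1 / 6, 4 / 6
a12, a52 = 1 / 6, 5 / 6
a13, a23 = 1 / 2, 1 / 2
Gamma2 = np.array([[0.0,     3 * a21, 3 * a31],
                   [2 * a12, 0.0,     0.0    ],
                   [2 * a13, 2 * a23, 0.0    ]])
rho = max(abs(np.linalg.eigvals(Gamma2)))   # Perron root, should be < 1

# Characteristic polynomial stated in the text:
# lambda^3 - (6 a12 a21 + 6 a13 a31) lambda - 12 a12 a23 a31 = 0
coeffs = [1.0, 0.0,
          -(6 * a12 * a21 + 6 * a13 * a31),
          -12 * a12 * a23 * a31]
```

Since Γ_2 is nonnegative, its spectral radius is itself an eigenvalue and therefore a root of the polynomial above; evaluating the polynomial at `rho` gives a residual at machine precision.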
Fig. 7. The evolution of the position errors for Case B1 discussed in Section VIII.

Fig. 8. The evolution of the position errors for Case B2 discussed in Section VIII.

IX. CONCLUSIONS

In this paper a decentralized sliding mode control scheme for multi agent systems with a directed communication network has been presented. The control objective is to drive each follower agent to a target position that depends on its neighbors. The propagation of the position errors within the network has been studied using the notion of ISS. In particular, we have shown that, under sufficient conditions on the control parameters, the proposed control scheme is capable of guaranteeing that the collective error dynamics is ISS with respect to the leaders' velocities. Moreover, it has been proved that, under suitable assumptions, the sliding mode part of the control law is capable of steering the position errors to zero in finite time. Simulation results have been presented to demonstrate the effectiveness of the proposed control approach. The main limitation of our approach is that fully actuated follower dynamics have to be assumed. Future research will focus on methods for relaxing this assumption and on generalizations of the control scheme to the case of perturbations affecting the follower behaviour and the transmission channel.

X. ACKNOWLEDGMENT

This work has been partially done in the framework of the HYCON Network of Excellence, contract number FP6-IST-511368.

REFERENCES

[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] S. Dashkovskiy, B. S. Rüffer, and F. R. Wirth. An ISS small gain theorem for general networks. Mathematics of Control, Signals and Systems, to appear, 2006.
[3] C. Edwards and S. K. Spurgeon. Sliding Mode Control: Theory and Applications. Taylor & Francis, 1998.
[4] G. Ferrari-Trecate, A. Buffa, and M. Gati. Analysis of coordination in multi-agent systems through partial difference equations. IEEE Trans. on Automatic Control, 51(6):1058-1063, 2006.
[5] V. Gazi. Swarm aggregations using artificial potentials and sliding mode control. IEEE Trans. on Robotics, 21(6):1208-1214, 2005.
[6] F. Giulietti, L. Pollini, and M. Innocenti. Autonomous formation flight. IEEE Control Systems Magazine, 20(6):34-44, 2000.
[7] R. Horowitz and P. Varaiya. Control design of an automated highway system. Proc. IEEE, 88(7):913-925, 2000.
[8] A. Isidori. Nonlinear Control Systems II. Springer-Verlag, 1999.
[9] A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. on Automatic Control, 48(6):988-1001, 2003.
[10] L. Moreau. Stability of multiagent systems with time-dependent communication links. IEEE Trans. on Automatic Control, 50(2):169-182, 2005.
[11] P. Ögren, M. Egerstedt, and X. Hu. A control Lyapunov function approach to multi-agent coordination. IEEE Trans. on Robotics and Automation, 18(5):847-851, 2002.
[12] R. Olfati-Saber and R. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. on Automatic Control, 49(9):1520-1533, 2004.
[13] H. Tanner, V. Kumar, and G. Pappas. Stability properties of interconnected vehicles. In Proceedings of the 15th International Symposium on Mathematical Theory of Networks and Systems, South Bend, IN, 2002.
[14] H. Tanner and G. Pappas. Formation input-to-state stability. In 15th IFAC World Congress, Barcelona, Spain, 2002.
[15] H. Tanner, G. Pappas, and V. Kumar. Leader-to-formation stability. IEEE Trans. on Robotics and Automation, 20(3):443-455, 2004.