Event-triggered control of multi-agent systems with double-integrator dynamics: Application to vehicle platooning and flocking algorithms


Master Thesis 16

Event-triggered control of multi-agent systems with double-integrator dynamics: Application to vehicle platooning and flocking algorithms

Steffen Linsenmayer

December 2014

Examiner: Prof. Dr.-Ing. Frank Allgöwer
Supervisors: Dr. Dimos V. Dimarogonas, Dipl.-Ing. Rainer Blind

Institute for Systems Theory and Automatic Control, University of Stuttgart (Prof. Dr.-Ing. Frank Allgöwer)
Automatic Control Laboratory, School of Electrical Engineering, KTH Royal Institute of Technology, Sweden

Kurzfassung

The main topic of this thesis is event-based control of multi-agent systems with double-integrator dynamics. The proposed scheme forces each agent to broadcast its state information only at discrete points in time. The thesis is divided into two parts. The first part presents approaches to the event-based control of vehicles in platoons. Events are detected by a rule that depends only on the state of the respective agent and on time. Two different controller architectures are considered: one uses only information from the preceding vehicle, while the other also uses information from the following vehicle. Linear controllers are examined first and, building on these, the behavior under the influence of disturbances and a nonlinear controller are investigated. The results guarantee boundedness of the states and, for the linear controllers, the possibility of achieving asymptotic stability. Furthermore, the existence of a lower bound on the time between two events is proven. The second part provides an analysis of a known algorithm for the simulation of flocking behavior under event-based communication. Different communication strategies are considered and suitable event-detection rules are derived for each. The results show that the known properties of the algorithm, such as collision avoidance, are preserved in the proposed event-based scheme. In both parts the theoretical results are supported by numerical simulations.

Abstract

The main topic of this thesis is event-triggered control for multi-agent systems with double-integrator dynamics. The proposed scheme forces every agent to broadcast its state information only at discrete event times. The thesis is divided into two parts. The first part presents methods to control vehicular platoons under event-based communication, where events are determined by a trigger rule that depends only on the agent's individual state and on time. Two control architectures are considered: one uses information from the front and back neighbor, the other depends only on predecessor information. The investigation starts with linear controllers and shows extensions for disturbed systems and one nonlinear controller. The results guarantee boundedness of the states and, in the linear cases, the possibility to achieve asymptotic stability. Furthermore, the existence of a lower bound on the inter-event times is guaranteed. In the second part the application of event-based communication to a known flocking algorithm is analyzed. Different types of communication are treated and trigger rules depending on the agent's state are derived. The results guarantee that the known properties of the algorithm, especially collision avoidance, are preserved in this event-triggered scheme. The theoretical results of both parts are supported with numerical simulations.

Contents

1. Introduction .... 1
2. Preliminaries .... 5

I. Event-triggered control for vehicle platooning .... 9

3. Problem statement .... 11
4. Symmetric bidirectional control architecture .... 15
   4.1. Linear controller .... 17
   4.2. Linear controller under disturbances .... 26
5. Predecessor-following control architecture .... 31
   5.1. Linear controller .... 32
   5.2. Linear controller under disturbances .... 37
   5.3. Nonlinear controller .... 38
6. Simulations of event-triggered platooning .... 49

II. Event-triggered flocking algorithm .... 57

7. Analysis of event-triggered flocking .... 59
   7.1. Only velocity information triggered .... 65
   7.2. Position and velocity information triggered .... 68
   7.3. Results .... 73
8. Simulations of event-triggered flocking .... 77
9. Conclusions and Outlook .... 81

List of Symbols

|·|  absolute value of a number or cardinality of a set
‖·‖  2-norm of a vector or matrix
G = {V, E}  graph G consisting of node set V and edge set E
A(G), D(G)  adjacency and degree matrix of graph G
N, N_r  number of real and fictitious agents
p_i  absolute position of agent i
p_{i,0}, v_{i,0}  initial position and velocity of agent i
Δ_{(i,j)}  desired gap between vehicles i and j
τ_i  inter-event time of agent i
t^i_k  kth event time of agent i
e_i  transmitted position error of agent i using first-order hold
e_{di}  transmitted velocity error of agent i using zero-order hold
h_i  stack vector of e_i and e_{di}
c_0, c_1, α  non-negative parameters in the trigger function
k, b  control parameters for linear platooning controllers
p̃_i, ˙p̃_i  relative position and velocity of agent i
x  stack vector of relative position and velocity information
x̂  last transmitted value of x
e  stack vector of h_i for every agent
ε_1, ε_2, K_1, K_2  bounds on the allowed sector nonlinearities
L_1, L_2  Lipschitz constants of the nonlinearities
m  spatial dimension
p_r, ṗ_r  position and velocity of the group objective
‖·‖_σ  σ-norm of a vector
ε  parameter for the σ-norm
r, d  sensing radius and desired inter-agent distance
e_p, e_{pd}  stack vectors of e_i, e_{di} for all agents
M  Lipschitz constant for the potential function
θ_i, φ_i  parameters for the trigger rule of agent i for event-triggered flocking
1_N  all-ones vector with length N
I_N  N × N identity matrix

List of Figures.1. Example for a communication graph G with fictitious leader agent 1............................... 5 3.1. Desired configuration with constant spacing and constant velocities with a fictitious leader agent 0............ 11 3.. Information exchange in symmetric bidirectional architecture with a fictitious leader agent 0.................. 1 3.3. Information exchange in predecessor-following architecture with a fictitious leader agent 0.................. 13 5.1. Structure for Proposition 5.3.1................... 39 6.1. Evolution of x(t), e(t) and inter-event times with eventtriggered linear symmetric bidirectional controller....... 50 6.. Evolution of x(t) and inter-event times with time-triggered linear symmetric bidirectional controller............ 51 6.3. Evolution of x(t), e(t) and inter-event times with eventtriggered linear symmetric bidirectional controller under disturbances............................... 5 6.4. Evolution of x(t), e(t) and inter-event times with eventtriggered linear predecessor-following controller........ 53 6.5. Position of real agents and fictitious agent under event-triggered linear predecessor-following controller............. 54 6.6. Sector nonlinearity g(x)...................... 54 6.7. Evolution of x(t), e(t) and inter-event times with eventtriggered nonlinear predecessor-following controller..... 55 6.8. Evolution of x(t) for time-triggered nonlinear predecessorfollowing controller......................... 56 8.1. Function f (z) from Lemma 7..1 and its derivative...... 77 xi

List of Figures 8.. Results for simulated flocking under continuous exchange of information............................. 78 8.3. Results for simulated flocking for triggered position and velocity information.......................... 79 8.4. Inter-event times for position and velocity information being triggered............................... 79 8.5. Inter-event times for triggered velocity information...... 80 8.6. Evolution of the Hamiltonian under event-triggered control. 80 xii

1. Introduction

This thesis presents an event-triggered control framework for multi-agent systems and its application to two known control problems with double-integrator agents. The first application is vehicle platooning. The goal of vehicle platooning is to run a platoon of vehicles with a desired constant velocity and a desired constant spacing between the vehicles in order to increase the capacity of roads. The second application is the simulation of flocking behavior. Flocking algorithms try to reproduce the properties that can be observed in flocks in nature, such as separation, alignment and cohesion.

The idea of event-triggered control is that the controller is not updated continuously but only at certain times. This results in a lower computation effort for the controller. In contrast to time-triggered control, these discrete times are not known in advance, e.g., through a fixed trigger period; instead, the event times are determined by a predefined trigger rule. One of the first results on event-triggered control introduced an event-based PID controller in [1]. Further work investigated event-based control for larger classes of systems. The work in [7] treats nonlinear systems that are rendered input-to-state stable ([13]) by a given controller. It states a trigger law that guarantees asymptotic stability and guarantees that the time between two events is lower bounded. Enhancements based on this work lead to a Lyapunov framework in [19] that guarantees asymptotic stability and can be used to synthesize trigger laws. In [7] the possibility to apply event-triggered control to networked dynamic systems is mentioned. In these systems a large improvement is expected if every agent decides on its own when it broadcasts its state information, since then not only the control effort but also the network load can be reduced. This concept is analyzed in [31] for a class of networked dynamic systems with controllers that fulfill a certain matching condition.

In [4] the well-known controller for the consensus problem from [17] is investigated in an event-triggered implementation. There, each agent observes its neighbors' states and decides when to update the control. Furthermore, an extension called self-triggered control is presented, where the next event time is computed when the last event takes place. Therefore the states do not have to be monitored

continuously. The work in [] also focuses on the consensus problem, not only in the single-integrator but also in the double-integrator case. In addition, in this work each agent only observes its own state and then decides through the trigger rule when it broadcasts the information to its neighbors. Furthermore, the type of the trigger rule changed. In earlier works it was suggested to compare the error between the last transmitted state and the current state of the agent to the norm of the current state, which is also the strategy that will be applied in the second part of this work. However, in [] the transmitted error of the agent is not compared to the current state of the agent, but to a time-dependent function which is allowed to asymptotically reach zero in some cases. This concept is used in the first part of the work, where we combine it with existing work on vehicle platooning.

Platooning is an important idea to prevent traffic congestion and increase road safety by running a big group of vehicles in a row with small gaps between the vehicles. Two decentralized control architectures depending only on nearest-neighbor interaction are stated in [8] to control such a platoon. Since that paper gives a detailed analysis of linear and nonlinear controllers regarding stability and robustness, it is the motivating paper on which the work in the first part is based. A possible extension to the configuration in [8] is to assume that the platoon is equipped with a communication network; the control should then be adapted to keep the network load low while preserving the performance of the controller. Therefore the design scheme of [] is fused with control laws from [8] to derive event-triggering rules for platooning in the first part of the thesis.

Control of vehicular platoons is a well-studied problem. In [3] the progress of a project on longitudinal control problems in platoons of vehicles is summarized, with a focus on good ways to model the system and on finding the sensors and actuators necessary to design a controller. In the following years many different aspects of platooning control have been investigated. A lot of work has been done on the analysis of platoon stability. This terminology is based on the string stability concept introduced in [6]. In [33] a controller for a detailed vehicle model is stated that guarantees vehicle stability as well as platoon stability for different spacing strategies and depends only on relative information of the front and back neighbor. For a linear vehicle model with two integrators, using the same controller for each vehicle with the goal of reaching a constant spacing between the vehicles, [1] computes how disturbances propagate through the platoon. The main result is that if only front, or front and back, neighbor information is used in a linear controller, it is not possible to

achieve string stability. Similar results on error amplification in the case of a controller with front and back neighbor information can be found in []; [18] looks at string stability under communication constraints, assuming the platoon has a communication network. Assuming such a communication network raises questions about the influence of the network topology on the platoon behavior. The work in [10] and [9] discusses this topic. As mentioned before, [8] gives a good overview of the results from investigations of platooning control and therefore serves as a good basis on which to build our work on event-triggered control of vehicular platoons.

The second multi-agent system consisting of double-integrator agents that is investigated under event-triggered control in this thesis is the problem of flocking. In [0] three rules are stated that lead to flocking: collision avoidance, velocity matching and flock centering; these rules are used to create computer animations. Later, a physical approach in [30] leads to a quantitative continuum theory of flocking and gives theoretical results. Investigations regarding stability of flocking motion come up in [8], where potential functions that grow to infinity are used to guarantee stable flocking motion. The extension of these results to dynamic networks in [9] uses techniques from [5] to cover the problem of discontinuities that are introduced by switching topologies. In [16] the problem of discontinuities is covered by defining continuous variants of existing properties. That work states three different algorithms: the second treats flocking with a global objective in free space, whereas the third also includes obstacle avoidance. Since the second algorithm works with potential functions with a finite cut-off, it serves as the basis for the work in this thesis, because infinite potential functions are assumed to be hard to handle in combination with event-triggered control.

More recent work on flocking extends the known results in different ways. One topic, which is covered in [5] and [7], is that not every agent knows about the global objective. This leads to a leader-follower investigation of flocking algorithms. This thesis tries to extend parts of the work in [16] to run with event-based communication.

The thesis starts with some necessary preliminaries given in Chapter 2. Then it is split into two parts. Part I includes the work on event-triggered controllers for vehicle platooning. The investigated problem is precisely defined in Chapter 3 and the analysis of the investigated controllers is given in Chapter 4 and Chapter 5. This part ends with simulation results in Chapter 6. Part II is structured in a similar way and contains important definitions, the derivation of the event-triggered controller and the analysis

in Chapter 7, as well as simulations in Chapter 8. The work ends with Chapter 9, which gives conclusions for both parts and possibilities for further work for both parts, or a possible combination.

2. Preliminaries

In this chapter some basic facts from graph theory that are used in this work are repeated. Furthermore, the basic idea of the event-triggered control scheme used in this thesis is explained and some definitions are stated.

The graph class G = (V, E) considered in this thesis consists of a set of nodes or vertices V, with cardinality |V| = N + N_r, where N is the number of real agents and N_r is the number of fictitious reference agents, and edges E between these nodes. Two vertices i and j are called adjacent (i ∼ j) if there is an edge between them, i.e., (i, j) ∈ E. If i ∼ j ⇔ j ∼ i, the graph is called undirected. All information about the adjacencies is collected in the adjacency matrix A(G) ∈ ℝ^{|V|×|V|}. The ij-entry of the adjacency matrix is 1 if i ∼ j and zero otherwise. The diagonal matrix D(G) ∈ ℝ^{|V|×|V|}, called the degree matrix, has the degree of node i, i.e., its number of neighbors, as the corresponding diagonal entry. The Laplacian of a graph is the matrix combining this information, L(G) = D(G) − A(G) ∈ ℝ^{|V|×|V|}. Thus for an undirected graph L(G) is symmetric, since the adjacency matrix is symmetric in this case. It is a well-known fact that L(G) is a positive semidefinite matrix and that at least one eigenvalue of L(G) equals 0. Furthermore, if one eliminates column k and row k for each virtual reference agent k, one gets the grounded Laplacian matrix L_g(G) ∈ ℝ^{N×N}, which is used in [10]. If L(G) is symmetric, the corresponding grounded Laplacian is symmetric as well and therefore all its eigenvalues are real numbers. Many known properties of these matrices and further definitions can be found in the literature, e.g., in [6]. As an example, Figure 2.1 shows an undirected graph with N = 4 real agents {2, 3, 4, 5} and N_r = 1 fictitious reference agent {1}.

Figure 2.1.: Example for a communication graph G with fictitious leader agent 1

The Laplacian

matrix L(G) ∈ ℝ^{5×5} for this graph can be computed as

L(G) = \underbrace{\begin{bmatrix} 1&0&0&0&0 \\ 0&2&0&0&0 \\ 0&0&2&0&0 \\ 0&0&0&2&0 \\ 0&0&0&0&1 \end{bmatrix}}_{D(G)} - \underbrace{\begin{bmatrix} 0&1&0&0&0 \\ 1&0&1&0&0 \\ 0&1&0&1&0 \\ 0&0&1&0&1 \\ 0&0&0&1&0 \end{bmatrix}}_{A(G)} = \begin{bmatrix} 1&-1&0&0&0 \\ -1&2&-1&0&0 \\ 0&-1&2&-1&0 \\ 0&0&-1&2&-1 \\ 0&0&0&-1&1 \end{bmatrix}

and therefore, by deleting the first row and column, the grounded Laplacian matrix

L_g(G) = \begin{bmatrix} 2&-1&0&0 \\ -1&2&-1&0 \\ 0&-1&2&-1 \\ 0&0&-1&1 \end{bmatrix} ∈ ℝ^{4×4}

can be derived. This example already provides a connection to one of the platooning controllers, which can be seen by comparing Figure 2.1 to Figure 3.2.

In the second part the communication network is not necessarily fixed, since it depends on the distance between two nodes. Therefore it is possible that the graph switches from time to time. To simplify the analysis, the introduced matrices are modified such that a switch in the network does not introduce discontinuities in the control law. These modifications are explained in detail in the second part of the thesis since they only occur in the flocking algorithm.

The key point of the event-triggered control scheme applied in this thesis is that every agent decides on its own at what time it transmits its current state to its neighbors. Each agent continuously monitors its own state and compares it to the last information it sent to its neighbors. The difference of these values is denoted as the transmitted error of agent i. The discrete times when agent i triggers its state information are denoted by t^i_k with k ∈ ℕ, and the transmitted state at this time is x̂_i = x_i(t^i_k). Our goal in this scheme is to derive a trigger rule, depending only on the current state and
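The construction above can be checked numerically. The following NumPy sketch (not part of the thesis, added here for illustration) builds the adjacency, degree, Laplacian, and grounded Laplacian matrices for the path graph of Figure 2.1 and verifies the stated spectral properties:

```python
import numpy as np

# The Figure 2.1 example as code: an undirected path graph on the node set
# {1, 2, 3, 4, 5}, with node 1 the fictitious reference agent (0-indexed below).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1          # undirected graph: symmetric adjacency A(G)

D = np.diag(A.sum(axis=1))         # degree matrix D(G)
L = D - A                          # Laplacian L(G) = D(G) - A(G)
L_g = L[1:, 1:]                    # grounded Laplacian: drop row/col of agent 1

print(np.linalg.eigvalsh(L))       # real, >= 0, smallest eigenvalue is 0
print(np.linalg.eigvalsh(L_g))     # for this connected graph: all > 0
```

Since the graph is connected, grounding the single reference node makes the smallest eigenvalue strictly positive, which is exactly the property exploited by the platooning analysis.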

the last transmitted state of agent i, as well as on time, that defines the discrete trigger times. The current control each agent applies uses the last transmitted values of its neighbors. Although agent i knows its own state at all times, it uses the last transmitted state x_i(t^i_k) instead of x_i(t) for all t in the interval [t^i_k, t^i_{k+1}) to compute the control. An important measure in event-triggered control is the inter-event time of an agent i, defined as τ_i = t^i_{k+1} − t^i_k. Thinking of these inter-event times, a crucial property in event-triggered control is avoiding Zeno behavior. According to [1], Zeno behavior occurs when an infinite number of events is triggered in finite time. Thus, proving the existence, or even computing the value, of a lower bound on the inter-event times τ_i for every agent i ∈ {1, 2, ..., N} guarantees that Zeno behavior is avoided.
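The monitor-and-broadcast loop described above can be sketched for a single agent. This toy example (the scalar dynamics x' = -x and the fixed threshold are made up for illustration and are not from the thesis) shows an agent holding its last broadcast value and firing an event whenever the transmitted error exceeds the threshold, then measuring the resulting inter-event times:

```python
import math

# Toy illustration of the event-triggered broadcast scheme: the agent
# integrates its own state x(t), keeps the last broadcast value x_hat, and
# triggers a new broadcast whenever |x_hat - x| exceeds a fixed threshold.

def simulate_events(threshold=0.05, dt=1e-3, T=5.0):
    t, x = 0.0, 1.0
    x_hat = x                  # last transmitted value
    event_times = [0.0]
    while t < T:
        x += -x * dt           # example dynamics x' = -x, forward Euler
        t += dt
        if abs(x_hat - x) > threshold:   # transmitted error too large: event
            x_hat = x
            event_times.append(t)
    return event_times

events = simulate_events()
taus = [b - a for a, b in zip(events, events[1:])]  # inter-event times tau
print(len(events), min(taus))  # finitely many events, strictly positive taus
```

On a finite horizon the simulation produces finitely many events with strictly positive inter-event times; the analytical results in the following chapters establish such lower bounds without discretizing time.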

Part I. Event-triggered control for vehicle platooning

3. Problem statement

In this chapter the problem we deal with in the first part is specified. We analyze a platoon of vehicles moving in one dimension. The goal is to control their motion with a decentralized event-triggered controller. The model as well as the control objective are in the style of [8]. Every vehicle in the platoon is modeled as a fully actuated point mass moving in one dimension. This results in the following double-integrator dynamics for the absolute position p_i of agent i, i ∈ {1, ..., N}:

\begin{bmatrix} \dot{p}_i(t) \\ \ddot{p}_i(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} p_i(t) \\ \dot{p}_i(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_i(t), \qquad \begin{bmatrix} p_i(0) \\ \dot{p}_i(0) \end{bmatrix} = \begin{bmatrix} p_{i,0} \\ v_{i,0} \end{bmatrix} ∈ ℝ².   (3.1)

The control objective is to keep a formation which is specified by a fictitious reference agent and positive constant gaps Δ_{(i−1,i)} between vehicles i−1 and i. The gaps do not necessarily need to be equal between all vehicles. Furthermore, every car only knows the demanded gaps to its immediate neighbors. The constant-velocity trajectory of reference agent 0 is given as p_0^*(t) = v_0 t + p_{0,0}. This trajectory is only known to agent 1. The desired trajectory of agent i can therefore be computed as

p_i^*(t) = p_0^*(t) − Δ_{(0,i)} = p_0^*(t) − \sum_{j=1}^{i} Δ_{(j−1,j)}.   (3.2)

Figure 3.1.: Desired configuration with constant spacing and constant velocities with a fictitious leader agent 0

From the foregoing properties it is clear that the desired trajectory is not accessible to agents i ∈ {2, ..., N}. Therefore each agent tries to keep the desired distances to its predecessor, or to its predecessor and follower, depending on the architecture. The desired configuration is illustrated in Figure 3.1. The red circle marks the fictitious reference agent moving with the constant velocity v_0. The figure shows that the followers also move with the desired velocity and keep the requested distances between them.

To perform this platooning task, [8] states two different control architectures. In the first one, called the symmetric bidirectional architecture, the control of each agent depends on relative position and velocity measurements to its front and back neighbor as well as the corresponding desired gaps. A special case is the last vehicle in the platoon: since it has no following vehicle, it only controls the gap to its predecessor. The first vehicle in the platoon has no real front neighbor, but it runs the same control law with the fictitious reference vehicle as its predecessor, p_0(t) = p_0^*(t). Therefore the control law can be summarized as

u_i(t) = −f(p_i(t) − p_{i−1}(t) + Δ_{(i−1,i)}) − g(\dot{p}_i(t) − \dot{p}_{i−1}(t)) − f(p_i(t) − p_{i+1}(t) − Δ_{(i,i+1)}) − g(\dot{p}_i(t) − \dot{p}_{i+1}(t)), i ∈ {1, 2, ..., N−1},
u_N(t) = −f(p_N(t) − p_{N−1}(t) + Δ_{(N−1,N)}) − g(\dot{p}_N(t) − \dot{p}_{N−1}(t)).   (3.3)

Figure 3.2.: Information exchange in symmetric bidirectional architecture with a fictitious leader agent 0

The symmetric bidirectional control architecture is illustrated in Figure 3.2, where the arrows mark the exchange of state information. The other control scheme is called predecessor-following; it only takes into account the relative position and velocity measurements to the front neighbor and the corresponding desired gap. Hence the last vehicle can run exactly the same controller as the other vehicles in this case, and the control law can be stated as

u_i(t) = −f(p_i(t) − p_{i−1}(t) + Δ_{(i−1,i)}) − g(\dot{p}_i(t) − \dot{p}_{i−1}(t)), i ∈ {1, 2, ..., N}.   (3.4)
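Before turning to event-based communication, the predecessor-following law (3.4) with continuous information exchange and linear f(z) = kz, g(z) = bz can be sketched as a short simulation. All numeric values below (platoon size, gains, gaps, horizon) are illustrative choices and are not parameters from the thesis:

```python
import numpy as np

# Minimal simulation sketch of the predecessor-following law (3.4) with
# continuous communication and linear f(z) = k*z, g(z) = b*z.
N, k, b = 5, 1.0, 2.0          # platoon size and linear controller gains
v0, delta = 1.0, 5.0           # reference velocity and uniform desired gap
dt, steps = 0.005, 40_000      # forward Euler discretization, 200 s horizon

rng = np.random.default_rng(0)
p = -delta * np.arange(1, N + 1) + rng.uniform(-1.0, 1.0, N)  # perturbed start
v = np.full(N, v0) + rng.uniform(-0.5, 0.5, N)
p0 = 0.0                       # fictitious reference agent 0

for _ in range(steps):
    pred_p = np.concatenate(([p0], p[:-1]))   # predecessor positions
    pred_v = np.concatenate(([v0], v[:-1]))   # predecessor velocities
    u = -k * (p - pred_p + delta) - b * (v - pred_v)   # control law (3.4)
    p += v * dt
    v += u * dt
    p0 += v0 * dt

gap_errors = np.concatenate(([p0], p[:-1])) - p - delta
print(np.abs(gap_errors).max())   # spacing errors decay toward zero
```

With these gains each vehicle's error dynamics form a critically damped second-order system driven by its predecessor, so the spacing and velocity errors decay to zero along the string.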

Figure 3.3.: Information exchange in predecessor-following architecture with a fictitious leader agent 0

The predecessor-following control architecture is visualized in Figure 3.3. In both cases we assume f, g : ℝ → ℝ to be odd and smooth enough to guarantee the existence of a solution. In the following chapters different types of functions are analyzed, for example the special case of f and g being linear functions. With continuous exchange of information these two control architectures are well studied in [8]. There are stability results for linear and nonlinear controllers. Furthermore, robustness properties of these controllers are analyzed with analytical and numerical methods. In this thesis they are investigated in an event-triggered framework with event-based communication. The basic concept is that agent i broadcasts its state information only when a trigger function f_i(·) reaches a specific value. To state the trigger function we introduce the transmitted errors for all t : t^i_k ≤ t < t^i_{k+1} and for all i ∈ {1, 2, ..., N},

e_i(t) = \hat{p}_i + (t − t^i_k)\hat{\dot{p}}_i − p_i(t),
e_{di}(t) = \hat{\dot{p}}_i − \dot{p}_i(t),   (3.5)

where \hat{p}_i = p_i(t^i_k) and \hat{\dot{p}}_i = \dot{p}_i(t^i_k) are the last transmitted position and velocity values from agent i. The first row of (3.5) shows that the transmitted error concerning the position information is defined in a way that implies that we will use a first-order hold for this information in the event-triggered controller. From [] we use the trigger condition f_i(·) > 0 and the trigger function

f_i(t, e_i(t), e_{di}(t)) = \Big\| \underbrace{\begin{bmatrix} e_i(t) \\ e_{di}(t) \end{bmatrix}}_{=: h_i(t)} \Big\| − (c_0 + c_1 e^{−αt}).   (3.6)

This means that agent i transmits its state to its neighbors as soon as the norm of the transmitted state error ‖h_i(t)‖ exceeds a certain bound, i.e., when ‖h_i(t)‖ > c_0 + c_1 e^{−αt}.
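The transmitted errors (3.5) and the trigger function (3.6) translate directly into code. In the following sketch the sample state values and the parameters c_0, c_1, α are made up for illustration; the functions themselves mirror the definitions above:

```python
import numpy as np

# Transcription of the transmitted errors (3.5) and trigger function (3.6).
# The numeric sample values and c0, c1, alpha below are illustrative only.

def transmitted_error(t, t_k, p_hat, v_hat, p, v):
    """h_i(t): first-order hold in position, zero-order hold in velocity."""
    e_i = p_hat + (t - t_k) * v_hat - p   # position error e_i(t)
    e_di = v_hat - v                      # velocity error e_di(t)
    return np.array([e_i, e_di])

def trigger(t, h, c0=0.05, c1=0.5, alpha=1.0):
    """f_i(t, e_i, e_di) > 0, i.e. ||h_i(t)|| > c0 + c1*exp(-alpha*t)."""
    return bool(np.linalg.norm(h) - (c0 + c1 * np.exp(-alpha * t)) > 0)

# Early on, the time-dependent bound is loose; the same error that does not
# fire an event at t = 1 does fire later, once the bound has shrunk toward c0.
h = transmitted_error(t=1.0, t_k=0.4, p_hat=2.0, v_hat=1.0, p=2.5, v=1.2)
print(trigger(1.0, h), trigger(5.0, h))
```

The time-dependent term c_1 e^{−αt} thus tolerates large transmitted errors during the transient and enforces the tighter bound c_0 asymptotically.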

4. Symmetric bidirectional control architecture

This chapter analyzes event-triggered control for a vehicular platoon with symmetric bidirectional (SB) control architecture as in Figure 3.2. In [8] the investigation yields the result that in this architecture the linear controller performs better than the nonlinear one. Therefore we only investigate the linear special case here. Additionally, we analyze the behavior of the controller under the influence of certain disturbances.

The controller from (3.3) combined with the system dynamics (3.1) yields the closed-loop dynamics for all t > t_0,

\ddot p_i(t) = -f(p_i(t) - p_{i-1}(t) + \Delta_{(i-1,i)}) - g(\dot p_i(t) - \dot p_{i-1}(t)) - f(p_i(t) - p_{i+1}(t) - \Delta_{(i,i+1)}) - g(\dot p_i(t) - \dot p_{i+1}(t)), \quad i \in \{1,2,\dots,N-1\},
\ddot p_N(t) = -f(p_N(t) - p_{N-1}(t) + \Delta_{(N-1,N)}) - g(\dot p_N(t) - \dot p_{N-1}(t)),   (4.1)

in the case that information exchange happens continuously. Since we want to analyze the behavior with event-based communication, we state an event-triggered version of the general symmetric bidirectional closed loop. This event-based controller is based on the last transmitted state information. It uses a first-order hold to estimate the current position; the velocity is estimated by the last transmitted value over the whole inter-event time, i.e.,

p_{i,\mathrm{estimated}} = \hat p_i + (t - t^i_k)\,\hat{\dot p}_i, \qquad \dot p_{i,\mathrm{estimated}} = \hat{\dot p}_i   (4.2)

for all t that fulfill t^i_k \le t < t^i_{k+1}. To state the event-triggered controller we have to define the indices of the last triggered states of the front and back neighbors in the case that agent i triggers an event at time t^i_k. The goal is to denote the last trigger time of agent i+1, respectively i-1, by t^{i+1}_{\bar k} and t^{i-1}_{\underline k}. Therefore we define

\bar k := \arg\min_{\kappa \in \mathbb{N}:\ t^{i+1}_\kappa \le t^i_k} \big( t^i_k - t^{i+1}_\kappa \big), \qquad \underline k := \arg\min_{\kappa \in \mathbb{N}:\ t^{i-1}_\kappa \le t^i_k} \big( t^i_k - t^{i-1}_\kappa \big).   (4.3)
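The index definition in (4.3) simply selects, among all trigger times of a neighbor that are not later than t^i_k, the most recent one. A minimal sketch (function name and list representation are illustrative choices, not from the thesis):

```python
def last_index_before(neighbor_times, t):
    """Return the index of the latest entry in the sorted list
    neighbor_times with neighbor_times[idx] <= t, i.e. the arg min of
    (t - neighbor_times[idx]) over admissible indices, as in (4.3)."""
    candidates = [j for j, tk in enumerate(neighbor_times) if tk <= t]
    return max(candidates)  # latest admissible trigger index

# neighbor i+1 triggered at these times; agent i triggers at t = 0.75,
# so the last transmitted neighbor state is the one from t = 0.6
assert last_index_before([0.0, 0.3, 0.6, 0.9], 0.75) == 2
```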

Finally we derive the event-based symmetric bidirectional closed loop

\ddot p_i(t) = -f\big(\hat p_i + (t - t^i_k)\hat{\dot p}_i - \hat p_{i-1} - (t - t^{i-1}_{\underline k})\hat{\dot p}_{i-1} + \Delta_{(i-1,i)}\big) - g(\hat{\dot p}_i - \hat{\dot p}_{i-1}) - f\big(\hat p_i + (t - t^i_k)\hat{\dot p}_i - \hat p_{i+1} - (t - t^{i+1}_{\bar k})\hat{\dot p}_{i+1} - \Delta_{(i,i+1)}\big) - g(\hat{\dot p}_i - \hat{\dot p}_{i+1}), \quad i \in \{1,2,\dots,N-1\},
\ddot p_N(t) = -f\big(\hat p_N + (t - t^N_k)\hat{\dot p}_N - \hat p_{N-1} - (t - t^{N-1}_{\underline k})\hat{\dot p}_{N-1} + \Delta_{(N-1,N)}\big) - g(\hat{\dot p}_N - \hat{\dot p}_{N-1}),   (4.4)

for all t with t^i_k \le t < \min\{t^i_{k+1},\, t^{i-1}_{\underline k + 1},\, t^{i+1}_{\bar k + 1}\}. The next step is a change of coordinates to simplify the further analysis. Therefore the position and velocity errors are defined as the difference between the current and the desired value, i.e.,

\tilde p_i(t) := p_i(t) - p^*_i(t), \qquad \tilde{\dot p}_i(t) := \dot p_i(t) - \dot p^*_i(t) = \dot p_i(t) - v_0.   (4.5)

The definition of the transmitted errors in (3.5) can be used to compute

\hat p_i + (t - t^i_k)\hat{\dot p}_i = p_i(t) + e_i(t) = \tilde p_i(t) + e_i(t) + p^*_i(t)   (4.6)

as well as

\hat{\dot p}_i = \dot p_i(t) + e_{di}(t) = \tilde{\dot p}_i(t) + e_{di}(t) + v_0   (4.7)

for all t with t^i_k \le t < t^i_{k+1}. These two equations (4.6), (4.7) can be inserted in the event-based symmetric bidirectional closed-loop system (4.4) to derive the closed-loop state error dynamics with event-triggered controller.

For all t > t_0,

\ddot{\tilde p}_i(t) = -f(\tilde p_i(t) - \tilde p_{i-1}(t) + e_i(t) - e_{i-1}(t)) - g(\tilde{\dot p}_i(t) - \tilde{\dot p}_{i-1}(t) + e_{di}(t) - e_{di-1}(t)) - f(\tilde p_i(t) - \tilde p_{i+1}(t) + e_i(t) - e_{i+1}(t)) - g(\tilde{\dot p}_i(t) - \tilde{\dot p}_{i+1}(t) + e_{di}(t) - e_{di+1}(t)), \quad i \in \{1,2,\dots,N-1\},
\ddot{\tilde p}_N(t) = -f(\tilde p_N(t) - \tilde p_{N-1}(t) + e_N(t) - e_{N-1}(t)) - g(\tilde{\dot p}_N(t) - \tilde{\dot p}_{N-1}(t) + e_{dN}(t) - e_{dN-1}(t)).   (4.8)

This differential equation serves as the basis for our further analysis of the undisturbed and disturbed linear controller in the following sections.

4.1. Linear controller

As stated in the beginning of this chapter, the event-triggered symmetric bidirectional control architecture is investigated for linear controllers, i.e.,

f(z) = kz, \qquad g(z) = bz,   (4.9)

where we demand k and b to be positive. Therefore the closed-loop dynamics simplify to

\ddot{\tilde p}_i(t) = -k(\tilde p_i(t) - \tilde p_{i-1}(t) + e_i(t) - e_{i-1}(t)) - b(\tilde{\dot p}_i(t) - \tilde{\dot p}_{i-1}(t) + e_{di}(t) - e_{di-1}(t)) - k(\tilde p_i(t) - \tilde p_{i+1}(t) + e_i(t) - e_{i+1}(t)) - b(\tilde{\dot p}_i(t) - \tilde{\dot p}_{i+1}(t) + e_{di}(t) - e_{di+1}(t)), \quad i \in \{1,2,\dots,N-1\},
\ddot{\tilde p}_N(t) = -k(\tilde p_N(t) - \tilde p_{N-1}(t) + e_N(t) - e_{N-1}(t)) - b(\tilde{\dot p}_N(t) - \tilde{\dot p}_{N-1}(t) + e_{dN}(t) - e_{dN-1}(t)).   (4.10)
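The event-based closed loop can be illustrated with a small simulation sketch of (4.10) in error coordinates. All numbers (N, k, b, c_0, c_1, \alpha, step size, initial errors) are illustrative choices, not values from the thesis; k > b^2 is chosen because it implies the eigenvalue condition used later in Lemma 4.1.1. The fictitious leader has zero error and never triggers.

```python
import numpy as np

N, k, b = 3, 2.0, 1.0                 # hypothetical platoon and gains
c0, c1, alpha = 0.05, 0.2, 0.1        # trigger parameters from (3.6)
dt, T = 1e-3, 30.0

p = np.array([1.0, -0.5, 0.8])        # initial position errors
v = np.zeros(N)                       # initial velocity errors
p_hat, v_hat, t_last = p.copy(), v.copy(), np.zeros(N)
events = 0
x0_norm = np.linalg.norm(np.concatenate([p, v]))

def accel(ph, vh):
    """Controller (4.10) evaluated on held/predicted error states."""
    a = np.empty(N)
    for i in range(N):
        front_p = ph[i - 1] if i > 0 else 0.0   # leader: zero error
        front_v = vh[i - 1] if i > 0 else 0.0
        a[i] = -k * (ph[i] - front_p) - b * (vh[i] - front_v)
        if i < N - 1:                            # all but the last car see the rear neighbor
            a[i] += -k * (ph[i] - ph[i + 1]) - b * (vh[i] - vh[i + 1])
    return a

for step in range(int(T / dt)):
    t = step * dt
    pred = p_hat + (t - t_last) * v_hat          # first-order hold (4.2)
    h = np.hypot(pred - p, v_hat - v)            # h_i(t) from (3.6)
    fired = h > c0 + c1 * np.exp(-alpha * t)     # trigger rule
    p_hat[fired], v_hat[fired], t_last[fired] = p[fired], v[fired], t
    events += int(fired.sum())
    a = accel(p_hat + (t - t_last) * v_hat, v_hat)
    p, v = p + dt * v, v + dt * a                # explicit Euler step

x_norm = np.linalg.norm(np.concatenate([p, v]))
print(events, x0_norm, x_norm)
```

In this sketch the state error norm shrinks toward a neighborhood of the origin while only finitely many transmissions occur, which is exactly the qualitative behavior established in the following sections.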

To simplify the further steps we introduce stack vectors for the state error (x(t)) as well as the transmitted error (e(t)),

x(t) = [\tilde p_1(t), \tilde{\dot p}_1(t), \dots, \tilde p_N(t), \tilde{\dot p}_N(t)]^T \in \mathbb{R}^{2N}, \qquad e(t) = [e_1(t), e_{d1}(t), \dots, e_N(t), e_{dN}(t)]^T \in \mathbb{R}^{2N}.   (4.11)

With these stack vectors the linear closed-loop dynamics of the state error from (4.10) can be given in state-space representation, i.e.,

\dot x(t) = A_{SB}\, x(t) + B_{SB}\, e(t), \qquad x(0) = x_0 \in \mathbb{R}^{2N}.   (4.12)

The matrices A_{SB} and B_{SB} are computed as

A_{SB} = \begin{bmatrix} A_1 + 2A_2 & -A_2 & & 0 \\ -A_2 & A_1 + 2A_2 & \ddots & \\ & \ddots & \ddots & -A_2 \\ 0 & & -A_2 & A_1 + A_2 \end{bmatrix} \in \mathbb{R}^{2N \times 2N}, \qquad B_{SB} = \begin{bmatrix} 2A_2 & -A_2 & & 0 \\ -A_2 & 2A_2 & \ddots & \\ & \ddots & \ddots & -A_2 \\ 0 & & -A_2 & A_2 \end{bmatrix} \in \mathbb{R}^{2N \times 2N},   (4.13)

with the auxiliary matrices

A_1 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0 & 0 \\ -k & -b \end{bmatrix}.   (4.14)

It can be seen that the matrices A_{SB} and B_{SB} are structured. To exploit this structure we recap the grounded Laplacian introduced in the second chapter. For the treated symmetric bidirectional control architecture,

L_g(G_{SB}) = \begin{bmatrix} 2 & -1 & & 0 \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ 0 & & -1 & 1 \end{bmatrix} \in \mathbb{R}^{N \times N}   (4.15)

represents the grounded Laplacian. We use L_g(G_{SB}) to give another representation of A_{SB} and B_{SB},

A_{SB} = I_N \otimes A_1 + L_g(G_{SB}) \otimes A_2 \in \mathbb{R}^{2N \times 2N}, \qquad B_{SB} = L_g(G_{SB}) \otimes A_2 \in \mathbb{R}^{2N \times 2N},   (4.16)

where \otimes denotes the Kronecker product. The definition of the Kronecker product and some basic properties can be found in the literature, for example in [11] and [14].

For the later analysis of the influence of the transmitted error e on the position and velocity errors x, the transient behaviour of the closed loop will be investigated. The following Lemma therefore gives a helpful statement to bound the norm of the state error by exploiting properties of the system matrix A_{SB}. In the Lemma, conditions on the control parameters k and b are introduced so that A_{SB} is guaranteed to be diagonalizable. This is not the only possible control configuration in which event-triggered control is possible; it is just the one investigated here. The results can be adapted for general positive values of k and b, but this is not part of the thesis.

Lemma 4.1.1 Suppose A_{SB} is given as in (4.16) with b > 0 and k > \frac{1}{4}\lambda_{\max}(L_g)\, b^2, where L_g is the grounded Laplacian of the symmetric bidirectional communication graph (4.15) and \lambda_{\max}(\cdot) denotes the eigenvalue with the largest real part. Denote the least stable eigenvalue of A_{SB} as \lambda_1(A_{SB}). Then, for all vectors v \in \mathbb{R}^{2N} and all t \ge 0, the inequality

\| e^{A_{SB} t} v \| \le e^{\mathrm{Re}(\lambda_1(A_{SB})) t}\, c_{V_{SB}}\, \|v\|

holds with c_{V_{SB}} = \|V_{SB}\| \|V_{SB}^{-1}\|, where V_{SB} is the matrix consisting of eigenvectors of A_{SB}.

Proof Firstly, a Theorem from [10] is used to give a formula for the eigenvalues of the grounded Laplacian of the symmetric bidirectional control architecture treated here. Setting the parameters in the formula of that Theorem to D = 1, l = l_1 = 1,2,\dots,N, I_0(x_1) = I_0(x_2) = 0, I_1(x_1) = 1, one concludes

\lambda_l(L_g) = 2\left( 1 - \cos\frac{(2l-1)\pi}{2N+1} \right), \quad l \in \{1,2,\dots,N\}.
(4.17)

Since A_{SB} = I_N \otimes A_1 + L_g \otimes A_2 has exactly the same structure as the matrix A in Theorem 1 from [10], the characteristic polynomial of A_{SB} can be derived as

s_j^2 + \lambda_l(L_g)\, b\, s_j + \lambda_l(L_g)\, k = 0, \qquad j \in \{l_1, l_2\},\ l \in \{1,2,\dots,N\},   (4.18)

i.e., the index set is \{1_1, 1_2, \dots, N_1, N_2\},

and thus the 2N eigenvalues s_j of A_{SB} are

s_{l_{1,2}} = \frac{\lambda_l(L_g)\, b}{2} \left( -1 \pm \sqrt{1 - \frac{4k}{\lambda_l(L_g)\, b^2}} \right).   (4.19)

As stated before, the control parameters fulfill the condition k > \frac{1}{4}\lambda_{\max}(L_g) b^2, and therefore A_{SB} has N different complex conjugated pairs of eigenvalues

s_{l_{1,2}} = -\frac{\lambda_l(L_g)\, b}{2} \pm i I_l,   (4.20)

where I_l is the imaginary part of s_{l_{1,2}}. Thus A_{SB} is diagonalizable with the matrix V_{SB} = [v_{1_1}, v_{1_2}, \dots, v_{N_1}, v_{N_2}], i.e., A_{SB} = V_{SB} D_{SB} V_{SB}^{-1}, where v_{l_i} is the eigenvector corresponding to the eigenvalue s_{l_i}. Hence we can compute

e^{A_{SB} t} v = e^{V_{SB} D_{SB} V_{SB}^{-1} t} v,   (4.21)

and since

e^{V_{SB} D_{SB} V_{SB}^{-1} t} = \sum_{\kappa=0}^{\infty} \frac{(V_{SB} D_{SB} V_{SB}^{-1} t)^\kappa}{\kappa!} = \sum_{\kappa=0}^{\infty} \frac{V_{SB} (D_{SB} t)^\kappa V_{SB}^{-1}}{\kappa!} = V_{SB} \left( \sum_{\kappa=0}^{\infty} \frac{(D_{SB} t)^\kappa}{\kappa!} \right) V_{SB}^{-1} = V_{SB}\, e^{D_{SB} t}\, V_{SB}^{-1},   (4.22)

it follows that

\| e^{A_{SB} t} v \| = \| V_{SB}\, e^{D_{SB} t}\, V_{SB}^{-1} v \| \le \| e^{D_{SB} t} \| \underbrace{\|V_{SB}\| \|V_{SB}^{-1}\|}_{c_{V_{SB}}} \|v\|.   (4.23)

Furthermore, equation (4.17) guarantees that \lambda_l(L_g) > 0 for all l \in \{1,\dots,N\}. Therefore the eigenvalues of A_{SB} from (4.20) fulfill \mathrm{Re}(s_j) < 0 for all j, and

\| e^{A_{SB} t} v \| \le \| e^{D_{SB} t} \|\, c_{V_{SB}}\, \|v\| \le e^{\mathrm{Re}(\lambda_1(A_{SB})) t}\, c_{V_{SB}}\, \|v\|   (4.24)

holds. ∎
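The closed-form spectra used in this proof can be cross-checked numerically. The sketch below (with arbitrary test values for N, k, b) builds A_{SB} exactly as in (4.16), confirms the block-tridiagonal form (4.13), and compares (4.17)-(4.19) against a direct eigenvalue computation:

```python
import numpy as np

N, k, b = 5, 3.0, 1.0                              # arbitrary test values, k > b**2
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [-k, -b]])
Lg = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
Lg[-1, -1] = 1.0                                   # grounded Laplacian (4.15)

A_SB = np.kron(np.eye(N), A1) + np.kron(Lg, A2)    # (4.16)

# block structure of (4.13): first diagonal, off-diagonal, and last block
assert np.allclose(A_SB[:2, :2], A1 + 2 * A2)
assert np.allclose(A_SB[:2, 2:4], -A2)
assert np.allclose(A_SB[-2:, -2:], A1 + A2)

# eigenvalues of Lg: closed form (4.17) vs. numerics
lam = 2 * (1 - np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N + 1)))
assert np.allclose(np.sort(lam), np.linalg.eigvalsh(Lg))

# eigenvalues of A_SB from the quadratic (4.18) vs. direct computation
s_formula = np.concatenate([np.roots([1.0, l * b, l * k]) for l in lam])
s_numeric = np.linalg.eigvals(A_SB)
assert np.allclose(np.sort(s_formula.real), np.sort(s_numeric.real))
assert np.allclose(np.sort(s_formula.imag), np.sort(s_numeric.imag))
```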

Through the formula in the latter proof one can see that the value of \mathrm{Re}(\lambda_1(A_{SB})) can be computed from knowledge of only the control parameters k, b and the number of vehicles N, since

\mathrm{Re}(\lambda_1(A_{SB})) = -\frac{\lambda_{\min}(L_g)\, b}{2} \quad \text{with} \quad \lambda_{\min}(L_g) = 2\left( 1 - \cos\frac{\pi}{2N+1} \right).

Furthermore, [8] proves that \mathrm{Re}(\lambda_1(A_{SB})) \approx -\frac{\pi^2 b}{8N^2} for N \gg 1. This computation is important since \mathrm{Re}(\lambda_1(A_{SB})) will be used in the choice of the parameters of the trigger function. Lemma 4.1.1 comes with the condition k > \frac{1}{4}\lambda_{\max}(L_g) b^2. From (4.17) with l = N we know that \lambda_{\max}(L_g) \le 4. Hence \frac{1}{4}\lambda_{\max}(L_g) b^2 \le b^2, and the implication k > b^2 \Rightarrow k > \frac{1}{4}\lambda_{\max}(L_g) b^2 holds. This means that fulfilling the condition k > b^2 guarantees that the control parameters fulfill the assumptions of the Lemma without explicitly computing \lambda_{\max}(L_g). Since we use the triggering concept from [2], the foregoing Lemma can be seen as a counterpart to Lemma 5.1 in that paper. The Lemma is crucial to state the following main theorem for the behavior of the platoon under linear event-triggered control in symmetric bidirectional architecture.

Theorem 4.1.2 Assume the vehicle platoon of double-integrator agents (3.1) with linear symmetric bidirectional event-based control architecture, resulting in the closed-loop dynamics (4.12), with b > 0 and k > \frac{1}{4}\lambda_{\max}(L_g) b^2, combined with the trigger function (3.6) with c_0, c_1 \ge 0, c_0 + c_1 > 0 and 0 < \alpha < |\mathrm{Re}(\lambda_1(A_{SB}))|. Then x = [\tilde p_1, \tilde{\dot p}_1, \dots, \tilde p_N, \tilde{\dot p}_N]^T converges to a ball around the origin with radius

r_{SB} = \frac{c_{V_{SB}} \sqrt{N}\, \|B_{SB}\|\, c_0}{|\mathrm{Re}(\lambda_1(A_{SB}))|}

and Zeno behaviour is excluded.

Proof The proof follows similar steps as the proof of Theorem 5.2 in [2]. Firstly we can compute an upper bound on the norm of the state error vector,

\|x(t)\| = \left\| e^{A_{SB} t} x(0) + \int_0^t e^{A_{SB}(t-s)} B_{SB}\, e(s)\, ds \right\| \le \| e^{A_{SB} t} x(0) \| + \|B_{SB}\| \int_0^t \| e^{A_{SB}(t-s)} e(s) \|\, ds.
(4.25)

Since the assumptions of the Theorem match the assumptions of Lemma 4.1.1, it holds that

\| e^{A_{SB} t} x(0) \| \le e^{\mathrm{Re}(\lambda_1(A_{SB})) t}\, c_{V_{SB}}\, \|x(0)\|, \qquad \| e^{A_{SB}(t-s)} e(s) \| \le e^{\mathrm{Re}(\lambda_1(A_{SB}))(t-s)}\, c_{V_{SB}}\, \|e(s)\|,   (4.26)

and therefore it follows that

\|x(t)\| \le e^{\mathrm{Re}(\lambda_1(A_{SB})) t}\, c_{V_{SB}}\, \|x(0)\| + \|B_{SB}\| \int_0^t e^{\mathrm{Re}(\lambda_1(A_{SB}))(t-s)}\, c_{V_{SB}}\, \|e(s)\|\, ds.   (4.27)

From (3.6) and the condition f_i(\cdot) \le 0 between events we derive

\|e(s)\| \le \sqrt{N} \max_i h_i(s) \le \sqrt{N} \big( c_0 + c_1 e^{-\alpha s} \big).   (4.28)

Inserting (4.28) in (4.27) leads to

\|x(t)\| \le e^{\mathrm{Re}(\lambda_1(A_{SB})) t} c_{V_{SB}} \|x(0)\| + \|B_{SB}\| c_{V_{SB}} \sqrt{N} \int_0^t e^{\mathrm{Re}(\lambda_1(A_{SB}))(t-s)} \big( c_0 + c_1 e^{-\alpha s} \big) ds
= e^{\mathrm{Re}(\lambda_1(A_{SB})) t} c_{V_{SB}} \|x(0)\| + \|B_{SB}\| c_{V_{SB}} \sqrt{N} \left( \frac{c_0}{|\mathrm{Re}(\lambda_1(A_{SB}))|} + \frac{c_1}{|\mathrm{Re}(\lambda_1(A_{SB})) + \alpha|} e^{-\alpha t} \underbrace{- \frac{c_0}{|\mathrm{Re}(\lambda_1(A_{SB}))|} e^{\mathrm{Re}(\lambda_1(A_{SB})) t}}_{<0} \underbrace{- \frac{c_1}{|\mathrm{Re}(\lambda_1(A_{SB})) + \alpha|} e^{\mathrm{Re}(\lambda_1(A_{SB})) t}}_{<0} \right).   (4.29)

Therefore, for all t \ge 0,

\|x(t)\| \le e^{\mathrm{Re}(\lambda_1(A_{SB})) t} c_{V_{SB}} \|x(0)\| + \|B_{SB}\| c_{V_{SB}} \sqrt{N} \frac{c_1}{|\mathrm{Re}(\lambda_1(A_{SB})) + \alpha|} e^{-\alpha t} + \|B_{SB}\| c_{V_{SB}} \sqrt{N} \frac{c_0}{|\mathrm{Re}(\lambda_1(A_{SB}))|}.   (4.30)

Under the given assumptions, i.e., 0 < \alpha < |\mathrm{Re}(\lambda_1(A_{SB}))|, we see that

\lim_{t \to \infty} \|x(t)\| \le \|B_{SB}\| c_{V_{SB}} \sqrt{N} \frac{c_0}{|\mathrm{Re}(\lambda_1(A_{SB}))|} = r_{SB}   (4.31)

and therefore x converges to a ball around the origin with radius r_{SB}.

The next step is to exclude Zeno behaviour. The goal is therefore to give a lower bound on the time \tau_i = t^i_{k+1} - t^i_k between two events of every agent i \in \{1,\dots,N\}. At time t^i_k the exact state information of agent i is transmitted to all of its neighbors. Thus it holds that

h_i(t^i_k) = \left\| \begin{bmatrix} e_i(t^i_k) \\ e_{di}(t^i_k) \end{bmatrix} \right\| = 0   (4.32)

and therefore, for t^i_k \le t < t^i_{k+1}, we have

h_i(t) \le \int_{t^i_k}^t \left\| \begin{bmatrix} \dot e_i(s) \\ \dot e_{di}(s) \end{bmatrix} \right\| ds \overset{(3.5)}{=} \int_{t^i_k}^t \left\| \begin{bmatrix} \hat{\dot p}_i - \dot p_i(s) \\ -\ddot p_i(s) \end{bmatrix} \right\| ds = \int_{t^i_k}^t \left\| \begin{bmatrix} e_{di}(s) \\ -u_i(s) \end{bmatrix} \right\| ds.   (4.33)

The transmitted error is bounded by (3.6), so it is possible to compute

\dot h_i(t) \le \left\| \begin{bmatrix} e_{di}(t) \\ -u_i(t) \end{bmatrix} \right\| \le |e_{di}(t)| + |u_i(t)| \le h_i(t) + |u_i(t)| \overset{(3.6)}{\le} c_0 + c_1 e^{-\alpha t} + |u_i(t)|,   (4.34)

where

|u_i(t)| = \big| -k(-\tilde p_{i-1} + 2\tilde p_i - \tilde p_{i+1} - e_{i-1} + 2e_i - e_{i+1}) - b(-\tilde{\dot p}_{i-1} + 2\tilde{\dot p}_i - \tilde{\dot p}_{i+1} - e_{di-1} + 2e_{di} - e_{di+1}) \big| \le k\big( 4\|x(t)\| + 4\max_j h_j(t) \big) + b\big( 4\|x(t)\| + 4\max_j h_j(t) \big) \le 4(k+b)\big( \|x(t)\| + c_0 + c_1 e^{-\alpha t} \big).   (4.35)

This bound on |u_i(t)| holds for all i \in \{1,2,\dots,N\}, although u_N(t) is written down in a different way. This fact is easy to understand by assuming that agent N runs the same controller where the state error and the transmitted error of the following car are equivalent to 0. Plugging (4.29) in (4.35) leads to

|u_i(t)| \le 4(k+b)\big( r_{SB} + k_2 e^{-\alpha t} + k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t} + c_0 + c_1 e^{-\alpha t} \big)   (4.36)

with

k_2 = \|B_{SB}\|\, c_{V_{SB}} \sqrt{N} \frac{c_1}{|\mathrm{Re}(\lambda_1(A_{SB})) + \alpha|}, \qquad k_3 = c_{V_{SB}}\, \|x(0)\|.   (4.37)

Therefore, by plugging (4.36) in (4.34), we get

\dot h_i(t) \le \big( c_0 + c_1 e^{-\alpha t} \big)\big( 4(k+b) + 1 \big) + \big( r_{SB} + k_2 e^{-\alpha t} + k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t} \big)\, 4(k+b).   (4.38)

In the following we investigate two different cases, depending on c_0.

Case 1: c_0 \ne 0. The right-hand side of (4.38) is decreasing in t, so for all t \ge 0, and thus also for t^i_k \le t < t^i_{k+1}, it can be seen that

\dot h_i(t) \le (c_0 + c_1)\big( 4(k+b) + 1 \big) + (r_{SB} + k_2 + k_3)\, 4(k+b) =: C_1.   (4.39)

Therefore we know

h_i(t) \le C_1 (t - t^i_k)   (4.40)

and the next event will not be triggered before the time t that fulfills

C_1 (t - t^i_k) = C_1 \tau_{i,\min} = c_0 \le c_0 + c_1 e^{-\alpha t}.   (4.41)

Hence the lower bound

\tau_{i,\min} = c_0 / C_1   (4.42)

on the time between two events of agent i holds for all i \in \{1,2,\dots,N\}, and Zeno behaviour is excluded in this case.

Case 2: c_0 = 0. With c_0 = 0 in (4.38), for all t^i_k \le t < t^i_{k+1} it holds that

\dot h_i(t) \le c_1 e^{-\alpha t}\big( 4(k+b) + 1 \big) + \big( k_2 e^{-\alpha t} + k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t} \big)\, 4(k+b) \le \underbrace{c_1 e^{-\alpha t^i_k}\big( 4(k+b) + 1 \big) + \big( k_2 e^{-\alpha t^i_k} + k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t^i_k} \big)\, 4(k+b)}_{=: C_1(t^i_k)}.   (4.43)

In this case the next event will not be triggered before the time t that fulfills

C_1(t^i_k)(t - t^i_k) = C_1(t^i_k)\, \tau_{i,\min} = c_1 e^{-\alpha t}.   (4.44)

This leads to the equation C_1(t^i_k)\, \tau_{i,\min} = c_1 e^{-\alpha t}, i.e.,

\Big( c_1 e^{-\alpha t^i_k}\big( 4(k+b) + 1 \big) + \big( k_2 e^{-\alpha t^i_k} + k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t^i_k} \big)\, 4(k+b) \Big)\, \tau_{i,\min} = c_1 e^{-\alpha t},   (4.45)

and multiplying both sides with e^{\alpha t^i_k} delivers

\Big( c_1 \big( 4(k+b) + 1 \big) + \big( k_2 + k_3 e^{(\alpha + \mathrm{Re}(\lambda_1(A_{SB}))) t^i_k} \big)\, 4(k+b) \Big)\, \tau_{i,\min} = c_1 e^{-\alpha \tau_{i,\min}}.   (4.46)

The left side of the equation is a linear function in \tau_{i,\min} with finite gain, since \alpha + \mathrm{Re}(\lambda_1(A_{SB})) < 0 keeps the bracket bounded for all t^i_k, while the right side decreases exponentially from c_1. Therefore there exists a lower bound \tau_{i,\min} > 0 which solves this equation, and Zeno behaviour is excluded for this case if c_1 > 0 and 0 < \alpha < |\mathrm{Re}(\lambda_1(A_{SB}))|. Combining Case 1 and Case 2 it can be seen that Zeno behaviour is excluded under the assumptions of the theorem. ∎

Since the case c_0 = 0 is allowed, the radius of the ball around the origin can be set to r_{SB} = 0, and thus the closed-loop dynamics become asymptotically stable although the communication is done in an event-based framework.

Remark By exploiting the structure of B_{SB},

B_{SB} = \begin{bmatrix} 2 & & & 0 \\ & \ddots & & \\ & & 2 & \\ 0 & & & 1 \end{bmatrix} \otimes A_2 - \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix} \otimes A_2 - \begin{bmatrix} 0 & & & \\ 1 & \ddots & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{bmatrix} \otimes A_2,   (4.47)

and the property of the spectral norm of Kronecker products,

\| A \otimes B \| = \|A\| \|B\|,   (4.48)

it is possible to further bound \|B_{SB}\|:

\|B_{SB}\| \le 2\|A_2\| + \|A_2\| + \|A_2\| = 4 \left\| \begin{bmatrix} 0 & 0 \\ -k & -b \end{bmatrix} \right\| \le 4\sqrt{2}\, \max\{k, b\}.   (4.49)

Furthermore, for N \gg 1 it is known, as stated above, that

\mathrm{Re}(\lambda_1(A_{SB})) \approx -\frac{\pi^2 b}{8N^2}.   (4.50)

Therefore, for N \gg 1, the radius of the ball around the origin is

r_{SB} \approx \frac{32\sqrt{2}\, N^2 \sqrt{N}\, \max\{k,b\}\, c_{V_{SB}}\, c_0}{\pi^2 b}.   (4.51)

4.2. Linear controller under disturbances

Up to now the system was assumed to be completely undisturbed and the only error driving the system was the transmitted error. In this section a disturbance w_i(t) acting on the acceleration of agent i is introduced, i.e.,

\ddot p_i(t) = u_i(t) + w_i(t).   (4.52)

Therefore each agent's dynamics are extended from (3.1) to

\begin{bmatrix} \dot p_i(t) \\ \ddot p_i(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} p_i(t) \\ \dot p_i(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_i + \begin{bmatrix} 0 \\ 1 \end{bmatrix} w_i(t), \qquad \begin{bmatrix} p_i(0) \\ \dot p_i(0) \end{bmatrix} = \begin{bmatrix} p_{i,0} \\ v_{i,0} \end{bmatrix} \in \mathbb{R}^2.   (4.53)

Again the linear symmetric bidirectional controller from the latter section is applied. Hence it is easy to compute a closed-loop state-space representation for the state error dynamics,

\dot x(t) = A_{SB}\, x(t) + B_{SB}\, e(t) + B_w\, w(t), \qquad x(0) = x_0 \in \mathbb{R}^{2N},   (4.54)

with x(t), e(t), A_{SB}, B_{SB} from (4.11), (4.16), w = [w_1, \dots, w_N]^T and

B_w = I_N \otimes \begin{bmatrix} 0 \\ 1 \end{bmatrix} \in \mathbb{R}^{2N \times N}.   (4.55)

The goal is still to guarantee convergence to a ball around the origin and to exclude Zeno behaviour although disturbances act on the system. For the further analysis the signal w_i(t) is assumed to be bounded, and we define \bar w := \max_i \|w_i(t)\|_\infty = \max_i \sup_t |w_i(t)|. Now the theorem guaranteeing these properties is stated.

Theorem 4.2.1 Assume the vehicle platoon of disturbed double-integrator agents (4.53) with linear symmetric bidirectional event-based control architecture, resulting in the closed-loop dynamics (4.54), with b > 0 and k > \frac{1}{4}\lambda_{\max}(L_g) b^2, combined with the trigger function (3.6) with c_0 > 0, c_1 \ge 0 and 0 < \alpha < |\mathrm{Re}(\lambda_1(A_{SB}))|. Suppose the disturbance is bounded by \bar w := \max_i \|w_i(t)\|_\infty. Then x = [\tilde p_1, \tilde{\dot p}_1, \dots, \tilde p_N, \tilde{\dot p}_N]^T converges to a ball around the origin with radius

r_{SB,d} = \frac{c_{V_{SB}} \sqrt{N}}{|\mathrm{Re}(\lambda_1(A_{SB}))|} \big( \|B_{SB}\|\, c_0 + \bar w \big)

and Zeno behaviour is excluded.

Proof The goal is once again to bound \|x(t)\|:

\|x(t)\| = \left\| e^{A_{SB} t} x(0) + \int_0^t e^{A_{SB}(t-s)} B_{SB}\, e(s)\, ds + \int_0^t e^{A_{SB}(t-s)} B_w\, w(s)\, ds \right\| \le \left\| e^{A_{SB} t} x(0) \right\| + \int_0^t \left\| e^{A_{SB}(t-s)} B_{SB}\, e(s) \right\| ds + \underbrace{\int_0^t \left\| e^{A_{SB}(t-s)} B_w\, w(s) \right\| ds}_{(*)}.   (4.56)

The first two summands are already bounded in the proof of Theorem 4.1.2; therefore the focus lies on the computation of an upper bound on (*). Lemma 4.1.1 is applied again to compute

(*) \le \|B_w\| \int_0^t e^{\mathrm{Re}(\lambda_1(A_{SB}))(t-s)}\, c_{V_{SB}}\, \|w(s)\|\, ds.   (4.57)

From (4.55) and the boundedness assumption we derive \|B_w\| = 1 and \|w(s)\| \le \sqrt{N}\, \bar w. Hence,

(*) \le \sqrt{N}\, \bar w\, c_{V_{SB}} \int_0^t e^{\mathrm{Re}(\lambda_1(A_{SB}))(t-s)}\, ds = \frac{\sqrt{N}\, \bar w\, c_{V_{SB}}}{|\mathrm{Re}(\lambda_1(A_{SB}))|} \big( 1 - e^{\mathrm{Re}(\lambda_1(A_{SB})) t} \big) \le \frac{\sqrt{N}\, \bar w\, c_{V_{SB}}}{|\mathrm{Re}(\lambda_1(A_{SB}))|}.   (4.58)

Combining the bound on (*) with the results from the proof of Theorem 4.1.2, the conclusion

\|x(t)\| \le c_{V_{SB}} \|x(0)\|\, e^{\mathrm{Re}(\lambda_1(A_{SB})) t} + \|B_{SB}\|\, c_{V_{SB}} \sqrt{N} \frac{c_1}{|\mathrm{Re}(\lambda_1(A_{SB})) + \alpha|} e^{-\alpha t} + \frac{\sqrt{N}\, c_{V_{SB}}}{|\mathrm{Re}(\lambda_1(A_{SB}))|} \big( \|B_{SB}\|\, c_0 + \bar w \big) = k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t} + k_2 e^{-\alpha t} + r_{SB,d}   (4.59)

holds. Therefore it is shown that the state vector converges to a ball around the origin with radius r_{SB,d} = \frac{c_{V_{SB}} \sqrt{N}}{|\mathrm{Re}(\lambda_1(A_{SB}))|} ( \|B_{SB}\| c_0 + \bar w ).

To exclude Zeno behaviour we observe again how h_i(t) = \| [e_i(t), e_{di}(t)]^T \| evolves between the two trigger times t^i_k and t^i_{k+1}. Using h_i(t^i_k) = 0 we know, for t^i_k \le t < t^i_{k+1},

h_i(t) \le \int_{t^i_k}^t \left\| \begin{bmatrix} \dot e_i(s) \\ \dot e_{di}(s) \end{bmatrix} \right\| ds = \int_{t^i_k}^t \left\| \begin{bmatrix} \hat{\dot p}_i - \dot p_i(s) \\ -\ddot p_i(s) \end{bmatrix} \right\| ds = \int_{t^i_k}^t \left\| \begin{bmatrix} e_{di}(s) \\ -u_i(s) - w_i(s) \end{bmatrix} \right\| ds.   (4.60)

Therefore \dot h_i(t) can be upper bounded by

\dot h_i(t) \le h_i(t) + |u_i(t)| + |w_i(t)|.   (4.61)

Since the controller is still the linear symmetric bidirectional controller, the result from the last proof,

|u_i(t)| \le 4(k+b)\big( \|x(t)\| + \max_j h_j(t) \big),   (4.62)

still holds, and by plugging (4.62) in (4.61) we can compute

\dot h_i(t) \le \max_j h_j(t)\, \big( 4(k+b) + 1 \big) + \bar w + 4(k+b)\, \|x(t)\|.   (4.63)

Applying the result from equation (4.59) and the trigger rule from (3.6), the bound

\dot h_i(t) \le \big( c_0 + c_1 e^{-\alpha t} \big)\big( 4(k+b) + 1 \big) + \bar w + 4(k+b) \big( k_3 e^{\mathrm{Re}(\lambda_1(A_{SB})) t} + k_2 e^{-\alpha t} + r_{SB,d} \big)   (4.64)

can be derived. Hence, for all t \ge 0, the upper bound

\dot h_i(t) \le \underbrace{(c_0 + c_1)\big( 4(k+b) + 1 \big) + \bar w + 4(k+b)\big( k_3 + k_2 + r_{SB,d} \big)}_{=: C_d}   (4.65)

with C_d being constant holds, and we know

h_i(t) \le C_d\, (t - t^i_k) \qquad \text{for all } t:\ t^i_k \le t < t^i_{k+1},   (4.66)

and the next event will not be triggered before t = t^i_k + \tau_{i,\min} fulfilling

C_d\, \tau_{i,\min} = c_0 \le c_0 + c_1 e^{-\alpha t}.   (4.67)

Thus, Zeno behaviour is excluded by computing the lower bound

\tau_{i,\min} = \frac{c_0}{C_d} > 0   (4.68)

for the inter-execution times of all agents i \in \{1,2,\dots,N\}. ∎

If one compares this result to the one in the undisturbed case, one notices that it is no longer allowed to choose c_0 = 0. The reason for the harder constraint c_0 > 0 is that the disturbance can drive the system independent of time. If we chose c_0 = 0, there would be some time \bar t such that for all t > \bar t almost no transmitted error is allowed, while the system still moves due to the disturbance. This would lead to Zeno behaviour. The case with c_0 = 0 and c_1 > 0 could be applied when one knows that the disturbance decreases exponentially with time.

Corollary 4.2.2 To keep the guaranteed lower bound on the inter-execution times \tau_{i,\min} at the same value as in the undisturbed case with c_0 > 0, one has to choose c_{0,d} = c_0 + \beta_d\, \tau_{i,\min} with

\beta_d = \bar w \left( 1 + 4(k+b)\, \frac{\sqrt{N}\, c_{V_{SB}}}{|\mathrm{Re}(\lambda_1(A_{SB}))|} \right).

Proof From (4.39) and (4.65) we know C_d = C_1 + \beta_d. The goal is to keep \tau_{i,\min} constant. Therefore one has to solve the equation

\frac{c_{0,d}}{C_1 + \beta_d} = \frac{c_0}{C_1},   (4.69)

which leads to

c_{0,d} = c_0 + \beta_d\, \frac{c_0}{C_1} = c_0 + \beta_d\, \tau_{i,\min}.   (4.70) ∎
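The adjustment of Corollary 4.2.2 is a one-line computation once C_1 and the system constants are available. The following sketch uses hypothetical numbers throughout (C_1, c_{V_{SB}}, the gains, and the disturbance bound are illustrative assumptions, not values from the thesis) and verifies that the enlarged threshold restores the undisturbed inter-event-time guarantee:

```python
import numpy as np

k, b, N = 2.0, 1.0, 5          # hypothetical gains and platoon size
c0, C1 = 0.05, 40.0            # C1 from (4.39), assumed precomputed
w_bar = 0.1                    # sup-norm bound on the disturbances
c_V = 3.0                      # condition number ||V|| * ||V^-1||, assumed known

lam_min = 2 * (1 - np.cos(np.pi / (2 * N + 1)))
re_lam1 = -lam_min * b / 2                          # Re(lambda_1(A_SB))

tau_min = c0 / C1                                   # undisturbed bound (4.42)
beta_d = w_bar * (1 + 4 * (k + b) * np.sqrt(N) * c_V / abs(re_lam1))
c0_d = c0 + beta_d * tau_min                        # Corollary 4.2.2
C_d = C1 + beta_d                                   # disturbed slope (4.65)
assert np.isclose(c0_d / C_d, tau_min)              # same guaranteed gap
```

The assertion checks exactly the computation in the proof: dividing the enlarged threshold c_{0,d} by the enlarged slope C_d returns the original \tau_{i,\min}.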

5. Predecessor-following control architecture

In this chapter event-triggered control in a predecessor-following (PF) control architecture is investigated. To start the analysis, the closed-loop dynamics for the state error are derived for the platoon being controlled by an event-based controller with predecessor-following architecture. Then the closed-loop behavior is analyzed for the special case of a linear controller, with and without disturbances. This analysis is similar to the one given in the foregoing chapter. In [8] the investigation leads to the recommendation to use a nonlinear controller when only information from the predecessor is available. Therefore we also give an analysis for a nonlinear event-triggered predecessor-following controller in this chapter.

To compute the closed-loop dynamics for a general predecessor-following control architecture, equations (3.1) and (3.4) are combined to

\ddot p_i(t) = -f(p_i(t) - p_{i-1}(t) + \Delta_{(i-1,i)}) - g(\dot p_i(t) - \dot p_{i-1}(t)), \qquad i \in \{1,\dots,N\},\ t > t_0.   (5.1)

The event-triggered version of the controller uses a zero-order hold to estimate the velocity and a first-order hold for the position information, which results in the event-based predecessor-following closed loop

\ddot p_i(t) = -f\big( \hat p_i + (t - t^i_k)\hat{\dot p}_i - \hat p_{i-1} - (t - t^{i-1}_{\underline k})\hat{\dot p}_{i-1} + \Delta_{(i-1,i)} \big) - g(\hat{\dot p}_i - \hat{\dot p}_{i-1})   (5.2)

with i \in \{1,\dots,N\}, t such that t^i_k \le t < \min\{t^i_{k+1}, t^{i-1}_{\underline k + 1}\}, and

\underline k := \arg\min_{\kappa \in \mathbb{N}:\ t^{i-1}_\kappa \le t^i_k} \big( t^i_k - t^{i-1}_\kappa \big).   (5.3)

Introducing the same variables for the state error and doing the computations as in equations (4.5), (4.6) and (4.7) in Chapter 4, the closed-loop dynamics of the state error with event-based predecessor-following control architecture

for all t > t_0 can be computed as

\ddot{\tilde p}_i(t) = -f(\tilde p_i(t) - \tilde p_{i-1}(t) + e_i(t) - e_{i-1}(t)) - g\big( \tilde{\dot p}_i(t) - \tilde{\dot p}_{i-1}(t) + e_{di}(t) - e_{di-1}(t) \big), \qquad i \in \{1,\dots,N\}.   (5.4)

5.1. Linear controller

Firstly we analyze a linear controller with f(z) = kz and g(z) = bz, where k and b are positive constants. The resulting closed-loop dynamics

\ddot{\tilde p}_i(t) = -k(\tilde p_i(t) - \tilde p_{i-1}(t) + e_i(t) - e_{i-1}(t)) - b\big( \tilde{\dot p}_i(t) - \tilde{\dot p}_{i-1}(t) + e_{di}(t) - e_{di-1}(t) \big)   (5.5)

can now be written in state-space representation,

\dot x(t) = A_{PF}\, x(t) + B_{PF}\, e(t), \qquad x(0) = x_0 \in \mathbb{R}^{2N},   (5.6)

with

x(t) = [\tilde p_1(t), \tilde{\dot p}_1(t), \dots, \tilde p_N(t), \tilde{\dot p}_N(t)]^T \in \mathbb{R}^{2N}, \qquad e(t) = [e_1(t), e_{d1}(t), \dots, e_N(t), e_{dN}(t)]^T \in \mathbb{R}^{2N},   (5.7)

as well as the matrices

A_{PF} = I_N \otimes A_1 + \begin{bmatrix} 1 & & & 0 \\ -1 & 1 & & \\ & \ddots & \ddots & \\ 0 & & -1 & 1 \end{bmatrix} \otimes A_2 \in \mathbb{R}^{2N \times 2N}   (5.8)

and

B_{PF} = \begin{bmatrix} 1 & & & 0 \\ -1 & 1 & & \\ & \ddots & \ddots & \\ 0 & & -1 & 1 \end{bmatrix} \otimes A_2 \in \mathbb{R}^{2N \times 2N},   (5.9)

where the auxiliary matrices

A_1 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0 & 0 \\ -k & -b \end{bmatrix}   (5.10)

are defined as in (4.14). The next Lemma gives a similar tool for the predecessor-following scheme as Lemma 4.1.1 does for the symmetric bidirectional architecture. To state the Lemma, results about the spectrum of A_{PF} from [8] are used.

Lemma 5.1.1 Assume A_{PF} is given as in (5.8) with b, k > 0. Then the least stable eigenvalue of A_{PF} is \lambda_1(A_{PF}) = \frac{-b + \sqrt{b^2 - 4k}}{2}. Moreover, for all v \in \mathbb{R}^{2N} and all t \ge 0, the inequality

\| e^{A_{PF} t} v \| \le c_{V_{PF}}\, \beta(\mu)\, e^{-\mu t}\, \|v\|

holds with 0 < \mu < |\mathrm{Re}(\lambda_1(A_{PF}))|,

\beta(\mu) = \max_{t \ge 0} \left\{ \sum_{\kappa=0}^{N-1} \frac{t^\kappa}{\kappa!}\, e^{(\mu + \mathrm{Re}(\lambda_1(A_{PF}))) t} \right\}

and c_{V_{PF}} = \|V_{PF}\| \|V_{PF}^{-1}\|, where V_{PF} is a non-singular matrix transforming A_{PF} to a Jordan matrix.

Proof The value of the least stable eigenvalue of A_{PF} is stated in Theorem 1 from [8]. Since A_{PF} is a square matrix, there exists a non-singular matrix V_{PF} so that V_{PF}^{-1} A_{PF} V_{PF} = J_{PF}, where J_{PF} is a Jordan matrix. From [8] we know that \lambda_1(A_{PF}) occurs with multiplicity N. Therefore the biggest Jordan block is the one corresponding to \lambda_1(A_{PF}), which will be denoted by J^{(1)}_{PF}. Thus it holds that

\| e^{J_{PF} t} \| \le \| e^{J^{(1)}_{PF} t} \|.   (5.11)

Due to [15], e^{J^{(1)}_{PF} t} can be computed as

e^{J^{(1)}_{PF} t} = e^{\lambda_1(A_{PF}) t} \begin{bmatrix} 1 & t & \cdots & \frac{t^{N-1}}{(N-1)!} \\ & \ddots & \ddots & \vdots \\ & & \ddots & t \\ 0 & & & 1 \end{bmatrix}   (5.12)

and therefore

\| e^{J_{PF} t} \| \le \| e^{J^{(1)}_{PF} t} \| \le e^{\mathrm{Re}(\lambda_1(A_{PF})) t} \sum_{\kappa=0}^{N-1} \frac{t^\kappa}{\kappa!}.   (5.13)
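The polynomial-times-exponential envelope in (5.13) is what distinguishes the PF architecture from the diagonalizable SB case. A small numerical illustration (all parameters are arbitrary test choices with b^2 < 4k) confirms that every eigenvalue of A_{PF} shares the real part -b/2, and that trajectories nevertheless decay over long horizons:

```python
import numpy as np

N, k, b = 4, 2.0, 1.0                             # arbitrary test values, b**2 < 4k
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [-k, -b]])
L_PF = np.eye(N) - np.eye(N, k=-1)                # lower-bidiagonal factor from (5.8)
A_PF = np.kron(np.eye(N), A1) + np.kron(L_PF, A2)

# every eigenvalue solves s^2 + b*s + k = 0, so Re(lambda) = -b/2 throughout;
# loose tolerance because repeated (defective) eigenvalues are ill-conditioned
assert np.allclose(np.linalg.eigvals(A_PF).real, -b / 2, atol=5e-2)

# integrate x' = A_PF x: despite the Jordan blocks, the state decays like
# poly(t) * exp(-b*t/2) and is negligible after a long horizon
dt, T = 1e-3, 60.0
x = np.ones(2 * N)
x0_norm = np.linalg.norm(x)
for _ in range(int(T / dt)):
    x = x + dt * (A_PF @ x)                       # explicit Euler step
assert np.linalg.norm(x) < 1e-3 * x0_norm
```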