Decentralized Event-Triggering for Control of Nonlinear Systems

Pavankumar Tallapragada and Nikhil Chopra

This work was partially supported by the Office of Naval Research under grant number N436 and by the National Science Foundation through grant number 23227. Pavankumar Tallapragada is with the Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA (e-mail: pavant@umd.edu). N. Chopra is with the Department of Mechanical Engineering and The Institute for Systems Research, University of Maryland, College Park, MD 20742, USA (e-mail: nchopra@umd.edu).

Abstract: This paper considers nonlinear systems with full state feedback, a central controller and distributed sensors that are not co-located with the central controller. We present a methodology for designing decentralized asynchronous event-triggers, which utilize only locally available information, for determining the time instants of transmission from the sensors to the central controller. The proposed design guarantees a positive lower bound for the inter-transmission times of each sensor, while ensuring asymptotic stability of the origin of the system with an arbitrary, but a priori fixed, compact region of attraction. In the special case of Linear Time Invariant (LTI) systems, global asymptotic stability is guaranteed and scale invariance of inter-transmission times is preserved. A modified design method is also proposed for nonlinear systems, with the addition of event-triggered communication from the controller to the sensors, which promises to significantly increase the average sensor inter-transmission times compared to the case where the controller does not transmit data to the sensors. The proposed designs are illustrated through simulations of a linear and a nonlinear example.

I. INTRODUCTION

State based aperiodic event-triggering is receiving increased attention (a representative list of the recent literature includes [1]-[8]) as an alternative to traditional time-triggering (for example, periodic triggering) in sampled-data control systems. In event based control systems, a state or data dependent event-triggering condition implicitly determines the time instants at which the control is updated or a sensor transmits data to a controller. Such updates or transmissions are in general aperiodic and depend on the system state. This paradigm is particularly appealing in control systems with limited computational and/or communication resources.

Much of the literature on event-triggered control utilizes full state information in the triggering conditions. However, in two very important classes of problems full state information is not available to the event-triggers: systems with decentralized sensing and systems with dynamic output feedback control. In the latter case, full state information is not available even when the sensors and the controller are centralized (co-located). In systems with decentralized sensing, each individual sensor has to base its decision to transmit data to a central controller only on locally available information. These two classes of problems have received attention in the community only recently - [9]-[16] (decentralized sensing) and [17]-[23] (output feedback control). This paper is an important addition to the limited literature on decentralized event-triggering in control systems with distributed sensors. The basic contribution of this paper is a methodology for designing implicitly verified decentralized event-triggers for control of nonlinear systems.
The system architecture we consider is one with full state feedback, but with the sensors distributed and not co-located with a central controller. The proposed design methodology provides event-triggers that determine when each sensor transmits data to the central controller. The event-triggers utilize only locally available information, which makes the transmissions from the sensors asynchronous. The proposed design guarantees asymptotic stability of the origin of the system with an arbitrary, but a priori fixed, compact region of attraction. It also guarantees a positive lower bound for the inter-transmission times of each sensor individually. In the special case of Linear Time Invariant (LTI) systems, global asymptotic stability is guaranteed and scale invariance of inter-transmission times is preserved. For nonlinear systems, we also propose a variant with the addition of event-triggered communication from the central controller to the sensors. This variant shows promise of significantly increasing the average sensor inter-transmission times compared to the case when the controller does not transmit data to the sensors. Although our results fall short of mathematically proving an increase in the average sensor inter-transmission times, some suggestive arguments are provided. Nevertheless, results are provided that guarantee, or suggest, design choices that increase the lower bounds on inter-transmission times.

In the literature, distributed event-triggered control was studied under the assumption of weakly coupled subsystems in [3], [4] (finite-gain L_p stable systems) and in [5] (ISS and iISS systems), which allowed the design of event-triggers depending only on local information. While we do not make an assumption on the coupling strength, we do assume (in the nonlinear case) knowledge of a global ISS Lyapunov function - an assumption that is sometimes seen as restrictive. However, note that we make the assumption that the ISS Lyapunov function is defined globally purely for ease of exposition. Since our results for nonlinear systems guarantee only semi-global asymptotic stability, it is sufficient to have a semi-global ISS Lyapunov function, a much less severe requirement and one that may possibly be derived from Lyapunov functions that guarantee asymptotic stability (see [24] for example). Further, with the event-triggers of [3], [4], no positive lower bound for inter-transmission times exists in the global or even the semi-global sense (see Remark 1 in the sequel).

Our proposed scheme is more closely related, at least in its aims and ideas, to [] [2] and, to a lesser extent, to [9]. It shares a similarity with [9] in the use of timers (or a dwell time) in the event-triggers. One may view such inclusion of an implicitly verified dwell time as a combination of time-triggering and event-triggering. In the scheme proposed in [9], decentralized asynchronous event-triggering at the sensors is utilized only to request the central controller to take a synchronous snapshot of all the sensors' data. In order to reject closely timed requests by the sensors, the controller utilizes a timer whose expiration time threshold is determined from the properties of the corresponding centralized event-triggered control system. Our proposed event-triggers also include similar timers - however, their use, location and the computation of the expiration time thresholds are qualitatively different. One could say that the proposed scheme complements [9] by using timers in each of the event-triggers at the sensors, which as a result allows the controller to depend only on asynchronously transmitted data. These differences will become apparent in the course of the paper, and in particular in Remark 2. In [] [2], an interesting scheme for decentralized and asynchronous event-triggering with fixed and time-varying thresholds was proposed. However, that scheme guarantees only semi-global practical stability, even for linear systems, if the sensors do not listen to the central controller. To the best of our knowledge, our work is unique in that our decentralized event-triggering scheme requires only unidirectional communication from the sensors to the controller and still guarantees semi-global asymptotic stability (global for LTI systems). As a result, no receivers are required at the sensor nodes - a feature that is very useful in low energy applications. In the dynamic output feedback control literature, [7], [2] consider asynchronous and decentralized event-triggering for Linear Time Invariant (LTI) systems. Again, the method in [7] can guarantee only semi-global practical stability. In [8], [2], we have proposed a method that guarantees global asymptotic stability and positive minimum inter-transmission times. In fact, the basic principle of [8], [2] is also used in this paper, with additional considerations for nonlinear systems. The portion of this paper on linear systems was presented in [6].

The rest of the paper is organized as follows. Section II describes and formally sets up the problem under consideration. In Section III, the design of asynchronous decentralized event-triggers for nonlinear systems is presented - first without, and then with, feedback from the central controller. Section IV presents the special case of Linear Time Invariant (LTI) systems. The proposed design methodology is illustrated through simulations in Section V and finally Section VI provides some concluding remarks.

II. PROBLEM SETUP

Consider a nonlinear control system

ẋ = f(x, u),  x ∈ ℝ^n, u ∈ ℝ^m,    (1)

with the feedback control law

u = k(x + x_e),    (2)

where x_e is the error in the measurement of x. In general, the measurement error can be due to many factors, such as sensor noise and quantization. However, we consider measurement error that is purely a result of sampling of the sensor data x. Before going into the precise definition of this measurement error, we first describe the broader problem.
First, let us express (1) as a collection of n scalar differential equations

ẋ_i = f_i(x, u),  x_i ∈ ℝ,  i ∈ {1, 2, ..., n},    (3)

where x = [x_1, x_2, ..., x_n]^T. In this paper we are concerned with a distributed sensing scenario in which each component x_i of the state vector x is sensed at a different location. Although the i-th sensor senses x_i continuously in time, it transmits this data to a central controller only intermittently. In other words, the controller is a sampled-data controller that uses intermittently transmitted (sampled) sensor data. In particular, we are interested in designing an asynchronous decentralized sensing mechanism, based on local event-triggering, that renders the origin of the closed loop system asymptotically stable.

To precisely describe the sampled-data nature of the problem, we now introduce the following notation. Let {t^{x_i}_j} be the increasing sequence of time instants at which x_i is sampled and transmitted to the controller. The resulting piecewise constant sampled signal is denoted by x_{i,s}, that is,

x_{i,s}(t) ≜ x_i(t^{x_i}_j),  t ∈ [t^{x_i}_j, t^{x_i}_{j+1}),  j ∈ {0, 1, 2, ...}.    (4)

The sampled data x_{i,s} may be viewed as inducing an error in the measurement of the continuous-time signal x_i. This measurement error is denoted by

x_{i,e} ≜ x_{i,s} − x_i = x_i(t^{x_i}_j) − x_i(t),  t ∈ [t^{x_i}_j, t^{x_i}_{j+1}).

Finally, we define the sampled-data vector and the measurement error vector as

x_s ≜ [x_{1,s}, x_{2,s}, ..., x_{n,s}]^T,  x_e ≜ [x_{1,e}, x_{2,e}, ..., x_{n,e}]^T.

Note that, in general, the components of the vector x_s are asynchronously sampled components of the plant state x. The components of x_e are defined accordingly.

In time-triggered implementations, the time instants t^{x_i}_j are pre-determined and are commonly multiples of a fixed sampling period. In event-triggered implementations, however, the time instants t^{x_i}_j are determined implicitly by a state/data based triggering condition at run time. Consequently, an event-triggering condition may result in inter-sample times t^{x_i}_{j+1} − t^{x_i}_j that are arbitrarily close to zero, or it may even result in the limit of the sequence {t^{x_i}_j} being a finite number (Zeno behavior). For practical utility, an event-trigger has to ensure that these scenarios do not occur. Thus, the problem under consideration may be stated more precisely as follows. For the n sensors, design event-triggers that depend only on local information and implicitly define the (in general non-identical) sequences {t^{x_i}_j} such that (i) the origin of the closed loop system is rendered asymptotically stable and (ii) the inter-sample (inter-transmission) times t^{x_i}_{j+1} − t^{x_i}_j are lower bounded by a positive constant.
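To fix ideas, the sample-and-hold bookkeeping in (4) can be written in a few lines of code. The following Python sketch is ours (the class and method names are illustrative, not from the paper) and serves only to make the roles of x_{i,s} and x_{i,e} concrete; the concrete rule that decides when to transmit is discussed later in the paper.

```python
# Sketch of the sample-and-hold bookkeeping in (4); the event-trigger
# that decides when to call transmit() is supplied separately.

class SensorChannel:
    """Tracks the last transmitted sample x_{i,s} of one state component."""

    def __init__(self, x_i0, t0=0.0):
        self.x_s = x_i0        # x_{i,s}: value currently held by the controller
        self.t_last = t0       # t^{x_i}_j: instant of the last transmission

    def error(self, x_i):
        """Measurement error x_{i,e} = x_{i,s} - x_i induced by sampling."""
        return self.x_s - x_i

    def transmit(self, t, x_i):
        """Update the held sample; called when the event-trigger fires."""
        self.x_s, self.t_last = x_i, t
```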

Finally, regarding notation: in this paper, ‖·‖ denotes the Euclidean norm of a vector. The state variables and other functions of time are frequently referred to without the time argument when there is no ambiguity. Similarly, some functions of variables other than time are also sometimes referred to without their arguments when there is no ambiguity. In the next section, the main assumptions are introduced and the event-triggering for the decentralized sensing architecture is developed.

III. DECENTRALIZED ASYNCHRONOUS EVENT-TRIGGERING

In this section, the main assumptions are introduced and the event-triggers for the decentralized asynchronous sensing problem are developed.

(A1) Suppose f(0, k(0)) = 0 and that the closed loop system (1)-(2) is Input-to-State Stable (ISS) with respect to the measurement error x_e. That is, there exist a smooth function V : ℝ^n → ℝ as well as class K∞ functions α_1, α_2, α and γ_i, for each i ∈ {1, ..., n}, such that

α_1(‖x‖) ≤ V(x) ≤ α_2(‖x‖),

(∂V/∂x) f(x, k(x + x_e)) ≤ −α(‖x‖),  if γ_i(|x_{i,e}|) ≤ ‖x‖ for all i.

(A2) The functions f, k and γ_i, for each i ∈ {1, ..., n}, are Lipschitz on compact sets.

Note that the standard ISS assumption involves the single condition γ(‖x_e‖) ≤ ‖x‖ instead of the n conditions γ_i(|x_{i,e}|) ≤ ‖x‖, i ∈ {1, ..., n}, in (A1). Given a function γ(·) in the standard ISS assumption, one may define γ_i(·) as

γ_i(|x_{i,e}|) = γ(|x_{i,e}|/θ_i),  i ∈ {1, ..., n},

where θ_i ∈ (0, 1) are such that Σ_{i=1}^n θ_i² = 1. Then, the n conditions in (A1) are equivalent to |x_{i,e}| ≤ θ_i γ⁻¹(‖x‖). Thus,

‖x_e‖ = √(Σ_{i=1}^n |x_{i,e}|²) ≤ √(Σ_{i=1}^n θ_i²) γ⁻¹(‖x‖) = γ⁻¹(‖x‖),

which is the condition in the standard ISS assumption. Conversely, given (A1), one may pick γ(·) = γ_i(·) for any i to obtain the standard ISS assumption, although in practice it may be possible to choose a less conservative γ(·).

In this section, our aim is to constructively show that decentralized asynchronous event-triggering can be used to asymptotically stabilize x = 0 (the trivial solution, or the origin) with a desired region of attraction, while also guaranteeing positive minimum inter-sample times. Further, without loss of generality, the desired region of attraction may be assumed to be a compact sub-level set S(c) of the Lyapunov function V in (A1). Specifically, S(c) is defined as

S(c) = {x ∈ ℝ^n : V(x) ≤ c}.    (5)

Here, a continuous function α : [0, ∞) → [0, ∞) is said to belong to class K∞ if it is strictly increasing, α(0) = 0 and α(r) → ∞ as r → ∞ [25].

A. Centralized Asynchronous Event-Triggering

The proposed design of decentralized asynchronous event-triggering proceeds in stages. In the first stage, centralized event-triggers for asynchronous sampling of the sensors are proposed in the following lemma. One of the key steps in the result is choosing linear bounds on the functions γ_i(·) over appropriately defined sets E_i. Given that x ∈ S(c), we define the sets E_i of errors for which the bounds in (A1) can still be satisfied, that is,

E_i(c) = {x_{i,e} ∈ ℝ : |x_{i,e}| ≤ γ_i⁻¹(‖x‖), x ∈ S(c)} = {x_{i,e} ∈ ℝ : |x_{i,e}| ≤ max_{x ∈ S(c)} γ_i⁻¹(‖x‖)}.    (6)

In particular, since the x_i are scalars, E_i(c) = [−r_i(c), r_i(c)], with

r_i(c) = max_{x ∈ S(c)} γ_i⁻¹(‖x‖).

Then, by (A2), for each c and each i ∈ {1, ..., n}, there exist positive constants M_i(c) such that

γ_i(|x_{i,e}|) ≤ (1/M_i(c)) |x_{i,e}|,  ∀ x_{i,e} ∈ E_i(c).    (7)

Lemma 1. Consider the closed loop system (1)-(2) and assume (A1) and (A2) hold. Suppose that the event-triggers that determine the sampling instants {t^{x_i}_j}, for each i ∈ {1, ..., n}, ensure |x_{i,e}| ≤ M_i(c)‖x‖ for all time t ≥ 0, where M_i(c) are given by (7) and c is an arbitrary positive constant. Then, the origin is asymptotically stable with S(c), given by (5), included in the region of attraction.

Proof: Let x(0) ∈ S(c) be an arbitrary point; we have to show that the trajectory x(·) asymptotically converges to zero. By assumption, the sampling instants are such that, for each i ∈ {1, ..., n}, |x_{i,e}| ≤ M_i(c)‖x‖ for all time t ≥ 0. Then, for all time t ≥ 0, (7) implies

γ_i(|x_{i,e}|) ≤ (1/M_i(c)) |x_{i,e}| ≤ ‖x‖,  x ∈ S(c).

Consider the ISS Lyapunov function V(·) in (A1), which is a function of the state x. Letting

M(c) = {(x, x_e) ∈ S(c) × ℝ^n : |x_{i,e}| ≤ M_i(c)‖x‖, ∀i},

the time derivative of V along the trajectories of the closed loop system, with the restricted domain V̇(x, x_e) : M(c) → ℝ, can be upper bounded as

V̇(x, x_e) ≤ −α(‖x‖),  [x^T, x_e^T]^T ∈ M(c).

Thus, in particular, the closed loop system is dissipative on the sub-level set S(c) of the Lyapunov function V. Therefore, the origin is asymptotically stable with S(c) included in the region of attraction.

The lemma does not specify a particular choice of event-triggers but rather a family of them - all those that ensure the conditions |x_{i,e}| ≤ M_i(c)‖x‖ are satisfied. Thus, any decentralized event-triggers in this family automatically guarantee asymptotic stability with the desired region of attraction. To enforce the conditions |x_{i,e}| ≤ M_i(c)‖x‖ strictly, the event-trigger at each sensor would need to know ‖x‖, which is possible only with centralized information.
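As an illustration of (6)-(7), the following sketch (ours, not from the paper) numerically computes r_i(c) and M_i(c) for a given class K∞ gain γ_i, assuming the maximum of ‖x‖ over S(c) (for instance α_1⁻¹(c)) is supplied by the user; both the function names and the example gain are hypothetical.

```python
import numpy as np

def gamma_inverse(gamma, y, s_hi=1e6):
    """Invert a strictly increasing gamma by bisection: find s with gamma(s) = y."""
    lo, hi = 0.0, s_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def M_i(gamma_i, x_max, n_grid=10_000):
    """Constants of (6)-(7): r_i(c) = gamma_i^{-1}(x_max) and the largest M_i
    with gamma_i(s) <= s / M_i for all s in (0, r_i(c)]."""
    r_i = gamma_inverse(gamma_i, x_max)
    s = np.linspace(r_i / n_grid, r_i, n_grid)
    worst_slope = np.max(gamma_i(s) / s)     # sup of gamma_i(s)/s over E_i(c)
    return r_i, 1.0 / worst_slope

# Example with a hypothetical gain gamma_i(s) = 2 s + s^3 and max ||x|| = 1.5:
r1, M1 = M_i(lambda s: 2.0 * s + s**3, x_max=1.5)
```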

One obvious way to decentralize these conditions is to enforce |x_{i,e}| ≤ M_i(c)|x_i|. However, such event-triggers cannot guarantee any positive lower bound for the inter-transmission times, which is not acceptable. So, we take an alternative approach, in which the next step is to derive lower bounds for the inter-transmission times when the conditions in Lemma 1 are enforced strictly.

Before analyzing the lower bounds for the inter-transmission times that emerge from the event-triggers in Lemma 1, we introduce some notation. First, recall that under Assumption (A1), f(0, k(0)) = 0. Noting that for each c ≥ 0 the set S(c) contains the origin, Assumption (A2) implies that there exist Lipschitz constants L(c) and D(c) such that

‖f(x, k(x + x_e))‖ ≤ L(c)‖x‖ + D(c)‖x_e‖    (8)

for all x ∈ S(c) and for all x_e satisfying |x_{i,e}| ≤ M_i(c)‖x‖ for each i. Similarly, there exist constants L_i(c) and D_i(c), for i ∈ {1, 2, ..., n}, such that

|f_i(x, k(x + x_e))| ≤ L_i(c)‖x‖ + D_i(c)‖x_e‖    (9)

for all x ∈ S(c) and for all x_e satisfying |x_{i,e}| ≤ M_i(c)‖x‖ for each i. Next, we introduce a function τ defined as

τ(ω, a_0, a_1, a_2) = {t ≥ 0 : φ(t, 0) = ω},    (10)

where a_0, a_1, a_2 are non-negative constants and φ(t, φ_0) is the solution of

φ̇ = a_0 + a_1 φ + a_2 φ²,  φ(0, φ_0) = φ_0.

Lastly, the functions Γ_i, for i ∈ {1, ..., n}, are defined as

Γ_i(w_i, W_i, c) ≜ τ(w_i, a_{0,i}, a_{1,i}, a_{2,i}),    (11)

where the function τ is given by (10) and

a_{0,i} = L_i(c) + D_i(c)W_i,  a_{1,i} = L(c) + D_i(c) + D(c)W_i,  a_{2,i} = D(c).

Lemma 2. Consider the closed loop system (1)-(2) and assume (A2) holds. Let c > 0 be any arbitrary known constant. For i ∈ {1, ..., n}, let 0 < w_i ≤ M_i(c) be arbitrary constants and let W_i = √(Σ_{j=1}^n w_j² − w_i²). Suppose the sampling instants are such that |x_{i,e}| ≤ w_i‖x‖ for each i ∈ {1, ..., n} for all time t ≥ t_0. Finally, assume that x(t_0) belongs to the compact set S(c). Then, for all t ≥ t_0, the time required for |x_{i,e}| to evolve from 0 to w_i‖x‖ is lower bounded by

T_i = Γ_i(w_i, W_i, c) > 0,    (12)

where the functions Γ_i are given by (11).

Proof: By assumption, x(t_0) belongs to the known compact set S(c) and |x_{i,e}| ≤ w_i‖x‖ ≤ M_i(c)‖x‖ for each i for all time t ≥ t_0. Then, Lemma 1 guarantees that x(t) ∈ S(c) for all time t ≥ t_0. Thus, (8) and (9) hold for all t ≥ t_0. Now, letting ν_i ≜ |x_{i,e}|/‖x‖, by direct calculation we see that, for i ∈ {1, ..., n},

dν_i/dt = (x_{i,e} ẋ_{i,e})/(|x_{i,e}| ‖x‖) − (x^T ẋ) |x_{i,e}| / ‖x‖³
        ≤ |ẋ_{i,e}|/‖x‖ + ‖ẋ‖ |x_{i,e}|/‖x‖²
        ≤ (L_i(c)‖x‖ + D_i(c)‖x_e‖)/‖x‖ + (L(c)‖x‖ + D(c)‖x_e‖) |x_{i,e}|/‖x‖²,

where, for x_{i,e} = 0, the relation holds for all directional derivatives. Next, notice that

‖x_e‖/‖x‖ = √(Σ_{j=1}^n ν_j²) ≤ √(Σ_{j≠i} w_j² + ν_i²) ≤ W_i + ν_i,

where the conditions ν_j ≤ w_j for j ≠ i, the definition of W_i and the triangle inequality have been used. Thus,

dν_i/dt ≤ L_i(c) + L(c)ν_i + (D_i(c) + D(c)ν_i)(W_i + ν_i) = a_{0,i} + a_{1,i}ν_i + a_{2,i}ν_i².

The claim of the Lemma now directly follows.

Now, by combining Lemmas 1 and 2, we obtain the following result for centralized asynchronous event-triggering.

Theorem 1. Consider the closed loop system (1)-(2) and assume (A1)-(A2) hold. Suppose the i-th sensor transmits its measurement to the controller whenever |x_{i,e}| ≥ w_i‖x‖, where 0 < w_i ≤ M_i(c), with M_i(c) given by (7) and c an arbitrary positive constant. Then, the origin is asymptotically stable with S(c) included in the region of attraction, and the inter-transmission times of each sensor have a positive lower bound given by T_i in (12).

Proof: The triggering conditions ensure that |x_{i,e}| ≤ w_i‖x‖ ≤ M_i(c)‖x‖ for all t ≥ 0. Thus, Lemma 1 guarantees x(t) ∈ S(c) for all t ≥ 0 and that the origin is asymptotically stable with S(c) included in the region of attraction. Since S(c) is positively invariant, Lemma 2 guarantees a positive lower bound for the inter-transmission times.

B. Decentralized Asynchronous Event-Triggering

Turning now to the main subject of this paper: in the decentralized sensing case, unlike in the centralized sensing case, no single sensor knows the exact value of ‖x‖ from its locally sensed data. We may let the event-trigger at the i-th sensor enforce the more conservative condition |x_{i,e}| ≤ w_i|x_i| and still satisfy the assumptions of Lemma 1, though such a choice cannot guarantee a positive minimum inter-sample time. We overcome this problem through the following observation. The event-triggers in Theorem 1,

t^{x_i}_{j+1} = min{t ≥ t^{x_i}_j : |x_{i,e}(t)| ≥ w_i‖x(t)‖},  i ∈ {1, ..., n},    (13)

can be equivalently expressed as

t^{x_i}_{j+1} = min{t ≥ t^{x_i}_j + T_i : |x_{i,e}(t)| ≥ w_i‖x(t)‖},    (14)

where T_i are the lower bounds on the inter-sample times provided by Lemma 2 in (12). In the latter interpretation, a minimum dwell time is explicitly enforced, and only after it has elapsed is the state based condition checked.
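The dwell times T_i in (12) can be computed numerically by integrating the scalar comparison equation in (10) until it reaches w_i. The sketch below is ours (the Euler step, the time-out and the numerical values are arbitrary assumptions); a closed-form solution of the Riccati-type ODE or a higher-order integrator could equally be used.

```python
import math

def tau(omega, a0, a1, a2, dt=1e-5, t_max=10.0):
    """Time for phi, with phi(0) = 0 and phi' = a0 + a1*phi + a2*phi**2,
    to reach omega (eq. (10)); returns t_max if omega is not reached."""
    phi, t = 0.0, 0.0
    while phi < omega and t < t_max:
        phi += dt * (a0 + a1 * phi + a2 * phi * phi)
        t += dt
    return t

def Gamma_i(w, W_i, L, D, L_i, D_i):
    """T_i = Gamma_i(w_i, W_i, c) of (11)-(12); the Lipschitz constants
    L(c), D(c), L_i(c), D_i(c) of (8)-(9) are assumed to be given."""
    a0 = L_i + D_i * W_i
    a1 = L + D_i + D * W_i
    a2 = D
    return tau(w, a0, a1, a2)

# Hypothetical numbers, for illustration only:
w = [0.05, 0.08, 0.06]
W = [math.sqrt(sum(wj**2 for wj in w) - wi**2) for wi in w]
T = [Gamma_i(w[i], W[i], L=2.0, D=1.5, L_i=1.0, D_i=0.8) for i in range(3)]
```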

Now, in order to let the event-triggers depend only on locally sensed data, one can let the sampling times, for i ∈ {1, ..., n}, be determined as

t^{x_i}_{j+1} = min{t ≥ t^{x_i}_j + T_i : |x_{i,e}(t)| ≥ w_i|x_i(t)|},    (15)

where T_i are given by (12). This allows us to implement decentralized asynchronous event-triggering.

Remark 1. The event-triggers proposed in [3], [4] take the form of (15) with T_i = 0, although the parameters w_i are computed in a different manner. Nevertheless, event-triggers of the form (15) with T_i = 0 do not, in general, guarantee a positive lower bound for the inter-transmission times in the global or even the semi-global sense, irrespective of whether the system is linear or nonlinear. This is due to the fact that, even under a weak coupling assumption, the set {x ∈ ℝ^n : x_i = 0} is in general not an equilibrium set. In [4] a modified version of the event-triggers is also presented, which may be interpreted as including timers. However, in contrast to our proposed scheme, the timers in [4] enforce a maximum inter-transmission time.

Remark 2. Although at first sight our approach of explicitly enforcing a lower bound on inter-transmission times may seem similar to that of [9], there are important differences. The most important difference stems from the fact that in [9] the control input to the plant is always based on synchronously sampled data from the decentralized sensors. This allows the controller to utilize the lower threshold for inter-event times of the centralized control system, as in [], to reject very closely timed requests by the sensors. In contrast, our proposed methodology allows the controller to rely only on asynchronously sampled data. Further, the event-trigger at each of the sensors utilizes a lower threshold for the inter-transmission times. These differences necessitate the computation of the lower thresholds for the inter-transmission times as in Lemma 2 instead of as in [].

The following theorem is the core result of this paper. It shows that, by appropriately choosing the constants T_i and w_i, the event-triggers (15) guarantee asymptotic stability of the origin while also explicitly enforcing a positive lower bound for the inter-sample times.

Theorem 2. Consider the closed loop system (1)-(2) and assume (A1) and (A2) hold. Let c > 0 be an arbitrary known constant. For each i ∈ {1, 2, ..., n}, let w_i be a positive constant such that w_i ≤ M_i(c), where M_i(c) is given by (7), and let T_i be given by (12). Suppose the sensors asynchronously transmit the measured data at time instants determined by (15) and that t^{x_i}_0 ≤ 0 for each i ∈ {1, 2, ..., n}. Then, the origin is asymptotically stable with S(c) included in the region of attraction, and the inter-transmission times of each sensor are explicitly enforced to have a positive lower threshold.

Proof: The statement about the positive lower threshold for inter-transmission times is obvious from (15), and only asymptotic stability remains to be proven. This can be done by showing that the event-triggers (15) are included in the family of event-triggers considered in Lemma 1. From the equivalence of (13) and (14), it is clearly true that |x_{i,e}| ≤ w_i‖x‖ for t ∈ [t^{x_i}_j, t^{x_i}_j + T_i], for each i ∈ {1, 2, ..., n} and each j. Next, for t ∈ [t^{x_i}_j + T_i, t^{x_i}_{j+1}], (15) enforces |x_{i,e}| ≤ w_i|x_i|, which implies |x_{i,e}| ≤ w_i‖x‖ since |x_i| ≤ ‖x‖. Therefore, the event-triggers in (15) are included in the family of event-triggers considered in Lemma 1. Hence, x = 0 (the origin) is asymptotically stable with S(c) included in the region of attraction.

Remark 3. Although the assumption that t^{x_i}_0 ≤ 0 for each i in Theorem 2 has not been used explicitly in the proof, it serves two key purposes: it avoids having the sensors send their first transmissions of data synchronously, and it ensures that the controller has some latest sensor data with which to compute the controller output at t = 0.

Remark 4. In Theorem 2, the parameters w_i cannot be chosen in a decentralized manner unless M_i(c), and hence c, is fixed a priori. In other words, the desired region of attraction S(c) has to be chosen at the time of system installation. This can potentially lead to the parameters w_i being chosen conservatively in order to guarantee a larger region of attraction. One possible solution is to let the central controller communicate the parameters w_i to the sensors at t = 0. In any case, for t > 0, the sensors only have to transmit their data to the controller and do not have to receive any communication. Apart from the fact that the set S(c) is chosen a priori, conservativeness in transmission frequency may also be introduced because the constants in (7), which bound the nonlinear functions γ_i(·), are not updated after their initialization, even though the system state is progressively restricted to smaller and smaller subsets of S(c). Although we started from the idea that energy may be saved by ensuring that sensors do not have to listen, the cost of increased transmissions may not be in its favor. Thus, we now describe a design where the central controller intermittently communicates updated w_i and T_i to the event-triggers.
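Before doing so, a minimal sketch (ours, not the authors' code) of a sensor node implementing (15) is given below; it assumes that w_i and T_i have already been designed, for example as in (7) and (12).

```python
class DecentralizedTrigger:
    """Event-trigger (15): transmit only after the dwell time T_i has
    elapsed and the local condition |x_{i,e}| >= w_i |x_i| holds."""

    def __init__(self, w_i, T_i, x_i0, t0=0.0):
        self.w_i, self.T_i = w_i, T_i
        self.x_s = x_i0       # last transmitted sample x_{i,s}
        self.t_last = t0      # last transmission instant t^{x_i}_j

    def step(self, t, x_i):
        """Call at each (fast) local sampling instant; returns the value to
        transmit, or None if no transmission is due."""
        if t - self.t_last < self.T_i:
            return None                        # dwell time not yet elapsed
        if abs(self.x_s - x_i) >= self.w_i * abs(x_i):
            self.x_s, self.t_last = x_i, t     # transmit and reset
            return x_i
        return None
```

Each sensor runs one such object on its own clock; the controller simply holds the most recently received value of each component, which is exactly the sampled vector x_s in (4).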

C. Decentralized Asynchronous Event-Triggering with Intermittent Communication from the Central Controller

The basic idea of the design with bi-directional communication (sensors to controller and controller to sensors) is simple. Whenever the central controller determines that the state is confined to a sufficiently small positively invariant subset of S(c), the parameters w_i, T_i and c are updated, and the sensor nodes use them as in the previous subsection. Of course, the controller has access only to the asynchronously sampled data x_s. As a result, the controller can only determine an upper bound on the Lyapunov function of the system state. Recall from the proof of Theorem 2 that the event-triggers (15) ensure that x(t) ∈ R(x_s(t)) for all t, where

R(x_s) ≜ {x ∈ ℝ^n : |x_{i,s} − x_i| ≤ w_i‖x‖, ∀i ∈ {1, ..., n}}.    (16)

Since the controller knows the parameters used by each event-trigger, it may compute an upper bound on the Lyapunov function of the system state, given x_s, as

V(x_s) ≜ max{V(x) : x ∈ R(x_s)}.    (17)

Note that the maximum exists and is finite for finite x_s because V(x) is a continuous function of x and R(x_s) is a closed and bounded set. Now, let {t^V_j} be the sequence of time instants at which V is sampled and the sensor event-trigger parameters w_i and T_i are updated. The idea here is to determine the sequence {t^V_j} by an event-trigger running at the central controller, namely,

t^V_{j+1} = min{t ≥ t^V_j + T : V(x_s(t)) ≤ ρ V(x_s(t^V_j))},    (18)

where T > 0 (a positive dwell time) and 0 < ρ < 1 are arbitrary constants. The initial conditions t^V_0 = 0 and V(x_s(t^V_0)) = c may be chosen, where c determines the region of attraction S(c). Thus, with a slight abuse of notation, the sampled version of V is denoted by

V_s(t) ≜ V(x_s(t^V_j)),  t ∈ [t^V_j, t^V_{j+1}),  V_s(0) = c,    (19)

where c > 0 is an arbitrary constant, t^V_j are given by (18) and V is given by (17). Note that since w_i and T_i are updated at the time instants t^V_j, they are time-varying, piecewise constant parameters, that is,

w_i(t) = w_i(t^V_j),  T_i(t) = T_i(t^V_j),  t ∈ [t^V_j, t^V_{j+1}).    (20)

Note that the specification of w_i(t^V_j) and T_i(t^V_j) still remains. Let us also define the time-varying, piecewise constant auxiliary signals

W_i(t) = √(Σ_{j=1}^n w_j²(t) − w_i²(t)).    (21)

The sensor transmissions are assumed to be triggered by the time-varying analogue of (15), that is,

t^{x_i}_{j+1} = min{t ≥ t^{x_i}_j + T_i(t) : |x_{i,e}(t)| ≥ w_i(t)|x_i(t)|}.    (22)

As one might expect from the results of the previous subsection, between any two updates of V_s, that is for t ∈ [t^V_j, t^V_{j+1}) for any j, the Lyapunov function is guaranteed to decrease monotonically and the inter-transmission times of the i-th sensor are lower bounded by T_i(t^V_j) = Γ_i(w_i(t^V_j), W_i(t^V_j), V_s(t^V_j)), where the function Γ_i is given by (11).

Remark 5. Note from the definition of Γ_i that Γ_i increases as w_i increases, and as W_i or V decrease. Thus, although T_i(t^V_j) > 0 for each i and each j, in the absence of further information one cannot rule out the possibility that lim_{j→∞} T_i(t^V_j) = 0. Hence, let us make the following assumption.

(A3) lim_{s→0} (γ_i(s)/s) ≥ (1/g_i) > 0 for each i ∈ {1, ..., n}.

Remark 6. As S(c_1) ⊆ S(c_2) if c_1 ≤ c_2, the M_i(·) in (7) are assumed, without loss of generality, to be non-increasing functions of c. Similarly, L_i(c), D_i(c), L(c) and D(c) (defined in (8), (9)) are assumed, without loss of generality, to be non-decreasing functions of c.

Theorem 3. Consider the closed loop system (1)-(2), (18) and (22). Suppose (A1) and (A2) hold. For each i ∈ {1, 2, ..., n}, let w_i(t) and T_i(t) be positive piecewise constant signals satisfying (20), with T_i(t) = Γ_i(w_i(t), W_i(t), V_s(t)). Further, for each i, let w_i(t) be non-decreasing in time and satisfy 0 < w_i(t) ≤ M_i(V_s(t)) for all t. Suppose the sensors asynchronously transmit the measured data at time instants determined by (22) and that t^{x_i}_0 ≤ 0 for each i ∈ {1, 2, ..., n}. Then, the origin is asymptotically stable with S(c) included in the region of attraction.

Proof: First, notice that V_s(t) is non-increasing in time. This implies that for each i, M_i(V_s(t)) is non-decreasing in time, and hence a non-decreasing w_i(t) satisfying w_i(t) ≤ M_i(V_s(t)) exists. Clearly, the Lyapunov function evaluated at the state of the system is at all times less than the piecewise constant and non-increasing signal V_s. Thus, x ∈ S(V_s) at all times, where S(·) is given by (5). Given that T_i(t) = Γ_i(w_i(t), W_i(t), V_s(t)) and the sensor event-triggers are given by (22), one can conclude from arguments similar to those in Theorem 2 that the origin of the closed loop system is asymptotically stable with S(V_s(0)) = S(c) included in the region of attraction.

From (18), it is clear that the inter-update times {t^V_{j+1} − t^V_j} are lower bounded by T > 0. We would also like the inter-transmission times of each sensor to have a uniform positive lower bound.

Corollary 1 (Corollary to Theorem 3). In addition, assume (A3) holds. Then, the inter-transmission times of each sensor have a uniform positive lower bound.

Proof: Since by assumption w_i(t) > 0 for all t, Lemma 2 guarantees that T_i(t) = Γ_i(w_i(t), W_i(t), V_s(t)) > 0. Now we show that T_i(t) has a uniform positive lower bound. Since the signal V_s is non-increasing, Remark 6 guarantees that the M_i(V_s(t)) are non-decreasing in time. Again, by assumption in Theorem 3, w_i(t) > 0 is non-decreasing in time and satisfies w_i(t) ≤ M_i(V_s(t)) (note that such a w_i(t) always exists). In other words, w_i(t) ≥ w_i(0) for all t. Next, Assumption (A3) means that lim_{c→0} M_i(c) ≤ g_i. Then, from the definition of M_i(c) in (7), M_i(c) ≤ g_i for all c. Thus, w_i(t) ≤ g_i for all t, and there exist constants h_i > 0 such that W_i(t) ≤ h_i for all t. Finally, Remark 5 implies that for all t,

T_i(t) = Γ_i(w_i(t), W_i(t), V_s(t)) ≥ Γ_i(w_i(0), h_i, V_s(0)) > 0.

Remark 7. Assumption (A3) is not restrictive and, under Assumption (A2), it can easily be worked around if it is not satisfied. Assumption (A2) implies that for each s̄ > 0 there exists a g_i(s̄) > 0 such that γ_i(s) ≤ (1/g_i(s̄))s for all s ∈ [0, s̄]. If a function γ_i does not satisfy Assumption (A3), then one can simply redefine γ_i(s) = (1/g_i(s̄))s for all s ∈ [0, s̄], for some arbitrary s̄ > 0.

Remark 8. Computing the upper bound V in (17) may be computationally intensive, depending on the Lyapunov function and the dimension of the system. However, since the Lyapunov function is guaranteed to decrease even with no updates to w_i and T_i, there is no restriction on the time needed to compute the upper bound V and to update the parameters of the event-triggers. Offline computation, mapping different regions of the state space to different values of V, is also possible. On the other hand, it is true that the updates to all the event-triggers have to occur synchronously.
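The controller-side logic of (17)-(20) can be sketched as follows. This is our illustration only: the over-approximation V(x_s) of (17) and the recomputation of (w_i, T_i) are passed in as user-supplied callables, since their implementations depend on the particular Lyapunov function and on (7), (12).

```python
class ControllerUpdater:
    """Central-controller event-trigger (18): when the upper bound V(x_s)
    has dropped by the factor rho and the dwell time T has elapsed,
    refresh V_s as in (19) and recompute (w_i, T_i) for broadcast."""

    def __init__(self, V_bound, design_params, c, T, rho):
        self.V_bound = V_bound          # callable x_s -> upper bound (17)
        self.design = design_params     # callable V_s -> (w, T_i), cf. (20)
        self.V_s, self.T, self.rho = c, T, rho
        self.t_last = 0.0
        self.w, self.T_i = design_params(c)

    def step(self, t, x_s):
        """Returns updated (w, T_i) to broadcast, or None if no update is due."""
        if t - self.t_last < self.T:
            return None
        V_now = self.V_bound(x_s)
        if V_now <= self.rho * self.V_s:            # condition in (18)
            self.V_s, self.t_last = V_now, t        # sampled value V_s, as in (19)
            self.w, self.T_i = self.design(self.V_s)
            return self.w, self.T_i
        return None
```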

7 and T i, there is no restriction on the time needed to compute the upper bound on V and to update the parameters of the event-triggers. Offline computation and mapping different regions of state space with different values of V is also possible. On the other hand, it is true that the updates to all the event-triggers have to occur synchronously. Remark 9. Note that the aim of the event-triggers (22) is to approximately enforce the conditions x i,e w i (t) x. Thus whenever w i and T i are updated, the new parameters in the event-triggers are consistent with and are likely to be an improvement over the previous parameters. Although w i can be chosen to be non-decreasing in time, the same cannot be said about T i, at least not without some restriction on w j (t) for each j. From (7), (8) and (9), it is clear that M i (c), D(c) and D i (c) each capture very similar properties of the system - tolerance or sensitivity to the measurement errors in the whole set S(c). Since S(c ) S(c 2 ) if c c 2, the tolerance to measurement errors, M i (c), increases as c gets smaller. Alternatively, sensitivity to the measurement errors, D(c) and D i (c), decreases as c gets smaller. As a result, nondecreasing w i (t) intuitively suggests non-decreasing T i (t) and non-decreasing average inter-transmission times. However, this statement is far from apparent in the logical sense, mainly because of the asynchronous transmissions. Nonetheless we can still say something concrete in certain cases. Corollary 2 (Corollary to Theorem 3). For as long as D i (V s (t))w i (t) and D(V s (t))w i (t) are each non-increasing, T i (t) is non-decreasing. Proof: As V s (t) is a non-increasing signal, L(V s (t)), D(V s (t)), L i (V s (t)) and D i (V s (t)) are in turn non-increasing in time. In the definition of Γ i, (), it is seen that W i (t) appears in the terms D i (V s (t))w i (t) and D(V s (t))w i (t). In addition w i (t) M i (V s (t)) has been assumed to be nondecreasing. Thus, a simple application of the Comparison Lemma [25] implies that T i (t) as defined in Theorem 3 is non-decreasing. Given a nonlinear system, a systematic characterization of the region of state space where the conditions of Corollary 2 hold is an interesting problem and will be pursued in future. On the other hand, one may be able to choose w i (t) M i (V s (t)) for each i such that T i (t) are non-decreasing in time. The following result demonstrates the existence of w i (t) such that T i (t) is non-decreasing for each i, while also guaranteeing asymptotic stability. Corollary 3 (Corollary to Theorem 3). For each i, there exists a sequence of updates of w i (t V j ) such that T i(t) = Γ i (w i (t), W i (t), V s (t)) is non-decreasing in time. Proof: Since for each j, V(t V j+ ) < V(tV j ), a consequence of Remark 5 is that Γ i (w i (t V j ), W i (t V j ), V(t V j+)) Γ i (w i (t V j ), W i (t V j ), V(t V j )) Thus for each i, letting w i (t) w i () M i (c) and consequently, W i (t) W i () the corollary is seen to be true. In general for nonlinear systems, Γ i (w i, W i, V(t V j+ )) would be strictly increasing with decreasing V(t V j+ ). Further, although the corollary has been proved using the trivial choice of w i (t) w i () and W i (t) W i (), in practice one could choose {w i (t V j+ )}n as w i (t V j+) [w i (t V j ), M i (t V j )], s.t. T i (t V j+) T i (t V j ), i Further, for each i, T i (t V j+ ) could be chosen to be as large as possible. Clearly, this is a multi-objective resource allocation problem and has to be studied rigorously. 
Further, it may also be possible to design a single-objective function based on the aim of overall reduction of inter-transmission times. Note that we have not really addressed mathematically the question of whether there is an improvement in the average inter-transmission times. Non-decreasing T i (t) do not necessarily on their own mean longer average inter-transmission times. In fact, comparing across different choices of the parameters of the control system is equivalent to comparing different closed loop systems. Although the asynchronous nature of the transmissions is a major impediment to the quantification of any improvement in the inter-transmission times, at a more fundamental level it is due to the lack of tools to compare the average inter-transmission times across different aperiodic sampled-data control systems. The development of such tools is a problem in its own right and not within the scope of this paper. IV. LINEAR TIME INVARIANT SYSTEMS Now, let us consider the special case of Linear Time Invariant (LTI) systems with quadratic Lyapunov functions. Thus, the system dynamics may be written as ẋ = Ax + Bu, x R n, u R m (23) u = K(x + x e ) (24) where A, B and K are matrices of appropriate dimensions. As in the general case, let us assume that for each i {, 2,..., n}, x i R is sensed by the i th sensor. Comparing with (23)-(24) we see that x i evolves as ẋ i = r i (A)x + r i (BK)(x + x e ) (25) where the notation r i (H) denotes the i th row of the matrix H. Also note that x e and x i,e are defined just as in Section II. Now, suppose the matrix (A + BK) is Hurwitz, which is equivalent to the following statement. (A3) Suppose that for any given symmetric positive definite matrix Q, there exists a symmetric positive definite matrix P such that P (A + BK) + (A + BK) T P = Q. Then, the following result is obtained as a special case of Theorem 2 and prescribes the constants T i and w i in the event triggers, (5), that guarantee global asymptotic stability of the origin while also explicitly enforcing a positive minimum inter-sample time. Theorem 4. Consider the closed loop system (23)-(24) and assume (A3) holds. Let Q be any symmetric positive definite
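For a concrete triple (A, B, K), the matrix P in (A4) can be obtained with a standard Lyapunov solver. A minimal sketch (ours) is shown below.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A, B, K, Q):
    """Check that A + BK is Hurwitz and return P solving
    P(A+BK) + (A+BK)^T P = -Q, as required by (A4)."""
    Acl = A + B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        raise ValueError("A + BK is not Hurwitz")
    # solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
    # so pass a = Acl^T and q = -Q to obtain the equation in (A4).
    return solve_continuous_lyapunov(Acl.T, -Q)
```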

Then, the following result is obtained as a special case of Theorem 2; it prescribes constants T_i and w_i in the event-triggers (15) that guarantee global asymptotic stability of the origin while also explicitly enforcing a positive minimum inter-sample time.

Theorem 4. Consider the closed loop system (23)-(24) and assume (A4) holds. Let Q be any symmetric positive definite matrix and let Q_m be the smallest eigenvalue of Q. For each i ∈ {1, 2, ..., n}, let θ_i ∈ (0, 1) be such that

Σ_{i=1}^n θ_i = 1,    (26)

and let

w_i = σ θ_i Q_m / ‖c_i(2PBK)‖,    (27)

where 0 < σ < 1 is a constant and c_i(2PBK) is the i-th column of the matrix 2PBK. Let T_i be defined as

T_i = τ(w_i, a_{0,i}, a_{1,i}, a_{2,i}),    (28)

where the function τ is given by (10) and

a_{0,i} = ‖r_i(A + BK)‖ + ‖r_i(BK)‖ W_i,
a_{1,i} = ‖A + BK‖ + ‖r_i(BK)‖ + ‖BK‖ W_i,
a_{2,i} = ‖BK‖,

with W_i = √(Σ_{j=1}^n w_j² − w_i²). Suppose the sensors asynchronously transmit the measured data at time instants determined by (15). Then, the origin is globally asymptotically stable and the inter-transmission times are explicitly enforced to have a positive lower threshold.

In the context of the results for nonlinear systems in Section III, the reason we are able to achieve global asymptotic stability for LTI systems is that the system dynamics and the functions γ_i(·) are globally Lipschitz, thus giving constants w_i and T_i that hold globally. In fact, for linear systems, something more is ensured: the proposed asynchronous event-triggers guarantee a type of scale invariance. Scaling laws of inter-execution times for centralized synchronous event-triggering have been studied in [26]. In particular, Theorem 4.3 of [26], in the special case of linear systems, guarantees scale invariance of the inter-execution times determined by a centralized event-trigger of the form ‖x_e‖ = W‖x‖. The centralized and decentralized asynchronous event-triggers developed in this paper are under-approximations of this kind of centralized event-triggering. In the following, we show that the scale invariance is preserved by the asynchronous event-triggers. As an aside, we would like to point out that the decentralized event-triggers proposed in [] [2] are not scale invariant. In order to precisely state the notion of scale invariance and the result, the following notation is useful. Let x(t) and z(t) be two solutions of the system (23)-(24) along with the event-triggers (15).

Theorem 5. Consider the closed loop system (23)-(24) and assume (A4) holds. Let Q be any symmetric positive definite matrix and let Q_m be the smallest eigenvalue of Q. For each i ∈ {1, 2, ..., n}, let θ_i, w_i and T_i be defined as in (26), (27) and (28), respectively. Suppose the sensors asynchronously transmit the measured data at time instants determined by (15). Let b be any scalar constant and let [z(0)^T, z_s(0)^T]^T = b[x(0)^T, x_s(0)^T]^T ∈ ℝ^n × ℝ^n be two initial conditions for the system. Further, let t^{z_i}_0 = t^{x_i}_0 for each i ∈ {1, ..., n}. Then, [z(t)^T, z_s(t)^T]^T = b[x(t)^T, x_s(t)^T]^T for all t ≥ 0 and t^{x_i}_j = t^{z_i}_j for each i and j.

Proof: First of all, let us introduce two strictly increasing sequences of times, {t^{z_s}_j} and {t^{x_s}_j}, at which one or more components of z_s and x_s are updated, respectively. Further, without loss of generality, assume t^{z_s}_0 = t^{x_s}_0. The proof proceeds by mathematical induction. Suppose that t^{z_s}_j = t^{x_s}_j = t_j for each j ∈ {0, ..., k} and that [z(t)^T, z_s(t)^T]^T = b[x(t)^T, x_s(t)^T]^T for all t ∈ [0, t_k). Then, letting t̄_{k+1} = min{t^{z_s}_{k+1}, t^{x_s}_{k+1}}, the solution z in the time interval [t_k, t̄_{k+1}) satisfies

z(t) = e^{A(t − t_k)} z(t_k) + ∫_{t_k}^{t} e^{A(t − σ)} BK z_s(t_k) dσ
     = b e^{A(t − t_k)} x(t_k) + b ∫_{t_k}^{t} e^{A(t − σ)} BK x_s(t_k) dσ.

Hence,

z(t) = b x(t),  t ∈ [t_k, t̄_{k+1}).    (29)

Further, in the time interval [t_k, t̄_{k+1}),

z_{i,e}(t) = z_{i,s}(t_k) − z_i(t) = b(x_{i,s}(t_k) − x_i(t)) = b x_{i,e}(t).    (30)

Similarly, for all t ∈ [t_k, t̄_{k+1}),

|z_{i,e}(t)| / ‖z(t)‖ = |x_{i,e}(t)| / ‖x(t)‖.    (31)

Without loss of generality, assume z_{i,s} is updated at t̄_{k+1}.
Then, clearly, at least T_i time has elapsed since z_{i,s} was last updated. Next, by the assumption that t^{z_i}_0 = t^{x_i}_0 and by the induction hypothesis, it is clear that at least T_i time has also elapsed since x_{i,s} was last updated. Further, it also means that |z_{i,s}(t_k) − z_i(t̄_{k+1})| ≥ w_i|z_i(t̄_{k+1})|. Then, (29)-(31) imply that |x_{i,s}(t_k) − x_i(t̄_{k+1})| ≥ w_i|x_i(t̄_{k+1})|, meaning t̄_{k+1} = t^{z_s}_{k+1} = t^{x_s}_{k+1} = t_{k+1}. Arguments analogous to the preceding also hold if multiple components z_{i,s} are updated at t̄_{k+1} instead of one, or if it is x_{i,s} rather than z_{i,s} that is updated. Since the induction statement is true for k = 0, we conclude that the statement of the theorem is true.

Remark 10. From the proof of Theorem 5, and (31) specifically, it is clear that the centralized asynchronous event-triggers (13) also guarantee scale invariance.

Remark 11. Scale invariance, as described in Theorem 5, means that the inter-transmission times over an arbitrary length of time are independent of the scale (or magnitude) of the initial condition of the system. Similarly, for any given scalar 0 < δ < 1, the time and the number of transmissions it takes for ‖x(t)‖ to reduce to δ‖x(0)‖ are independent of x(0). The advantage is that the average network usage remains the same over large portions of the state space.

V. SIMULATION RESULTS

In this section, the proposed decentralized asynchronous event-triggered sensing mechanism is illustrated with two examples. The first is a linear system and the second a nonlinear system.
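Before turning to the numbers, the design of Theorem 4 can be summarized in a short script (ours; illustrative only). It computes w_i from (27) and T_i from (28) for given A, B, K, Q, σ and θ, re-using a simple numerical version of τ in (10); the induced 2-norm is assumed for the matrix norms.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lti_trigger_parameters(A, B, K, Q, sigma, theta, dt=1e-5, t_max=1.0):
    """Compute (w_i, T_i) of Theorem 4, eqs. (27)-(28)."""
    n = A.shape[0]
    Acl = A + B @ K
    P = solve_continuous_lyapunov(Acl.T, -Q)        # P Acl + Acl^T P = -Q
    Q_m = np.min(np.linalg.eigvalsh(Q))
    cols = 2 * P @ B @ K                            # columns c_i(2PBK)
    w = sigma * theta * Q_m / np.linalg.norm(cols, axis=0)     # (27)

    def tau(omega, a0, a1, a2):                     # numerical version of (10)
        phi, t = 0.0, 0.0
        while phi < omega and t < t_max:
            phi += dt * (a0 + a1 * phi + a2 * phi * phi)
            t += dt
        return t

    BK = B @ K
    T = np.empty(n)
    for i in range(n):
        W_i = np.sqrt(np.sum(w**2) - w[i]**2)
        a0 = np.linalg.norm(Acl[i, :]) + np.linalg.norm(BK[i, :]) * W_i
        a1 = np.linalg.norm(Acl, 2) + np.linalg.norm(BK[i, :]) + np.linalg.norm(BK, 2) * W_i
        a2 = np.linalg.norm(BK, 2)
        T[i] = tau(w[i], a0, a1, a2)                # (28)
    return w, T
```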

A. Linear System Example

We first present the mechanism for a linearized model of a batch reactor [27]. The plant and the controller are given by (23)-(24), with A ∈ ℝ^{4×4}, B ∈ ℝ^{4×2} and K ∈ ℝ^{2×4} taken as the linearized batch reactor model and controller gain of [27]; this choice of K places the eigenvalues of A + BK at approximately {−2.98 ± .9i, −3.89, −3.62}. The matrix Q was chosen as the identity matrix. The system matrices and Q have been chosen to be the same as in []. Lastly, the controller parameters were chosen as [θ_1, θ_2, θ_3, θ_4] = [.6, .7, .8, .5] and σ = 0.95.

For the simulations presented here, the initial condition of the plant was selected as x(0) = [4, 7, 4, 3]^T and the initial sampled data used by the controller was x_s(0) = [4.1, 7.2, 4.5, 2]^T. The zeroth sampling instant was chosen as t^{x_i}_0 = −T_i for sensor i, which ensures a transmission at t = 0 if the local triggering condition is satisfied. Finally, the simulation time was chosen as T_sim = 10 s.

Figures 1a and 1b show the evolution of the Lyapunov function and of its derivative along the trajectory of the closed loop system, respectively. Figure 1c shows the time evolution of the inter-transmission times for each sensor. The frequency distribution of the inter-transmission times is another useful metric for understanding the closed loop event-triggered system. Thus, given a time interval of interest [0, T_INT], consider

N_{x_i}(T, T_INT) = {j ∈ ℕ : t^{x_i}_{j+1} ∈ [0, T_INT] and (t^{x_i}_{j+1} − t^{x_i}_j) ∈ [0, T]},

where ℕ = {0, 1, 2, ...} is the set of natural numbers. Hence, the cumulative distribution of the inter-transmission times during [0, T_INT] is given by

D_{x_i}(T, T_INT) = #N_{x_i}(T, T_INT) / #N_{x_i}(T_INT, T_INT),    (32)

where # denotes the cardinality of a set. Figure 1d shows the cumulative frequency distribution of the inter-transmission times, D_{x_i}(T, T_sim), for each sensor. The cumulative frequency distribution of the inter-transmission times is a measure of the performance of the event-triggers. A distribution that rises sharply to 100% indicates that the event-trigger is not much better than a time-trigger; the slower the rise of the cumulative distribution curve, the greater the justification for using an event-trigger instead of a time-trigger. Note that an implication of the scale invariance result, Theorem 5, is that a cumulative frequency distribution such as the one in Figure 1d depends only on the direction of the initial condition and not on its magnitude.
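The metric in (32) is straightforward to compute from the recorded transmission instants; a small helper (ours) is shown below.

```python
import numpy as np

def cumulative_distribution(transmission_times, T, T_int):
    """D_{x_i}(T, T_int) of (32): fraction of inter-transmission times
    within [0, T], among transmissions that occur in [0, T_int]."""
    t = np.asarray(transmission_times)
    t = t[t <= T_int]
    gaps = np.diff(t)                   # inter-transmission times
    if gaps.size == 0:
        return 0.0
    return np.count_nonzero(gaps <= T) / gaps.size

# Example: percentage of sensor-1 gaps no longer than 20 ms over a 10 s run
# D = 100 * cumulative_distribution(t_sensor1, T=0.020, T_int=10.0)
```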
The minimum thresholds for the inter-transmission times T_i for this example can be computed as in (28) and were obtained as [T_1, T_2, T_3, T_4] = [11, 15.4, 12.6, 19.9] ms, which were also the minimum inter-transmission times observed in the simulations presented here. These numbers are a few orders of magnitude higher than the guaranteed minimum inter-transmission times, and an order of magnitude higher than the observed minimum inter-transmission times, in [], []. The average inter-transmission times obtained in the presented simulations were [T̄_1, T̄_2, T̄_3, T̄_4] = [24.9, 27.7, 34.5, 34.2] ms, which are about an order of magnitude lower than those reported in [], []. A possible explanation for this phenomenon is that in [], [] the average inter-transmission times depend quite critically on the evolution of the threshold η. Although the controller gain matrix K and the matrix Q have been chosen to be the same, inspection of the plots in [], [] suggests that the rate of decay of the Lyapunov function V there is roughly about half of that in our simulations. However, we would like to point out that our average inter-transmission times are of the same order as those in [2] by the same authors. In any case, for LTI systems, our proposed method does not require communication from the controller to the sensors to achieve global asymptotic stability.

Lastly, as a measure of the usefulness of the event-triggering mechanism compared to a purely time-triggered mechanism, T_i/T̄_i was computed for each i and was obtained as [T_1/T̄_1, T_2/T̄_2, T_3/T̄_3, T_4/T̄_4] = [0.44, 0.55, 0.36, 0.58]. The lower these numbers are, the better.

Fig. 1: Batch reactor example: evolution of (a) the Lyapunov function and (b) the time derivative of the Lyapunov function along the trajectories of the closed loop system; (c) sensor inter-transmission times; (d) cumulative frequency distribution of the sensor inter-transmission times, D_{x_i}(T, T_sim), where T_sim = 10 s is the simulation time.

B. Nonlinear System Example

The general result for nonlinear systems is illustrated through simulations of the following second order nonlinear system:

ẋ = f(x, x_e) = [f_1(x, x_e); f_2(x, x_e)] = Ax + [0; x_1³] + Bu,    (33)

where A = [0 1; 0 0], B = [0; 1], x = [x_1, x_2]^T is a vector in ℝ², and the sampled-data controller (in terms of the measurement error) is given by

u = k(x + x_e) = K(x + x_e) − (x_1 + x_{1,e})³,    (34)

where K = [k_1, k_2] is a 1 × 2 row vector such that Ā ≜ A + BK is Hurwitz. The closed loop system with event-triggered control can then be written as

ẋ = Āx + BKx_e + [0; x_1³ − (x_1 + x_{1,e})³] = Āx + [0; h_1 + h_2],    (35)

where

h_1 = −(x_{1,e}³ + 3x_1 x_{1,e}² + (3x_1² − k_1)x_{1,e}),    (36)

h_2 = k_2 x_{2,e}.    (37)

Now, consider the quadratic Lyapunov function V = x^T P x, where P is a symmetric positive definite matrix that satisfies the Lyapunov equation PĀ + Ā^T P = −Q, with Q a symmetric positive definite matrix. Let p_m and p_M be the smallest and largest eigenvalues of the matrix P. Since P is symmetric positive definite, p_m and p_M are positive real numbers and

α_1(‖x‖) ≜ p_m‖x‖² ≤ V(x) ≤ p_M‖x‖² ≜ α_2(‖x‖),  ∀x ∈ ℝ².

The time derivative of V along the trajectories of the closed loop system (35) can be shown to satisfy

V̇ = −x^T Q x + 2x^T P B(h_1 + h_2) ≤ −(1 − σ)Q_m‖x‖² + ‖x‖(‖2PB‖(|h_1| + |h_2|) − σQ_m‖x‖),

where Q_m is the smallest eigenvalue of the symmetric positive definite matrix Q and σ is a parameter satisfying 0 < σ < 1. Let the desired region of attraction be S(c), for some non-negative c (see (5) for the definition of S(c)), and let μ be the maximum value of ‖x‖ on the sub-level set S(c). Then, we let

h̄_c ≜ |x_{1,e}|³ + 3μ̄|x_{1,e}|² + max_{|x_1| ≤ μ̄}{|3x_1² − k_1|}|x_{1,e}|,

γ_1(|x_{1,e}|) ≜ ‖2PB‖ h̄_c / (σθ_1 Q_m),  γ_2(|x_{2,e}|) ≜ ‖2PB‖ |k_2| |x_{2,e}| / (σθ_2 Q_m),

where μ̄ ≥ μ and θ_1, θ_2 are positive constants such that θ_1 + θ_2 = 1. It is clear that Assumption (A1) is satisfied and we have

V̇ ≤ −(1 − σ)Q_m‖x‖²,  if γ_i(|x_{i,e}|) ≤ ‖x‖, i ∈ {1, 2}.

Now, μ = α_1⁻¹(c) = √(c/p_m) is the maximum value of ‖x‖ on the set S(c). Hence, M_1(c) in (7) has to be defined on the set over which |x_{1,e}| ≤ R_1 ≜ γ_1⁻¹(μ). Thus, we have

1/M_1(c) = ‖2PB‖ (R_1² + 3μ̄R_1 + max_{|x_1| ≤ μ̄}{|3x_1² − k_1|}) / (σθ_1 Q_m),

1/M_2(c) = ‖2PB‖ |k_2| / (σθ_2 Q_m).    (38)

Now, only the T_i for each i need to be determined. To this end, the closed loop system dynamics (35) are bounded as in (8) and (9):

|f_1(x, x_e)| ≤ L_1‖x‖ + D_1‖x_e‖,  |f_2(x, x_e)| ≤ L_2‖x‖ + D_{2,μ}‖x_e‖,  ∀x s.t. ‖x‖ ≤ μ.

Comparing with (35), the following can be arrived at:

L_1 = ‖r_1(Ā)‖,  D_1 = 0,  L_2 = ‖r_2(Ā)‖,
D_{2,μ} = √((R_1² + 3μ̄R_1 + max_{|x_1| ≤ μ̄}{|3x_1² − k_1|})² + k_2²).

In the simulation results presented here, the following gains and parameters were used:

K = [−5, −3],  Q = I,  θ_1 = 0.9,  θ_2 = 0.1,  σ = 0.9,  μ̄ = μ,
x(0) = [2.8, 2.6]^T,  x_s(0) = [2.9, 2.7]^T.    (39)

Notice that M_2(c) is a constant, independent of c; that is why θ_2 has been chosen much smaller than θ_1. The parameter μ̄ has been chosen equal to μ. To be consistent with asynchronous transmissions, the initial value x_s(0) was chosen to be different from x(0). The simulation time was chosen as T_sim = 10 s.

For the chosen parameters and initial conditions, the initial value of the Lyapunov function is V(0) = 8.574. Thus, the initial state of the system is well within the chosen region of attraction S(c). The event-trigger parameters were obtained as [w_1, w_2] = [0.45, 0.832] and [T_1, T_2] = [4, 3.4] ms, which were also the minimum inter-transmission times. The average inter-transmission times of the sensors over the simulated duration were [T̄_1, T̄_2] = [4.2, 26.2] ms. Thus, for sensor 1, the average inter-transmission interval is only marginally better than the minimum. The numbers of transmissions by sensors 1 and 2 were 2366 and 382, respectively.
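For this example, the constants in (38) and the bounds (8)-(9) reduce to a few closed-form expressions. The sketch below (ours) evaluates them for the gains in (39); the level c, and hence μ, is a user-supplied assumption, and μ̄ is taken equal to μ as in the simulations.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Gains and weights as in (39); the level c is assumed to be supplied.
k1, k2 = -5.0, -3.0
K = np.array([[k1, k2]])
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
sigma, theta1, theta2 = 0.9, 0.9, 0.1

Abar = A + B @ K
P = solve_continuous_lyapunov(Abar.T, -Q)       # P*Abar + Abar^T*P = -Q
p_m = np.min(np.linalg.eigvalsh(P))
Q_m = np.min(np.linalg.eigvalsh(Q))
nPB = np.linalg.norm(2 * P @ B)                 # ||2PB||

def trigger_constants(c):
    mu = np.sqrt(c / p_m)                       # max ||x|| on S(c)
    m_cub = max(abs(3 * mu**2 - k1), abs(-k1))  # max |3 x1^2 - k1| over |x1| <= mu
    gamma1 = lambda s: nPB * (s**3 + 3 * mu * s**2 + m_cub * s) / (sigma * theta1 * Q_m)
    # R_1 = gamma_1^{-1}(mu), found by bisection
    lo, hi = 0.0, mu
    while gamma1(hi) < mu:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gamma1(mid) < mu else (lo, mid)
    R1 = 0.5 * (lo + hi)
    inv_M1 = nPB * (R1**2 + 3 * mu * R1 + m_cub) / (sigma * theta1 * Q_m)
    inv_M2 = nPB * abs(k2) / (sigma * theta2 * Q_m)
    # Lipschitz-type constants of (8)-(9) for this example
    L1, D1 = np.linalg.norm(Abar[0, :]), 0.0
    L2 = np.linalg.norm(Abar[1, :])
    D2 = np.sqrt((R1**2 + 3 * mu * R1 + m_cub)**2 + k2**2)
    return 1.0 / inv_M1, 1.0 / inv_M2, L1, D1, L2, D2
```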
Figures 2a and 2b show the evolution of the Lyapunov function and of its derivative along the trajectories of the closed loop system, respectively. Figures 2c and 2d show the inter-transmission times and the cumulative frequency distribution of the inter-transmission times, D_{x_i}(T, T_sim), for each of the sensors. The sharp rise of the cumulative distribution curve for sensor 1 indicates that its event-triggered transmissions are nearly equivalent to time-triggered transmissions. On the other hand, the slow rise of the cumulative distribution curve of sensor 2 demonstrates the usefulness of event-triggering in its case.
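A compact simulation skeleton tying the pieces together is sketched below (ours; the forward-Euler integration, the step size and the numerical values of w_i and T_i are placeholders). It reproduces the structure of the nonlinear example: plant (33), controller (34) driven by the held samples, and one decentralized trigger of the form (15)/(22) per sensor.

```python
import numpy as np

k1, k2 = -5.0, -3.0
w = [0.45, 0.832]          # w_i   (placeholder values)
T_min = [0.004, 0.0034]    # T_i in seconds (placeholder values)

def plant(x, u):                        # eq. (33)
    return np.array([x[1], x[0]**3 + u])

def controller(xs):                     # eq. (34), using the held samples x_s
    return k1 * xs[0] + k2 * xs[1] - xs[0]**3

dt, t_end = 1e-4, 10.0
x = np.array([2.8, 2.6])
xs = np.array([2.9, 2.7])               # initial sampled data x_s(0)
t_last = np.array([-T_min[0], -T_min[1]])   # zeroth instants t^{x_i}_0 = -T_i
transmissions = [[], []]

t = 0.0
while t < t_end:
    # decentralized event-triggers, one per sensor (dwell time + local check)
    for i in range(2):
        if t - t_last[i] >= T_min[i] and abs(xs[i] - x[i]) >= w[i] * abs(x[i]):
            xs[i], t_last[i] = x[i], t
            transmissions[i].append(t)
    x = x + dt * plant(x, controller(xs))   # forward-Euler step of the plant
    t += dt

gaps = [np.diff(ts) for ts in transmissions]
print([g.min() if g.size else None for g in gaps])   # observed minimum gaps
```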