Chapter 2 Event-Triggered Sampling


In this chapter, some general ideas and basic results on event-triggered sampling are introduced. The process considered is described by a first-order stochastic differential equation. The goal is to give the reader a general understanding of how event-triggered sampling leads to better control performance and how sampling schemes can be designed in an optimal sense.

2.1 Periodic and Event-Based Sampling

Consider the following continuous-time scalar stochastic system

    dx = u dt + dv,    (2.1)

where x(·) is the state satisfying x(0) = 0, v(·) is a Wiener process with unit incremental variance, and u(·) is the control input. In this chapter, two related problems are considered for this system: the first compares periodic sampling with event-triggered sampling, and the second compares optimal deterministic control with event-triggered control. The advantages of event-triggered sampling (in terms of a smaller state variance) and of event-triggered control (in terms of a smaller quadratic state cost) will be demonstrated.

In this section, the goal of controlling the system in Eq. (2.1) is to keep the state x(t) close to the origin. Conventional periodic sampling and event-triggered sampling (also known as Lebesgue sampling) are compared in terms of the distribution and variance of x(t).

© Springer International Publishing Switzerland 2016. D. Shi et al., Event-Based State Estimation, Studies in Systems, Decision and Control 41, DOI 10.1007/978-3-319-26606-0_2
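Although the analysis below is entirely analytic, the system (2.1) is easy to experiment with numerically. The following minimal Euler–Maruyama sketch (step size, horizon, and path count are illustrative choices, not from the text) checks that with u = 0 the state at time t = 1 is distributed as N(0, 1):

```python
import random

# Euler-Maruyama discretization of (2.1), dx = u dt + dv, with u = 0.
# With unit incremental variance, x(t) ~ N(0, t), so the sample variance
# of x(1) over many paths should be close to 1.
random.seed(3)
dt = 1e-3
n_paths, n_steps = 2000, 1000          # horizon t = n_steps * dt = 1

finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        u = 0.0                        # uncontrolled in this sketch
        x += u * dt + random.gauss(0.0, dt ** 0.5)   # one E-M step
    finals.append(x)

var = sum(v * v for v in finals) / n_paths
print(var)                             # close to 1.0
```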

2.1.1 Periodic Sampling

For periodic sampling, we assume the sampling period to be h and assume that the controller adopts a zero-order-hold strategy, that is, u(t) is constant during every sampling period. It is well known that the state variance is minimized by a minimum variance controller, and the resulting sampled system in discrete time becomes

    x(t + h) = x(t) + h u(t) + e(t),    (2.2)

where e(t) ~ N(0, h) is governed by the underlying Brownian motion. The corresponding mean variance over one sampling period is given by

    V = (1/h) ∫_0^h E x²(t) dt
      = (1/h) J_e(h) + (1/h) E( x₀² Q₁(h) + 2 x₀ Q₁₂(h) u₀ + u₀² Q₂(h) )
      = (1/h) [ R₁(h) S(h) + J_e(h) ],

where Q₁(h) = h, Q₁₂(h) = h²/2, Q₂(h) = h³/3, R₁(h) = h, J_e(h) = h²/2, x₀ is the initial condition at the beginning of the sampling period, u₀ is the corresponding constant control input, and S(h) is obtained by solving the following Riccati equation

    S = Φ S Φ + Q₁ − L R L,
    L = R⁻¹ (Γ S Φ + Q₁₂),
    R = Q₂ + Γ S Γ,

with Φ = 1 and Γ = h. We then have S(h) = √3 h/6 and the optimal control law

    u₀ = −L x₀ = −((√3 + 3)/((√3 + 2) h)) x₀.

Generally speaking, the calculations above amount to solving an optimal control problem for the discrete-time system in (2.2), similar to a discrete-time linear quadratic Gaussian regulation problem, with the optimal control obtained from the corresponding Riccati equation. Notice also that the objective function over each sampling period depends on the state at the beginning of that period. Since the optimal control problem is solved backward in a recursive fashion, the sum of the state variances over all sampling periods depends only on the initial state; when the time horizon is very large, the term involving the initial condition vanishes by averaging. Based on the above calculations, the associated state variance is given by

    V_R = (3 + √3) h / 6.    (2.3)

From Eq. (2.3), we observe that the state variance is proportional to the sampling period h. The intuition is that the faster we sample and control the system, the smaller the state variance becomes.

2.1.2 Event-Based Sampling

Next we introduce event-based sampling, in which the system is sampled only when x(t) leaves the interval [−d, d]. In other words, the control input u is applied only when |x(t)| = d. We call the condition |x(t)| = d an event, and when this event happens, an impulse control is applied that resets the system state to zero. With this control strategy, the closed-loop system becomes a Markovian diffusion process, which is well studied in the literature (see Feller (1954)). Denote by T±d the exit time of the process, that is, the first time the state x(t) reaches the boundary |x(t)| = d starting from the origin (namely, x(0) = 0). Since x²(t) − t is a martingale between two impulses (Feller 1954), we have E( x²(T±d) − T±d ) = 0. The mean exit time can thus be obtained as

    h_L = E(T±d) = E( x²(T±d) ) = d².

The average sampling period therefore equals h_L = d². To calculate the steady-state variance of the state, notice that the stationary probability distribution of x is given by the stationary solution of the Kolmogorov forward equation for the Markovian process (Feller 1954):

    (1/2) f_xx(x) − (1/2) f_x(d) δ(x) + (1/2) f_x(−d) δ(x) = 0    (2.4)

with the boundary conditions f(−d) = 0 and f(d) = 0. This differential equation has the explicit solution

    f(x) = (d − |x|)/d².

From the above expression, the distribution of x is symmetric and triangular on the interval [−d, d], and the variance is easily calculated as

    V_L = d²/6 = h_L/6.
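Both schemes can be checked numerically. The sketch below (all parameters are illustrative choices) first iterates the Riccati recursion from the periodic-sampling derivation to recover S(h) and V_R, and then runs a crude Monte Carlo of the threshold scheme to recover h_L ≈ d² and V_L ≈ d²/6; the final ratio anticipates the comparison of the next subsection.

```python
import math, random

# Part 1: iterate the scalar Riccati recursion to its fixed point and
# compare with the closed forms S(h) = sqrt(3)h/6, V_R = (3+sqrt(3))h/6.
h = 1.0
Phi, Gamma = 1.0, h
Q1, Q12, Q2 = h, h ** 2 / 2, h ** 3 / 3

S = 0.0
for _ in range(1000):
    R = Q2 + Gamma * S * Gamma
    L = (Gamma * S * Phi + Q12) / R
    S = Phi * S * Phi + Q1 - L * R * L
V_R = (Q1 * S + h ** 2 / 2) / h          # (1/h)[R1(h)S(h) + Je(h)]
print(S, math.sqrt(3) * h / 6)           # fixed point matches closed form
print(V_R, (3 + math.sqrt(3)) * h / 6)

# Part 2: Monte Carlo of Lebesgue sampling with threshold d chosen so that
# h_L = d^2 = h. Crossings are only detected on the simulation grid, which
# biases both estimates slightly upward.
random.seed(0)
d = h ** 0.5
dt = 1e-3
n_steps = 1_000_000

x, resets, sum_x2 = 0.0, 0, 0.0
for _ in range(n_steps):
    x += random.gauss(0.0, dt ** 0.5)    # dx = dv between events
    if abs(x) >= d:                      # event: impulse resets the state
        x = 0.0
        resets += 1
    sum_x2 += x * x

mean_period = n_steps * dt / resets      # ~ d^2 = h_L
V_L = sum_x2 / n_steps                   # ~ d^2/6 = h_L/6
print(mean_period, V_L)
print(V_R / V_L)                         # ~ 3 + sqrt(3) ~ 4.7
```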

2.1.3 Comparison

Assuming that periodic sampling and event-based sampling have the same average sampling rate, namely h_L = h, we have

    V_R / V_L = 3 + √3 ≈ 4.7.

In other words, we must sample about 4.7 times faster using periodic sampling in order to achieve the same state variance as under event-based sampling.

The above comparison applies different control laws to the two sampling strategies: minimum variance control for periodic sampling but impulse control for event-based sampling. Now let us consider the case when periodic sampling also uses impulse control. The resulting process is easily seen to be a Wiener process that restarts from zero every h time units, and the average variance is given by

    (1/h) ∫_0^h E x²(t) dt = (1/h) ∫_0^h t dt = h/2.

Thus, comparing with periodic sampling with impulse control at the same average sampling rate, we have

    V_R / V_L = 3,

which implies that with impulse control (rather than the minimum variance control for the zero-order-hold conversion), we need to sample only 3 times (rather than 4.7 times) faster using periodic sampling to achieve the same state variance as under event-based sampling and control.

In summary, the basic observation in this section is that, compared with classic periodic sampling, event-based sampling can maintain the control performance at a much reduced sampling or communication cost, or achieve much improved performance at the same sampling or communication cost.

2.2 Optimal Stopping Approach to Event-Triggered Sampling

In this section, we consider a different problem setup for the system in (2.1). Over a finite time horizon [0, T], suppose that we use a zero-order-hold control strategy, i.e., u(t) remains constant once it is applied, and that we can change the value of u only once during the horizon [0, T]. An example of this control strategy is shown in Fig. 2.1.
To be specific, let τ ∈ [0, T] be the switching time; the control law is defined as:

Fig. 2.1 A realization of x(t) under a switching control u(t)

    u(t) = { u₀, if 0 ≤ t < τ,
           { u₁, if τ ≤ t ≤ T.

The problem to be considered here is stated below:

Problem 2.1 How should the time τ of switching the control u be determined such that the following quadratic objective function is minimized?

    J = E ∫_0^T x²(s) ds

Obviously, the optimal solution to the above problem depends on the set of admissible control policies considered. In the following, we discuss two types of control policies for the system in (2.1). The first class of policies is deterministic switching, which changes the value of u at a precalculated time instant. The second class is event-triggered switching, in which the value of u is changed when a certain event happens. The optimal decision rules for these two classes of policies will then be compared.

2.2.1 Choice of Terminal Control

The cost function J can be decomposed as follows:

    J = E ∫_0^T x²(s) ds
      = E ∫_0^τ x²(s) ds + E ∫_τ^T x²(s) ds.

The second term of the above equation represents the terminal cost accumulated after the control u is switched from u₀ to u₁ at t = τ. Let δ := T − τ and let L(x(τ), u₁, δ) be the conditional terminal cost, namely,

    L(x(τ), u₁, δ) = E[ ∫_τ^T x²(s) ds | x(τ), u₁, δ ].

Then

    E[L(x(τ), u₁, δ)] = E[ E[ ∫_τ^T x²(s) ds | x(τ), u₁, δ ] ]
      = E[ x²(τ) δ + u₁² δ³/3 + x(τ) u₁ δ² + δ²/2 ]
      = E[ (δ³/3)( u₁ + 3x(τ)/(2δ) )² + x²(τ) δ/4 + δ²/2 ].

From the above expression, one immediately observes that the optimal choice of the terminal control u₁ is the following linear feedback law

    u₁ = −3x(τ) / (2(T − τ)).    (2.5)

Note that the optimal control law u₁ also depends on the stopping time τ; thus, in principle, these two variables should be jointly designed and optimized to obtain the best performance. From the expected terminal cost, we can now write the original cost J as

    J = E[ ∫_0^τ x²(s) ds ] + E[ (1/4) x²(τ)(T − τ) + (1/2)(T − τ)² ].    (2.6)

2.2.2 Optimal Deterministic Switching

We now consider deterministic switching at a known time θ ∈ [0, T], in which case the switching time does not depend on the x-process. We will later compare the performance of the optimal event-based switching with this optimal deterministic switching. Since both u₀ and θ are not random, the cost function J can be written as

    J = u₀² θ³/3 + θ²/2 + ( u₀² θ² + θ )(T − θ)/4 + (T − θ)²/2.
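Both the completing-the-square step and the expression for J just obtained can be sanity-checked numerically; in this sketch the values of x(τ), δ, T, and the search grids are arbitrary illustrative choices.

```python
# (i) Check the completed square for the conditional terminal cost:
# the minimizer is u1 = -3x/(2*delta), with minimum x^2*delta/4 + delta^2/2.
def terminal_cost(x, u1, delta):
    return x * x * delta + u1 ** 2 * delta ** 3 / 3 \
        + x * u1 * delta ** 2 + delta ** 2 / 2

x, delta = 0.7, 2.0
u1_star = -3 * x / (2 * delta)
best = terminal_cost(x, u1_star, delta)
print(best, x * x * delta / 4 + delta ** 2 / 2)     # the two agree

# (ii) Brute-force minimization of J over the pair (u0, theta).
T = 1.0
def J(u0, theta):
    return u0 ** 2 * theta ** 3 / 3 + theta ** 2 / 2 \
        + (u0 ** 2 * theta ** 2 + theta) * (T - theta) / 4 \
        + (T - theta) ** 2 / 2

thetas = [i / 200 for i in range(201)]      # theta in [0, T]
u0s = [i / 100 - 1 for i in range(201)]     # u0 in [-1, 1]
u0_star, theta_star = min(((u, th) for u in u0s for th in thetas),
                          key=lambda p: J(*p))
print(u0_star, theta_star, J(u0_star, theta_star))
```

The search returns u₀ = 0 and θ = T/2 with cost 0.3125 T², anticipating the analytic minimization carried out next.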

As θ, T − θ and u₀² are all non-negative, for any choice of θ the optimal u₀ is u₀ = 0. Therefore

    J = θ²/2 + (T − θ)²/2 + θ(T − θ)/4,

from which the optimal deterministic switching time θ* is given by θ* = T/2, and the corresponding minimum cost J(θ*) equals

    J(θ*) = 5T²/16 ≈ 0.31 T².    (2.7)

2.2.3 Optimal Event-Based Switching

We now compute the optimal choice of (u₀, τ), which involves an event-triggering condition for selecting τ. Let I₁ and I₂ be defined as

    I₁ = ∫_0^T x²(s) ds,    I₂ = ∫_τ^T x²(s) ds,

where, in both integrals, x(·) denotes the process obtained by keeping the control at the constant value u₀ over the whole horizon. Then

    E(I₁) = E ∫_0^T ( x₀ + u₀ s + B_s )² ds
          = x₀² T + x₀ u₀ T² + u₀² T³/3 + T²/2
          = u₀² T³/3 + T²/2,

where B_s denotes the Brownian motion at time s and the last equality uses x₀ = x(0) = 0. For E(I₂), we have

    E(I₂) = E[ ∫_τ^T x²(s) ds ]
          = E[ E[ ∫_τ^T x²(s) ds | τ, x(τ), u₀ ] ]
          = E[ E[ ∫_τ^T ( x(τ) + u₀(s − τ) + B_s − B_τ )² ds | τ, x(τ), u₀ ] ]
          = E[ x²(τ)(T − τ) + x(τ) u₀ (T − τ)² + u₀² (T − τ)³/3 + (T − τ)²/2 ].
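The two expectations just computed can be checked by simulation for a deterministic switching time, in which case the conditioning is trivial. The values u₀ = 0.5, τ = 0.4, T = 1, the step size, and the path count below are arbitrary test choices; the process keeps the constant control u₀ on all of [0, T], as in the definitions of I₁ and I₂.

```python
import random

# Monte Carlo estimates of E(I1) and E(I2) against the closed forms above.
random.seed(4)
u0, tau, T = 0.5, 0.4, 1.0
dt = 1e-3
n_paths = 2000

I1_sum = I2_sum = 0.0
for _ in range(n_paths):
    x, t = 0.0, 0.0
    while t < T:
        x += u0 * dt + random.gauss(0.0, dt ** 0.5)
        t += dt
        I1_sum += x * x * dt          # accumulate int_0^T x^2 ds
        if t >= tau:
            I2_sum += x * x * dt      # accumulate int_tau^T x^2 ds

E_I1 = I1_sum / n_paths
E_I2 = I2_sum / n_paths
# closed forms (with x0 = 0, E x(tau) = u0*tau, E x^2(tau) = u0^2 tau^2 + tau):
E_I1_formula = u0 ** 2 * T ** 3 / 3 + T ** 2 / 2
E_I2_formula = (u0 ** 2 * tau ** 2 + tau) * (T - tau) \
    + (u0 * tau) * u0 * (T - tau) ** 2 \
    + u0 ** 2 * (T - tau) ** 3 / 3 + (T - tau) ** 2 / 2
print(E_I1, E_I1_formula)    # both ~ 0.583
print(E_I2, E_I2_formula)    # both ~ 0.498
```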

From (2.6), we further obtain

    J = E(I₁) − E(I₂) + E[ (1/4) x²(τ)(T − τ) + (1/2)(T − τ)² ]
      = u₀² T³/3 + T²/2 − E[ (3/4) x²(τ)(T − τ) + x(τ) u₀ (T − τ)² + (1/3) u₀² (T − τ)³ ].

The optimal choice of u₀ turns out to be zero, just as in the deterministic switching case. This is quite intuitive: the standard Brownian motion is a zero-mean martingale, and a nonzero control u₀ would steer the mean away from the origin. We omit the derivation of this fact and refer the interested reader to Rabi et al. (2008) for the details. Since u₀ = 0, the cost J simplifies to

    J = T²/2 − (3/4) E[ x²(τ)(T − τ) ].

Therefore the remaining problem is to find an optimal stopping time τ such that E[x²(τ)(T − τ)] is maximized. This problem can be solved explicitly using standard methods of optimal stopping. The resulting optimal stopping policy is given by the symmetric quadratic envelope

    τ* = inf{ t : x²(t) ≥ √3 (T − t) }.    (2.8)

Applying this optimal stopping policy, the expected control performance is given explicitly as

    J(τ*) = (3√3 − 1) T²/16 ≈ 0.26 T².    (2.9)

Comparing with J(θ*) in (2.7), the cost of event-triggered switching is only about 84 % of that achieved using deterministic switching.

2.3 Summary

In the previous two sections, we have seen the advantages brought by event-triggered sampling and control when compared with conventional periodic sampling and deterministic control. The developed results, given in terms of closed-form analytic solutions, are normally limited to low-order systems. For general higher-order

systems, event-triggered sampling and control in principle outperform their periodic and deterministic counterparts, but it is normally difficult to provide an analytic treatment as in the scalar first-order case.

2.4 Notes and References

The results comparing the performance of periodic and event-based sampling for the control of a first-order stochastic system were developed by Åström and Bernhardsson (1999), the pioneers of event-triggered systems. These results were extended to the second-order case in Meng and Chen (2012). The counterpart on state estimation for first- and second-order systems was developed in Wang and Fu (2014). The optimal stopping approach to event-triggered sampling was originally introduced in Rabi et al. (2006, 2008), and an enriched version of the results was provided in Rabi et al. (2012). A closely related problem to event-based sampling is optimal paging and registration in cellular networks, for which some interesting results were developed by Hajek et al. (2008). In general, optimal event-triggered sampling problems are mathematically difficult to solve. Since this book is mainly focused on state estimator design, the interested reader is referred to the papers of Åström and Bernhardsson (1999), Meng and Chen (2012), Wang and Fu (2014), Rabi et al. (2006, 2008, 2012), Hajek et al. (2008) and the references therein for the detailed technical difficulties and obtained results, although it is worth noting that the development of event-based sampling forms an important part of the theory of event-based systems.

References

Åström K, Bernhardsson B (1999) Comparison of periodic and event based sampling for first-order stochastic systems. In: Preprints of the 14th IFAC world congress

Feller W (1954) Diffusion processes in one dimension. Trans Am Math Soc 77:1–31

Hajek B, Mitzel K, Yang S (2008) Paging and registration in cellular networks: jointly optimal policies and an iterative algorithm.
IEEE Trans Inf Theory 54(2):608–622

Meng X, Chen T (2012) Optimal sampling and performance comparison of periodic and event based impulse control. IEEE Trans Autom Control 57(12):3252–3259

Rabi M, Moustakides G, Baras J (2006) Multiple sampling for estimation on a finite horizon. In: Proceedings of the 45th IEEE conference on decision and control, pp 1351–1357

Rabi M, Johansson K, Johansson M (2008) Optimal stopping for event-triggered sensing and actuation. In: Proceedings of the IEEE conference on decision and control, pp 3607–3612

Rabi M, Moustakides G, Baras J (2012) Adaptive sampling for linear state estimation. SIAM J Control Optim 50(2):672–702

Wang B, Fu M (2014) Comparison of periodic and event-based sampling for linear state estimation. In: Proceedings of the IFAC world congress
