Diploma Thesis Networked Control Systems: A Stochastic Approach


UNIVERSITY OF PATRAS
SCHOOL OF ENGINEERING
DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING

Diploma Thesis

Networked Control Systems: A Stochastic Approach

Panteleimon M. Loupos

Advisor: Prof. George V. Moustakides

Patras, June 2011


Στον Πατέρα (To my Father)


Acknowledgements

It is difficult to overstate my deep sense of gratitude to my advisor, Professor George V. Moustakides, for his enormous support and constant guidance during my last two years of undergraduate studies. Above all, I would like to thank him for inspiring in me a love of Science. Δάσκαλε, ευχαριστώ για όλα (Teacher, thank you for everything).

I am grateful to my uncle, Professor Nicos Christodoulakis, and to Professor George Papavasilopoulos, for their unfailing love and unconditional encouragement to pursue my dreams. I am also greatly indebted to my teachers Tassos Bountis and George Bitsoris for helping me mould my personality and instilling in me values and ideals forgotten in our day and age. They are real pedagogues. Last but not least, I would like to express my appreciation to Professor Dimitris Toumpakaris for our fruitful discussions and rewarding interaction, and to my friend Panagiotis Niavis for his ingenious lessons on programming and his witty sense of humor.


Contents

List of Figures

1 Introduction
  1.1 General about Networked Control Systems
  1.2 Evolution of Control Theory
  1.3 Fundamental Issues in NCS
  1.4 This Dissertation

2 Event-Triggered Sampling
  2.1 Lebesgue Sampling
    2.1.1 Lebesgue Integral
    2.1.2 Comparison of Riemann and Lebesgue Sampling
  2.2 Level Crossing Sampling
  2.3 Optimal Stopping Times
    2.3.1 Backward Induction Method
  2.4 Optimal Estimation with Limited Measurements over a Finite Time Horizon

3 Conclusions and Future Research

Bibliography

List of Figures

1.1 Encoder blocks map measurements into streams of symbols that can be transmitted across the network. Encoders serve two purposes: they decide when to sample a continuous-time signal for transmission, and what to send through the network. Conversely, decoder blocks perform the task of mapping the streams of symbols received from the network into continuous actuation signals. (reproduced from [3])
1.2 Direct and Hierarchical Structure of an NCS. (reproduced from [12])
1.3 Evolution of Control Theory. (reproduced from [2])
1.4 Actuator and Sensor Delays in an NCS. (reproduced from [10])
2.1 Riemann versus Lebesgue Integral. (reproduced from Britannica)
2.2 Integrator with Riemann (blue) and Lebesgue sampling (red)
2.3 Comparison of V_L and V_R in a first-order stochastic system. (reproduced from [1])
2.4 LCS scheme for d = 1. (reproduced from [5])
2.5 Gaussian sampling thresholds for σ² = …
2.6 Numerical computations for 30, 50 and 100 samples
2.7 V_n and C_n for N = 10 and a = 0.…
2.8 Sampling thresholds of the AR(1) for different values of a


Chapter 1

Introduction

1.1 General about Networked Control Systems

Networked Control Systems (NCS) are spatially distributed systems comprising sensors, actuators and controllers whose operation is coordinated through a shared, band-limited digital communication network. However, one may view the notion of NCS from a more general perspective. For example, in the context of biological systems, the components might be identified as neurons, muscles, neural pathways and the cerebral cortex. Thus, the universal feature of NCS is the spatial distribution of their components. The components may operate in an asynchronous manner, but they cooperate in order to achieve some overall objective. Figure 1.1 illustrates the general architecture of an NCS.

Generally speaking, there are two major groups of systems in which the networked control system configuration can be applied, namely complex control systems and remote control systems. A complex control system is a large-scale system containing several subsystems that collaborate and share resources. In the past, it was challenging to install and maintain such systems, because direct electrical wiring was needed to connect their components. The advent of the networked control system configuration has greatly reduced the

Figure 1.1: Encoder blocks map measurements into streams of symbols that can be transmitted across the network. Encoders serve two purposes: they decide when to sample a continuous-time signal for transmission, and what to send through the network. Conversely, decoder blocks perform the task of mapping the streams of symbols received from the network into continuous actuation signals. (reproduced from [3])

complexity of the connections, as it provides more flexibility in installation and eases maintenance and troubleshooting. On the other hand, the term remote control system refers to a system that is controlled by a controller located far away (also known as tele-operation control). Remote control systems have been used for two reasons, namely convenience and safety. A remote control system saves the place-to-place traveling time of human operators and protects them from danger in hazardous environments such as space or war zones. In the past, a remote control system typically required a specific connection link or medium, which was often limited to a point-to-point connection and had an expensive set-up cost. With the evolution of communication technologies, an emerging alternative for expanding remote systems to comprise more connections is to utilize wireless data network resources by configuring the system as an NCS.

Moreover, there are two general approaches to designing an NCS, as depicted in figure 1.2.

Direct Structure: An NCS employing the direct structure approach is composed of a controller

Figure 1.2: Direct and Hierarchical Structure of an NCS. (reproduced from [12])

and a remote system containing a physical plant, together with sensors and actuators attached to the plant. The control signal is encapsulated in a frame or packet and is sent to the plant via the network. The plant then returns the system output to the controller by putting the sensor measurement into a frame or packet as well. In a practical system, multiple controllers can be implemented in a single hardware unit that manages multiple NCS loops in the direct structure.

Hierarchical Structure: In this case, there are several subsystems forming a hierarchical structure and a main controller. Each subsystem contains a sensor, an actuator, and its own controller. Periodically, the main controller computes and sends the reference signal to the subsystem in a packet via the network. The remote system then processes the reference signal to perform local closed-loop control, and returns the sensor measurement to the main controller for networked closed-loop control.

Note, however, that the aforementioned difference between the two structures

has nothing to do with the network. That is, from the perspective of NCS, these two structures do not differ significantly.

1.2 Evolution of Control Theory

Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems. Although modern control theory relies on mathematical models for its implementation, control systems of various types date back to antiquity, long before mathematical tools were available. The first systematic approach in this field was made by the physicist James Clerk Maxwell in 1868 in his publication entitled On Governors. The major breakthrough, however, came with the work of Nyquist in 1932, as it provided general methods of design and analysis that could be applied to virtually any feedback system. During World War II, the need to design fire-control systems, guidance systems and other military equipment gave a great impetus to automatic control theory. In particular, the pioneering work of Nyquist, Bode, Nichols, and Evans laid a solid theoretical foundation for frequency-domain methods. After 1950, with the advent of digital computers and the microprocessor (in 1969), control design gradually shifted away from frequency-domain techniques, and digital control has become the de facto design method. After many years of research, one may be confident that the foundations of digital control are now firmly established.

As mentioned before, control systems with spatially distributed components have been used in several applications, such as refineries, power plants, and airplanes. In these applications, the components of the systems were connected with hardwired connections. The high cost of wiring and the difficulty of introducing new components, together with the availability of low-cost, low-power small embedded processors, have made the networked control system configuration a necessity, giving birth to a completely new direction of control theory, that of NCS. Thus, low-cost microprocessors can be installed at remote locations, and information can be transmitted

Figure 1.3: Evolution of Control Theory. (reproduced from [2])

reliably via shared digital networks and wireless connections. Although NCS already have a great commercial impact in industrial implementations, mainly through ad-hoc approaches, there is increasing interest in applying NCS within a more general framework. Consequently, NCS have been finding application in a broad range of areas such as the automotive and aerospace industries, mobile sensor networks, remote surgery, automated highway systems, and unmanned aerial vehicles.

1.3 Fundamental Issues in NCS

In this section, we briefly discuss the most basic problems occurring in NCS (see [11] for more details).

Network-Induced Delay: Since NCS are composed of a large number of interconnected devices operating over a network, data transfer between the controller and the remote system induces network delays. Network-induced delay can be constant, time-varying, or random. Its characteristics depend on the medium access control (MAC) protocol of the control network, on the scheduling method used, and on other uncertain factors of the medium. It is responsible for degrading the control system's quality of performance (QoP), and can even destabilize the system. More specifically, network-induced delays are categorized based on the direction

of data transfer. Thus, we have the sensor-to-controller delay τ_sc and the controller-to-actuator delay τ_ca, as depicted in figure 1.4. Note that we can easily deal with the consequences of the delay τ_sc by using time-stamps and state estimators. On the contrary, it is much more difficult to cope with the delay τ_ca, since the controller has no information on when the computed control signal will arrive at the actuator.

Figure 1.4: Actuator and Sensor Delays in an NCS. (reproduced from [10])

Finally, apart from the delays τ_sc and τ_ca, there are several other types of delays, the most important of which is the network access delay: the time that a node has to wait, because of competition, in order to get access to the network. Network-induced delay is one of the most important characteristics of NCS, and thorough research has been conducted in this field.

Single-Packet versus Multiple-Packet Transmission: For a variety of reasons, data in networks are transmitted in packets, which are sequences of bytes. There are two different situations in NCS, i.e. single- or multiple-packet transmission. In single-packet transmission, sensor or actuator data are lumped together into one network packet and transmitted at the same time, whereas in multiple-packet transmission, sensor or actuator data are transmitted in separate network packets. Multiple-packet transmission is imposed by bandwidth and packet-size constraints: large amounts of data must be separated into multiple packets to be transmitted. The other main reason is that sensors and actuators in an NCS are often distributed over a large physical area, making it impossible to put the data into one network packet. The main disadvantage of this method is that the controller and actuator have to wait for the arrival of all the data packets before they are able to calculate their actions.

Data Packet Dropout: Data packet dropout often occurs while transmitting data among devices, due to node failures or message collisions, and it is a potential source of instability and poor performance of an NCS. There are two different strategies for dealing with this problem: either to send the packet again or simply to discard it. In communication networks, these two strategies correspond to the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), respectively. In NCS, UDP is used in most applications due to the real-time requirements and robustness of control systems. That is, for real-time feedback control data, it is better to discard the old untransmitted message and transmit a new packet instead, so that the controller receives fresh data for its control calculations. In most cases, lower bounds on the packet transmission rate are computed to determine the amount of data loss that the system can tolerate.

Network Scheduling: The problem of network scheduling in NCS is to assign a transmission schedule to each transmitting device (sensor, controller, actuator) based on a scheduling algorithm (a set of rules that determines the order in which messages are transmitted). The need for a scheduling algorithm is imposed by the limited bandwidth of the network, which prevents all the subsystems from accessing the network resource at the same time.

1.4 This Dissertation

The aim of this diploma thesis is to discuss and present existing techniques that apply event-triggered sampling to linear system state estimation, and to evaluate how these methods affect the overall performance of the system. As already mentioned, sampling rate constraints arise in NCS due to the limited bandwidth available. This restriction on the number of samples clearly affects the mean-square estimation distortion, and the question is how we should choose the sampling instants in order to minimize it. That is, we want to

min E[ (1/N) Σ_{n=1}^{N} (x_n − x̂_n)² ].

The thesis is organized in three chapters. The first chapter constitutes a brief introduction to Networked Control Systems. The applications of NCS and some of the key design issues are introduced. This chapter also contains a brief history of Control Theory, which will hopefully elucidate how the necessity of NCS has emerged. In Chapter 2, which is the core of the thesis, event-triggered sampling strategies are presented. Level-triggered sampling (in particular Lebesgue sampling) and optimal sampling are discussed. Finally, the third chapter is dedicated to the conclusions of the thesis and to potential future research topics.

Chapter 2

Event-Triggered Sampling

Event-triggered sampling is a sampling strategy in which the sampling instants are random variables. It exploits the idea that the system itself should decide when to sample, according to the evolution of its dynamics. Thus, event-triggered sampling offers great versatility, resulting in significant performance gains in the state estimation problem.

2.1 Lebesgue Sampling

In this section, we discuss a particular type of event-triggered sampling, namely Lebesgue sampling. We begin by briefly introducing the Lebesgue integral, and we continue with the presentation of some basic results.

2.1.1 Lebesgue Integral

Everybody is familiar with the notion of the Riemann integral of a function f between limits a and b, which can be interpreted as the area under the graph of f. Although Riemann integration is well behaved for a large class of functions (Riemann

integrable functions), it does not interact well with taking limits of sequences of functions, making such limiting processes difficult to analyze. For this reason, the French mathematician Henri Lebesgue proposed a different way of integration, the so-called Lebesgue integral. The Lebesgue integral is really an extension of the Riemann integral, in the sense that it allows a larger class of functions to be integrable, and it does not succumb to the shortcomings of the latter. For example, the Dirichlet function, which is 0 where its argument is irrational and 1 otherwise, has a Lebesgue integral, but it does not have a Riemann integral. In what follows, the Lebesgue integral of a nonnegative function is defined, and finally the general Lebesgue integral is presented (for more details and analytic proofs see [9]).

Let (E, X, μ) be a measure space, where E is a set, X is a σ-algebra of subsets of E, and μ is a (non-negative) measure on X. In Lebesgue's theory, integrals are defined for a class of functions called measurable functions. A function f is measurable if the pre-image of every closed interval is in X:

f⁻¹([a, b]) ∈ X for all a < b.

We build up an integral

∫_E f dμ = ∫_E f(x) μ(dx)

for measurable real-valued functions f defined on E.

Indicator function: Suppose X is a set with typical element x, and let S be a subset of X. The indicator function of the subset S is a function 1_S : X → {0, 1},

defined as

1_S(x) = 1 if x ∈ S, and 1_S(x) = 0 if x ∉ S.

To assign a value to the integral of the indicator function 1_S of a measurable set S consistent with the given measure μ, we set

∫ 1_S dμ = μ(S).

Simple function: A finite linear combination of indicator functions

Σ_k a_k 1_{S_k},

where the coefficients a_k are real numbers and the sets S_k are measurable, is called a measurable simple function. We extend the integral by linearity to non-negative measurable simple functions. When the coefficients a_k are non-negative, we set

∫ (Σ_k a_k 1_{S_k}) dμ = Σ_k a_k ∫ 1_{S_k} dμ = Σ_k a_k μ(S_k).

Even if a simple function can be written in many ways as a linear combination of indicator functions, the integral will always be the same. If B is a measurable subset of E and s is a measurable simple function, one defines

∫_B s dμ = ∫ 1_B s dμ = Σ_k a_k μ(S_k ∩ B).
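As a quick illustration, the simple-function integral can be evaluated mechanically on a toy finite measure space. The Python sketch below is purely illustrative (the set E, the point weights, and the simple function s are assumptions for the example, not objects from the thesis); it implements ∫_B (Σ_k a_k 1_{S_k}) dμ = Σ_k a_k μ(S_k ∩ B):

```python
from fractions import Fraction

# Toy measure space: E is a finite set, every subset is measurable,
# and mu assigns each point a non-negative weight (hypothetical values).
E = {1, 2, 3, 4, 5}
weight = {1: Fraction(1, 2), 2: Fraction(1, 2), 3: 1, 4: 2, 5: 1}

def mu(S):
    """Measure of a subset S of E: the sum of its point weights."""
    return sum(weight[x] for x in S)

def integrate_simple(terms, B=None):
    """Integral over B of the simple function sum_k a_k * 1_{S_k},
    computed as sum_k a_k * mu(S_k intersect B)."""
    if B is None:
        B = E
    return sum(a * mu(S & B) for a, S in terms)

# s = 3 * 1_{1,2} + 5 * 1_{3,4}
s = [(3, {1, 2}), (5, {3, 4})]
print(integrate_simple(s))            # 3*mu({1,2}) + 5*mu({3,4}) = 3*1 + 5*3 = 18
print(integrate_simple(s, B={2, 3}))  # 3*mu({2}) + 5*mu({3}) = 3/2 + 5 = 13/2
```

Note that restricting to B only shrinks each S_k to S_k ∩ B, exactly as in the formula above.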

Non-negative functions: Let f be a non-negative measurable function on E, which we allow to attain the value +∞. We define

∫_E f dμ = sup { ∫_E s dμ : 0 ≤ s ≤ f, s simple }.

We need to show that this integral coincides with the preceding one, defined on the set of simple functions. When E is a segment [a, b], there is also the question of whether this corresponds in any way to the Riemann notion of integration. It is possible to prove that the answer to both questions is yes. We have thus defined the integral of f for any non-negative extended real-valued measurable function on E. Now, we are going to give the general Lebesgue integral.

The General Lebesgue Integral: Let f⁺ and f⁻ be the positive and the negative part of a function f, respectively. That is,

f⁺ = max{f, 0} and f⁻ = max{−f, 0}.

Note that

f = f⁺ − f⁻ and |f| = f⁺ + f⁻.

If ∫ |f| dμ < ∞, then f is called Lebesgue integrable. In this case, both integrals satisfy

∫ f⁺ dμ < ∞, ∫ f⁻ dμ < ∞,

and it makes sense to define

∫ f dμ = ∫ f⁺ dμ − ∫ f⁻ dμ.

Intuitive interpretation: The main difference between the Lebesgue and Riemann integrals is that the Lebesgue method takes into account the values of the function, subdividing its range instead of just subdividing the interval on which the function is defined, as demonstrated in the following figure. Getting this intuition is the key to understanding the notion of Lebesgue sampling, and this is the reason why all the above were presented.

Figure 2.1: Riemann versus Lebesgue Integral. (reproduced from Britannica)

2.1.2 Comparison of Riemann and Lebesgue Sampling

The traditional approach in designing control systems is to sample the signals periodically in time. By analogy with Riemann integration, we call this scheme Riemann sampling. There are several alternatives to Riemann sampling. The most common one is to sample the signal when it passes certain limits, namely Lebesgue sampling. Because of its simplicity, Lebesgue sampling was very popular in early feedback systems,

and much early work was devoted to such systems. However, the control analysis and design of such systems becomes very difficult and complicated. Moreover, no general theory has been developed, contrary to the well-established theory of time-driven sampled systems. Hence, interest in this direction had faded away until the appearance of hybrid systems. Lately, with the new trend of NCS, Lebesgue sampling has regained popularity.

K. J. Åström and B. M. Bernhardsson in their work [1] investigate the benefits of Lebesgue sampling in the simple cases of an integrator and a first-order stochastic system. In what follows, we present the case of the integrator, and briefly discuss the case of the first-order stochastic system.

Integrator dynamics

The dynamics of an integrator are described by the equation

dx = u dt + dv,

where the disturbance v(t) is a Wiener process with unit incremental variance and u is the control signal. We want to control the system, in the sense that we wish to keep the state close to the origin. We compare traditional periodic sampling with Lebesgue sampling, where control actions are taken only when the output is outside an interval, i.e. −d < x < d. Notice that in order for the comparison to be fair, we use impulse control for both schemes.

Periodic sampling

In the case of periodic sampling with period h, the sampled system is described by

x(t + h) = x(t) + u(t) + w(t).

With impulse control, the state is a Wiener process that is periodically reset to zero, and the mean variance over one sampling period is

V_R = E[x²] = (1/h) ∫₀ʰ E[w²(t)] dt = (1/h) ∫₀ʰ t dt = h/2.

Lebesgue sampling

In this case, impulse control actions are taken when |x(t_k)| = d, resulting in x(t_k⁺) = 0. Using this control law, the closed-loop system becomes a Markovian diffusion process investigated in Feller (1954a). Let τ denote the stopping time, i.e. the first time the process reaches the threshold ±d starting from the origin. Then the fact that x_t² − t is a martingale permits us to compute the mean stopping time between two impulses:

E[τ] = E[x_τ²] = d².

Hence, the average Lebesgue sampling period is h_L = d². The stationary probability distribution of x is given by the stationary solution of the Kolmogorov forward equation, and is found to be

f(x) = (d − |x|)/d²,

which is symmetric and triangular, since −d ≤ x ≤ d. The steady-state variance is

V_L = ∫_{−d}^{d} x² f(x) dx = d²/6 = h_L/6.

In order to compare the results obtained, we assume that the average sampling rates are the same in both schemes, i.e. h_L = h. Thus, we obtain

V_R / V_L = (h/2) / (h/6) = 3.

This means that we must sample 3 times faster with Riemann sampling to get the same mean error variance. The results are illustrated in the following figures, where we have selected d = 0.1 and w = d.
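The ratio V_R/V_L = 3 can also be checked with a quick Monte Carlo experiment. The Python sketch below simulates an Euler discretization of the integrator under both schemes with impulse resets and matched average sampling periods h = h_L = d²; the threshold d = 0.1 follows the text, while the step size dt and horizon T are illustrative choices of ours:

```python
import random

random.seed(0)
dt, d, T = 1e-4, 0.1, 200.0   # Euler step, threshold, horizon (dt and T assumed)
h = d * d                     # matched average sampling periods: h = h_L = d^2
sqdt = dt ** 0.5

def mean_square(lebesgue):
    """Empirical E[x^2] for a Wiener process with impulse resets to zero,
    triggered either by level crossings (Lebesgue) or periodically (Riemann)."""
    x, t_next, acc = 0.0, h, 0.0
    steps = int(T / dt)
    for i in range(steps):
        x += sqdt * random.gauss(0.0, 1.0)    # dx = dv between control impulses
        if lebesgue:
            if abs(x) >= d:                   # level crossing triggers a reset
                x = 0.0
        elif (i + 1) * dt >= t_next:          # periodic sampling triggers a reset
            x, t_next = 0.0, t_next + h
        acc += x * x
    return acc / steps

VR, VL = mean_square(False), mean_square(True)
print(VR / VL)   # close to the theoretical ratio V_R / V_L = 3
```

The discretization slightly inflates both variances (the Lebesgue scheme overshoots the threshold by one Euler step), so the empirical ratio lands near 3 rather than exactly on it.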

Figure 2.2: Integrator with Riemann (blue) and Lebesgue sampling (red).

In the above simulations, the decrease in output variance using Lebesgue sampling is clearly demonstrated. Moreover, in this particular realization there are 83 and 69 control actions with Riemann and Lebesgue sampling, respectively.

A First-Order System

In the case of a first-order stochastic system

dx = ax dt + u dt + dv,

it turns out that the value of a clearly affects the improvement offered by Lebesgue sampling.

Moreover, as shown in figure 2.3, the performance gain of Lebesgue sampling is larger for unstable systems and large sampling periods. This is explained by the fact that for unstable systems the variance V_R increases much faster as the sampling period gets larger. Note that in this simulation, V_R is the minimum variance obtained by solving a Riccati equation.

Figure 2.3: Comparison of V_L and V_R in a first-order stochastic system. (reproduced from [1])

Remarks

At this point, it is important to highlight some important differences between Riemann and Lebesgue sampling. In the case of Riemann sampling, we know exactly all the sampling instants. On the contrary, in Lebesgue sampling, if we are given a time instant t, we cannot draw any conclusion about its distance from an actual sampling instant (including whether it is a sampling instant itself). In Riemann sampling, we sample independently of the evolution of the state of the system. Thus, we have to minimize the variance for every time instant

and then take their average over a period, i.e. (1/h) ∫₀ʰ E[x²(t)] dt. This means that during the first time instants after the sampling point the performance is good, because control has been applied, but it can degrade significantly as the distance from the sampling point increases, because no bound is placed on the evolution of the state. In Lebesgue sampling, however, we confine the evolution of the state to one region. Since we do not know whether a particular time instant is a sampling instant, every time instant is treated in the same way, resulting in the same distribution function for each t.

2.2 Level Crossing Sampling

E. Kofman and J. H. Braslavsky in their work [5] extend the idea of the Lebesgue sampling scheme proposed in Åström et al. [1]. In particular, they introduce the use of a level crossing sampling (LCS) scheme based on hysteretic quantization for feedback stabilization. That is, they divide the range space of the signal into quantization levels regularly spaced by d, and they allow the sampled signal to hold the triggered value until a new sample is generated. Thus, LCS may be viewed as a Lebesgue sampling scheme in which the quantizer includes hysteresis. Let us now present the above idea in more detail.

LCS Scheme

Given a continuous function y(t) : R → R and a quantization interval d, we define the level-crossing sampled sequence {y_s(τ_n)}_{n=0}^∞ through the piecewise constant function y_s(t) : R → R,

y_s(t) = ⌊y(0)/d⌋ d for 0 ≤ t < τ₁, and y_s(t) = y(τ_n) for τ_n ≤ t < τ_{n+1},

where ⌊·⌋ denotes the integer part, and the sampling instants {τ_n}_{n=0}^∞ are defined by

τ_n = inf{ t > τ_{n−1} : |y(t) − y(τ_{n−1})| > d }.

A visualization of the LCS scheme is shown below: the output LCS signal is plotted together with the input signal that generates it, for a quantization interval d = 1.

Figure 2.4: LCS scheme for d = 1. (reproduced from [5])

The main advantage of LCS is that it allows a one-bit coding representation. This is because successive samples always differ by ±d, so we can adopt the following coding strategy: transmit 1 if y_s(τ_n) > y_s(τ_{n−1}), and 0 if y_s(τ_n) < y_s(τ_{n−1}). If no samples are produced, no transmission takes place. Notice, however, that we still have information when no samples are produced; namely, the knowledge that the output y(t) remains within its quantization band.
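The LCS scheme and its one-bit code can be sketched in a few lines of Python. The sampler below operates on a densely discretized signal as a stand-in for the continuous y(t); the sinusoidal test signal, the grid, and d = 0.25 are illustrative assumptions, not values from [5]:

```python
import math

def lcs(t, y, d):
    """Level-crossing sampling with hysteresis of a densely discretized
    signal y on the time grid t, with quantization interval d.
    Returns the sampling instants, the held values y_s(tau_n), and the
    one-bit code (1 for a +d step, 0 for a -d step)."""
    held = math.floor(y[0] / d) * d           # y_s(0) = floor(y(0)/d) * d
    times, values, bits = [], [], []
    for ti, yi in zip(t, y):
        while abs(yi - held) > d:             # the next level was crossed
            step = d if yi > held else -d
            held += step
            times.append(ti)
            values.append(held)
            bits.append(1 if step > 0 else 0)
    return times, values, bits

# One period of a sine, finely discretized (hypothetical test signal).
N, d = 100000, 0.25
t = [2.0 * math.pi * k / N for k in range(N + 1)]
y = [math.sin(tk) for tk in t]
times, values, bits = lcs(t, y, d)
print(bits)   # successive held values always differ by exactly +-d
```

Because successive held values differ by exactly ±d, the bit stream alone (plus the known initial level) lets the receiver reconstruct y_s(t), which is precisely the one-bit coding property discussed above.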

2.3 Optimal Stopping Times

The aim of the present section is to present basic results of the general theory of optimal stopping in the discrete-time case. We begin the presentation by introducing the backward induction method and its mathematical formalism. We then use the method to solve an NCS problem.

2.3.1 Backward Induction Method

We first consider the martingale approach, which is then followed by the Markovian approach.

Martingale Approach

We define on a filtered probability space (Ω, F, (F_n)_{n≥1}, P) a sequence of random variables G = (G_n)_{n≥1}, where G_n represents the gain obtained if we stop the observation of G at time n. Moreover, G is adapted to the filtration (F_n)_{n≥1}, meaning that each G_n is F_n-measurable. This means that we base all our decisions regarding optimally stopping at time n on the information available up to time n (no anticipation is allowed). Having said all that, we are now ready to give the definition of a stopping time.

Definition: A random variable τ : Ω → {1, 2, ..., ∞} is called a Markov time if {τ ≤ n} ∈ F_n for all n ≥ 1. A Markov time is called a stopping time if τ < ∞ P-a.s.

The general optimal stopping problem seeks to solve

V = sup_τ E[G_τ],

where, for E[G_τ] to be well defined for all τ, we impose the condition

E[ sup_{n ≤ k ≤ N} |G_k| ] < ∞.

The above problem involves two tasks, namely to compute the value function V as explicitly as possible and to find the optimal stopping time at which the supremum is attained. In the case of a finite time horizon (N < ∞), we solve this problem using the method of backward induction. That is, we construct a sequence of random variables (V_n^N)_{1≤n≤N}, let time run backward, and proceed recursively as follows. Say we are asleep and wake up at time n = N. Then the only option we have is to stop, and our gain V_N^N equals G_N. If instead we wake up at n = N−1, we have two options, i.e. to stop or to continue. If we stop, our gain V_{N−1}^N equals G_{N−1}, and if we continue optimally, our gain V_{N−1}^N equals E[V_N^N | F_{N−1}]. Notice that we must take the conditional expectation of V_N^N, because our decision must be based on the information contained in F_{N−1} only. It follows that if G_{N−1} ≥ E[V_N^N | F_{N−1}] we have to stop, and if G_{N−1} < E[V_N^N | F_{N−1}] we have to continue. For n = N−2, ..., 1 the considerations continue analogously. To sum up, we have the sequence of random variables (V_n^N)_{1≤n≤N} defined recursively as follows:

V_n^N = G_N for n = N, and
V_n^N = max{ G_n, E[V_{n+1}^N | F_n] } for n = N−1, ..., 1.

Markovian Approach

Now, we consider a time-homogeneous Markov chain X = (X_n)_{n≥1} defined on a filtered probability space (Ω, F, (F_n)_{n≥1}, P_x), which takes values in a measurable space (E, B), where for simplicity we assume that E = R^d for some d ≥ 1 and B = B(R^d) is the Borel σ-algebra on R^d. Recall that a stochastic sequence X = (X_n, F_n)_{n≥1} is called a time-homogeneous Markov chain (in a wide sense) if the random variables X_n are F_n/B-measurable

and the following Markov property (in a wide sense) holds:

P(X_{n+1} ∈ B | F_n)(ω) = P(X_{n+1} ∈ B | X_n)(ω) P-a.s.

for all n ≥ 1 and B ∈ B. The term time-homogeneous refers to the fact that the transition probability P(x, B) does not depend on n. Note also that F_n = F_n^X = σ(X_1, X_2, ..., X_n) is the σ-algebra generated by the first n observations.

It is assumed that the chain X starts at x under P_x for x ∈ E, and that the mapping x ↦ P_x(F) is measurable for each F ∈ F. It follows that the mapping x ↦ E_x[Z] is measurable for each random variable Z. Given now a measurable function G : E → R satisfying the condition

E_x[ sup_{1≤n≤N} |G(X_n)| ] < ∞

for all x ∈ E, the finite-horizon optimal stopping problem becomes

V^N(x) = sup_{1≤τ≤N} E_x[G(X_τ)].

To solve the problem, we set G_n = G(X_n). This way, the solution of the problem reduces to the one given in the martingale approach, where instead of P and E we have P_x and E_x for x ∈ E, exploiting the Markovian structure of the problem.

2.4 Optimal Estimation with Limited Measurements over a Finite Time Horizon

In this section, we consider a sequential estimation problem with limited measurements over a finite horizon (N) in the discrete-time case. As stated in the introduction, state estimation under communication rate constraints is a fundamental issue for NCS. Specifically, there are three types of communication rate constraints (as discussed in [7]): 1) Average rate limit: this is a `soft constraint', which sets an upper limit on the average number of transmissions; 2) Minimum waiting time between transmissions: here, there is a mandatory minimum waiting time between two

successive transmissions from the same node; 3) Finite transmission budget: this is a `hard constraint', which allows a limited number of transmissions from the same node over a given time window. The simplest version of this type of constraint is to set the constraint's window to be the problem's entire time horizon.

In this thesis, we adopt as our communication constraint the simple version of the finite transmission budget. That is, we allow the sensor to transmit exactly M (where M < N) samples of an underlying stochastic process to a supervisor over a noiseless and error-free channel. On the other side of the channel, the supervisor sequentially estimates the state of the process based on the causal sequence of samples it receives. The sensor and the supervisor have the common objective of minimizing the average mean-square distortion error.

Relationship to previous works

In their work [4], Imer and Basar consider the same problem of finite-horizon adaptive sampling in a discrete-time setting. Using the optimal form of the estimator under adaptive sampling (see the partial proof in Proposition 1 of [4]), they derive dynamic programming equations to be satisfied by the optimal sampling policy. Moreover, since there is no known method to carry out this optimization over infinite measurable sets on the real line, they solve it for the specific case of the sets being symmetric intervals.

Rabi, Moustakides and Baras in [7] provide the solution of the problem in a continuous-time setting. They find the optimal sampling times to be the first hitting times of time-varying, double-sided and symmetric envelopes. When the signal follows a Brownian motion, they characterize the optimal sampling envelopes analytically. Moreover, in the case of the Ornstein-Uhlenbeck process, they provide a numerical procedure for computing these envelopes.
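Before stating the problem, it may help to see the backward induction method of Section 2.3.1 in action on a standard toy problem: stopping a sequence of i.i.d. U(0,1) rewards with G_n = X_n. This is an illustrative textbook example of ours, not a system treated in the thesis. Since E[max(X, c)] = (1 + c²)/2 for c ∈ [0, 1], the recursion V_n^N = max{G_n, E[V_{n+1}^N | F_n]} collapses to a scalar recursion, and the continuation values double as stopping thresholds:

```python
import random

def backward_induction(N):
    """Value function V_n^N for stopping i.i.d. U(0,1) rewards by time N:
    V_N^N = E[X] = 1/2 and V_n^N = E[max(X, V_{n+1}^N)] = (1 + (V_{n+1}^N)^2) / 2.
    The optimal rule stops at n < N iff X_n >= V_{n+1}^N."""
    v = [0.0] * (N + 2)
    v[N] = 0.5
    for n in range(N - 1, 0, -1):
        v[n] = (1.0 + v[n + 1] ** 2) / 2.0
    return v

N = 3
v = backward_induction(N)
print(v[1])          # exact value of the game: 0.6953125

# Monte Carlo check of the stopping rule "stop when X_n >= V_{n+1}^N".
random.seed(1)
M, payoff = 200000, 0.0
for _ in range(M):
    for n in range(1, N + 1):
        x = random.random()
        if n == N or x >= v[n + 1]:
            payoff += x
            break
mc = payoff / M
print(mc)            # close to v[1]
```

The simulated average payoff of the threshold rule matches the backward-induction value, illustrating that the recursion both evaluates and attains the supremum.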

General Problem Statement

Given a stochastic sequence {x_n}, we want to:

    min_τ  E[ (1/N) Σ_{n=1}^{N} (x_n − x̂_n)² ]

subject to the constraint of M channel uses, where M < N. Notice that this is a joint problem of finding the optimal sampling policy and the optimal estimator.

We begin with the presentation of the optimal estimation policy and the optimal sampling policy for the case where the sequence {x_n} is i.i.d. We derive and simulate the analytical expression for the sampling thresholds in the i.i.d. Gaussian case. Moreover, we propose a numerical approach for the computation of the sampling thresholds. We then consider the case where the sequence {x_n} follows an autoregressive model of order 1.

Optimal Estimator for the I.I.D. Case

First case: Deterministic Sampling. In this case, the sampling instant τ is known, which permits us to interchange the order of expectation and summation. Thus, we can write:

    E[ Σ_{n=1}^{N} (x_n − x̂_n)² ] = Σ_{n=1}^{τ} E[(x_n − x̂_n)²] + Σ_{n=τ+1}^{N} E[(x_n − x̂_n)²].

Let us examine the term:

    I = E[(x_n − x̂_n)²] = E[x_n²] − 2 E[x_n x̂_n] + E[x̂_n²] = E[x_n²] − 2 x̂_n E[x_n] + x̂_n².

To find the optimal estimator, we set ∂I/∂x̂_n = 0:

    −2 E[x_n] + 2 x̂_n = 0  ⟹  x̂_n = E[x_n] = μ.

Hence, the optimal estimator in the deterministic case is the mean value of the stochastic process.

Second case: Optimal Sampling. The equation of the optimal estimator in this case differs, because the sampling instant τ is now a random variable. Thus, we cannot move the expectation into the sum as we did before. Hence, we adopt the following approach:

    E[ Σ_{n=1}^{N} (x_n − x̂_n)² ] = E[ Σ_{n=1}^{N} (x_n − x̂_n)² 1{τ > n} ] + E[ Σ_{n=1}^{N} (x_n − x̂_n)² 1{τ ≤ n} ].

The use of the indicator function makes the limits of the summation deterministic, which in turn permits us to move the expectation into the sum. Letting the estimator before the sampling instant be a deterministic function of x_0, i.e. x̂_n = φ(n; x_0), we can write for the first term:

    I_a = E[(x_n − x̂_n)² 1{τ > n}] = E[x_n² 1{τ > n}] − 2 x̂_n E[x_n 1{τ > n}] + x̂_n² E[1{τ > n}].

Solving again to find the optimal estimator, we set ∂I_a/∂x̂_n = 0:

    −2 E[x_n 1{τ > n}] + 2 x̂_n E[1{τ > n}] = 0
    ⟹ x̂_n = E[x_n 1{τ > n}] / E[1{τ > n}] = E[x_n 1{τ > n}] / P(τ > n) = E[x_n | τ > n].

The knowledge that τ > n makes the computation of the optimal estimator extremely difficult. What changes now is that at the time instants at which we do not sample, we still

have information, namely that our stochastic process is inside a region and has not crossed the sampling thresholds. Instead of solving for E[x_n | τ > n] directly, we first considered the case of E[x_n | x_0]. Since the stochastic sequence is i.i.d., this turns out to be the mean value, i.e. E[x_n | x_0] = μ. Afterwards, we tried to solve E[x_n | τ > n] for a specific case, that of thresholds |x_n| ≤ λ_n, where for λ_n we used the thresholds found in the case of E[x_n | x_0] = μ. It is:

    E[x_n | τ > n] = E[x_n | |x_1| ≤ λ_1, |x_2| ≤ λ_2, ..., |x_n| ≤ λ_n]
                   = P(x < x_n ≤ x + dx, |x_1| ≤ λ_1, ..., |x_n| ≤ λ_n) / P(|x_1| ≤ λ_1, ..., |x_n| ≤ λ_n)
                   = [P(x_n ∈ dx, |x_n| ≤ λ_n) P(|x_{n−1}| ≤ λ_{n−1}) ··· P(|x_1| ≤ λ_1)] / [P(|x_n| ≤ λ_n) P(|x_{n−1}| ≤ λ_{n−1}) ··· P(|x_1| ≤ λ_1)]
                   = P(x_n ∈ dx, |x_n| ≤ λ_n) / P(|x_n| ≤ λ_n),

where in the third equality we used the fact that the samples are independent. Note that in the case of a symmetric pdf with mean value 0, the above expectation is equal to the mean value as well. This is in fact the case in our simulations, since we treat a zero-mean i.i.d. Gaussian sequence.

Optimal Sampling Policy for the I.I.D. Case

On a decision horizon of length N, we seek the increasing and causal sequence {τ_1, τ_2, ..., τ_M} which minimizes the average mean-square distortion error. Since we have a finite horizon problem, we will use the method of backward induction discussed earlier. We start by examining the optimal choice of a single sampling instant τ_1, i.e. M = 1, where for simplicity we drop the subscript 1. This is because, knowing how to choose τ_{i+1} optimally, we can obtain an optimal choice for τ_i by solving the same problem over the horizon of length τ_{i+1} − 1 this time.
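The backward induction method invoked here can be sketched numerically. The snippet below solves the generic finite-horizon optimal stopping problem V^N(x) = sup_{1 ≤ τ ≤ N} E_x[G(X_τ)] on a finite-state, time-homogeneous Markov chain; the two-state chain, the gain function and the horizon are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Backward induction for finite-horizon optimal stopping on a
# finite-state, time-homogeneous Markov chain.
def optimal_stopping(P, G, N):
    """P: (S, S) transition matrix, G: gain over states, N: horizon.
    Returns value functions V[n] and the stopping regions
    {x : G(x) >= E_x[V_{n+1}(X_{n+1})]} for each n."""
    S = len(G)
    V = np.zeros((N + 1, S))
    stop = np.zeros((N + 1, S), dtype=bool)
    V[N] = G                        # at the horizon we must stop: V_N = G
    stop[N] = True
    for n in range(N - 1, 0, -1):   # n = N-1, ..., 1
        cont = P @ V[n + 1]         # continuation value E_x[V_{n+1}(X_{n+1})]
        V[n] = np.maximum(G, cont)  # V_n = max{G, E_x[V_{n+1}]}
        stop[n] = G >= cont
    return V, stop

# Toy example: state 1 pays a gain of 1, but the chain tends to drift away.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
G = np.array([0.0, 1.0])
V, stop = optimal_stopping(P, G, N=5)
```

As expected, each V[n] dominates the gain G, and the value can only shrink as less time remains.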

Optimal Sampling for a Single Sample

To find the optimal sampling policy for a single sample we want to:

    min_τ  E[ (1/N) Σ_{n=1}^{N} (x_n − x̂_n)² ].

Setting the estimator to be the mean value of the stochastic process, i.e. x̂ = μ = 0, we can write the above as (we drop the 1/N since it is a constant):

    min_τ  E[ Σ_{n=1}^{N} x_n² 1{τ > n} + Σ_{n=1}^{N} x_n² 1{τ < n} ]  ≡  max_τ  E[x_τ²].

The above equation means that we should not transmit the most likely outcomes, and that the sample should be generated only when it contains sufficiently new information. We have the following backward recurrence:

    V¹_N(x_N) = x_N²                                                          for n = N,
    V¹_{N−1}(x_{N−1}) = max{ x_{N−1}², E[V¹_N(x_N)] } = max{ x_{N−1}², σ² }   for n = N − 1,
    V¹_{N−2}(x_{N−2}) = max{ x_{N−2}², E[V¹_{N−1}(x_{N−1})] }                 for n = N − 2,

resulting in the equation:

    V¹_n(x_n) = max{ x_n², E[V¹_{n+1}(x_{n+1})] }    for n = N − 1, N − 2, ..., 1.

V¹_n(x_n) denotes the maximal expected distortion reduction achievable by sampling at a discrete time instant no earlier than n, where the superscript refers to the number of samples allowed. Notice that for every time n there exists a threshold C_n = E[V¹_{n+1}(x_{n+1})], such that if x_n² ≥ C_n we sample; otherwise we go to the next time instant.
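A quick Monte Carlo sketch makes the recurrence concrete: it estimates the thresholds C_n = E[max(x², C_{n+1})] for an i.i.d. N(0, σ²) sequence and then simulates the resulting policy, checking that it captures a larger E[x_τ²] than any fixed sampling time, which yields only σ². Sample sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2 = 10, 1.0

# Thresholds C[n] = E[max(x^2, C[n+1])], estimated over a fixed pool of
# squared N(0, sigma2) draws; C[N] = 0 since sampling is forced at n = N.
x2 = rng.normal(0.0, np.sqrt(sigma2), 200_000) ** 2
C = np.zeros(N + 1)
for n in range(N - 1, 0, -1):
    C[n] = np.mean(np.maximum(x2, C[n + 1]))

def run_policy(thresholds, trials=20_000):
    """Average captured x_tau^2 when sampling the first x_n with
    x_n^2 >= thresholds[n] (forced at n = N)."""
    caught = np.empty(trials)
    for t in range(trials):
        xs = rng.normal(0.0, np.sqrt(sigma2), N)
        for n in range(1, N + 1):
            if n == N or xs[n - 1] ** 2 >= thresholds[n]:
                caught[t] = xs[n - 1] ** 2
                break
    return caught.mean()

ev_threshold = run_policy(C)   # adaptive threshold policy
ev_fixed = sigma2              # any deterministic sampling time gives sigma^2
```

By construction the estimated thresholds are non-increasing in n, matching the time-varying decreasing policy derived analytically below.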

Multiple Samples

Having found the solution for one sampling instant, we can now generalize to multiple samples. Let us first look at the case of two samples. The minimal distortion for two samples satisfies:

    V²_{N−1}(x_{N−1}) = x_{N−1}² + E[V¹_N(x_N)] = x_{N−1}² + σ²                          for n = N − 1,
    V²_n(x_n) = max{ x_n² + E[V¹_{n+1}(x_{n+1})], E[V²_{n+1}(x_{n+1})] }                 for n = N − 2, ..., 1.

This means that if we decide to sample at time instant n, then in the time remaining until the end of the horizon the best we can do is E[V¹_{n+1}(x_{n+1})]. Thus, sampling occurs if the quantity x_n² + E[V¹_{n+1}(x_{n+1})] is greater than the expected optimum with both samples still available. Otherwise, we continue to the next time instant. Generalizing the above, we get:

    V^{M+1}_n(x_n) = max{ x_n² + E[V^M_{n+1}(x_{n+1})], E[V^{M+1}_{n+1}(x_{n+1})] }      for n = N − M, ..., 1.

I.I.D. Gaussian

We now derive and compute the integral equations of the sampling thresholds in the case of a zero-mean i.i.d. Gaussian sequence with variance σ². It is:

    C_{N−1} = E[V¹_N(x_N)] = E[x_N²] = σ²,

    C_{N−2} = E[V¹_{N−1}(x_{N−1})] = ∫ V¹_{N−1}(x) f(x) dx = ∫ max{x², σ²} f(x) dx
            = ∫_{x² > σ²} x² f(x) dx + σ² ∫_{x² < σ²} f(x) dx
            = σ² [ erfc(1/√2) + erf(1/√2) ] + σ² √(2/π) e^{−1/2}
            = σ² ( 1 + √(2/π) e^{−1/2} ),

and

    C_n = E[V¹_{n+1}(x_{n+1})] = ∫ V¹_{n+1}(x) f(x) dx = ∫ max{x², C_{n+1}} f(x) dx
        = ∫_{x² > C_{n+1}} x² f(x) dx + C_{n+1} ∫_{x² < C_{n+1}} f(x) dx
        = σ² erfc( √C_{n+1} / (σ√2) ) + σ √C_{n+1} √(2/π) e^{−C_{n+1}/(2σ²)} + C_{n+1} erf( √C_{n+1} / (σ√2) ),

where in the above integral calculations we used the substitution u = x/(σ√2), dx = σ√2 du, and f(x) is the well-known Gaussian pdf. Figure 2.5 depicts the Gaussian sampling thresholds derived from the analytic solution for N = 10 and N = 100, respectively. We see that the optimal policy is a time-varying threshold, which is decreasing in time.

Figure 2.5: Gaussian sampling thresholds for σ² = 1.
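The closed-form recursion above is easy to transcribe. The sketch below evaluates it for σ² = 1 and checks the first two steps against the values derived analytically; only standard-library functions are used.

```python
import math

# Transcription of the closed-form Gaussian threshold recursion:
#   C_n = s^2 erfc(t/(s sqrt(2))) + s t sqrt(2/pi) exp(-t^2/(2 s^2))
#         + C_{n+1} erf(t/(s sqrt(2))),   with t = sqrt(C_{n+1}).
def gaussian_thresholds(N, sigma2=1.0):
    C = [0.0] * (N + 1)              # C[N] unused: sampling is forced at n = N
    C[N - 1] = sigma2                # C_{N-1} = E[x_N^2] = sigma^2
    for n in range(N - 2, 0, -1):
        t = math.sqrt(C[n + 1])      # threshold level sqrt(C_{n+1})
        s = math.sqrt(sigma2)
        z = t / (s * math.sqrt(2.0))
        C[n] = (sigma2 * math.erfc(z)
                + s * t * math.sqrt(2.0 / math.pi) * math.exp(-t * t / (2.0 * sigma2))
                + C[n + 1] * math.erf(z))
    return C

C = gaussian_thresholds(10)
```

For N = 10 this reproduces C_{N−1} = σ² and C_{N−2} = σ²(1 + √(2/π) e^{−1/2}), and the thresholds decrease with time, as in Figure 2.5.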

Numerical Computation of the Sampling Thresholds

Although we were able to compute the analytic solution of the sampling thresholds in the Gaussian case, this is not feasible in general. For several distributions the analytic computation of the integral equations is very difficult, or even impossible. For this reason, we present here two different schemes for the numerical evaluation of the integral equations.

Probability Density Function Approximation

We need to compute the integral equation:

    C_n = ∫ V¹_{n+1}(x) f(x) dx.

We can replace the pdf f(x) by a vector

    f = [f(x_0), f(x_1), ..., f(x_L)],

where a = x_0 < x_1 < ... < x_L = b constitutes a sampling of the interval [a, b]. A similar sampling is applied to the function V¹_{n+1}, resulting in the vector

    V¹_{n+1} = [V¹_{n+1}(x_0), V¹_{n+1}(x_1), ..., V¹_{n+1}(x_L)]ᵀ.

Then, we can evaluate the integral by the following approximation:

    C_n ≈ Σ_{j=0}^{L} V¹_{n+1}(x_j) f(x_j) dx = A V¹_{n+1},

where A = dx · f and dx = (b − a)/L.

Cumulative Distribution Function Approximation

We can rewrite the integral equation as:

    C_n = ∫ V¹_{n+1}(x) dF(x),

where F(x) is the cdf, i.e. f(x) is the derivative of F(x), and let a = x_0 < x_1 < ... < x_L = b again be the sampling of the interval [a, b]. Then, we have the approximation:

    C_n ≈ (1/2) Σ_{j=1}^{L} (F(x_j) − F(x_{j−1})) (V¹_{n+1}(x_j) + V¹_{n+1}(x_{j−1}))
        = (1/2) Σ_{j=1}^{L} (F(x_j) − F(x_{j−1})) V¹_{n+1}(x_j) + (1/2) Σ_{j=1}^{L} (F(x_j) − F(x_{j−1})) V¹_{n+1}(x_{j−1})
        = (1/2) Σ_{j=1}^{L} (F(x_j) − F(x_{j−1})) V¹_{n+1}(x_j) + (1/2) Σ_{j=0}^{L−1} (F(x_{j+1}) − F(x_j)) V¹_{n+1}(x_j)
        = (1/2) (F(x_1) − F(x_0)) V¹_{n+1}(x_0) + (1/2) Σ_{j=1}^{L−1} (F(x_{j+1}) − F(x_{j−1})) V¹_{n+1}(x_j) + (1/2) (F(x_L) − F(x_{L−1})) V¹_{n+1}(x_L)
        = (1/2) [F(x_1) − F(x_0), F(x_2) − F(x_0), ..., F(x_L) − F(x_{L−2}), F(x_L) − F(x_{L−1})] V¹_{n+1}.

In Fig. 2.6, we compare the sampling thresholds obtained from the numerical approaches with those obtained from the analytic solution. We chose the sampling interval to be [a, b] = [−3, 3], because 99.7% of the values drawn from a normal distribution lie within three standard deviations of the mean (the 3-sigma rule). For a small number of samples, the cdf approximation gives an overestimate of the sampling thresholds, whereas for 50 samples and above it gives better results than the pdf approximation. In any case, the numerical results are very close to the analytic ones.
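Both quadrature schemes can be sketched in a few lines. The code below applies each of them to the single-sample recursion for the N(0, 1) case on the interval [−3, 3] used in the text; the grid resolution L = 600 is an illustrative assumption.

```python
import numpy as np
import math

# pdf scheme: C_n ≈ (dx * f) @ V;  cdf scheme: C_n ≈ w @ V with
# w = (1/2)[F1-F0, F2-F0, ..., F_L-F_{L-2}, F_L-F_{L-1}].
def thresholds(N, scheme, a=-3.0, b=3.0, L=600):
    x = np.linspace(a, b, L + 1)
    if scheme == "pdf":
        f = np.exp(-x * x / 2.0) / np.sqrt(2.0 * np.pi)   # N(0,1) pdf
        w = ((b - a) / L) * f                             # A = dx * f
    else:                                                 # "cdf"
        F = np.array([0.5 * (1.0 + math.erf(xi / math.sqrt(2.0))) for xi in x])
        w = np.empty(L + 1)
        w[0] = 0.5 * (F[1] - F[0])
        w[1:L] = 0.5 * (F[2:] - F[:L - 1])                # (F_{j+1} - F_{j-1})/2
        w[L] = 0.5 * (F[L] - F[L - 1])
    V = x ** 2                                            # V_N(x) = x^2
    C = np.zeros(N + 1)
    for n in range(N - 1, 0, -1):
        C[n] = w @ V                                      # C_n = E[V_{n+1}]
        V = np.maximum(x ** 2, C[n])                      # V_n(x)
    return C

C_pdf = thresholds(10, "pdf")
C_cdf = thresholds(10, "cdf")
```

At this resolution the two schemes agree closely, consistent with Fig. 2.6; truncating the integral at ±3σ costs only the 0.3% of mass outside the interval.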

Figure 2.6: Numerical Computations for 30, 50 and 100 samples.
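The multiple-sample recursion V^{M+1}_n = max{x_n² + E[V^M_{n+1}], E[V^{M+1}_{n+1}]} can be evaluated with the same quadrature idea. In the i.i.d. case every expectation is a constant, so each budget level reduces to a scalar backward pass; the sketch below does this for the N(0, 1) case with illustrative grid choices.

```python
import numpy as np

# EV[m, n] = E[V^m_n] for budget m and time n, filled backwards.
# With m samples and m slots remaining we are forced to sample every slot,
# which anchors the recursion at n = N - m + 1.
def multi_sample_EV(N, M, lim=3.0, L=600):
    x = np.linspace(-lim, lim, L + 1)
    dx = 2.0 * lim / L
    w = np.exp(-x * x / 2.0) / np.sqrt(2.0 * np.pi) * dx  # pdf quadrature
    EV = np.zeros((M + 1, N + 2))                         # EV[0, :] = 0
    for m in range(1, M + 1):
        for n in range(N - m + 1, 0, -1):
            if n == N - m + 1:
                V = x ** 2 + EV[m - 1, n + 1]             # forced to sample now
            else:
                V = np.maximum(x ** 2 + EV[m - 1, n + 1], EV[m, n + 1])
            EV[m, n] = w @ V                              # E[V^m_n]
    return EV

EV = multi_sample_EV(N=10, M=2)
# with budget m at time n, sample iff x_n^2 >= EV[m, n+1] - EV[m-1, n+1]
```

As expected, the attainable value grows with the budget and shrinks as the horizon runs out.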

Autoregressive Model of Order 1

We now assume that the stochastic sequence {x_n} follows an autoregressive model of order 1 (AR(1)). That is:

    x_n = a x_{n−1} + w_n,

where w_n is an i.i.d. Gaussian process with zero mean and variance σ_w². Moreover, we assume that the initial value x_0 is known, and for simplicity we set x_0 = 0. As previously, we first solve the joint problem of finding the optimal estimator and the optimal sampling policy for a single sampling instant, and then generalize to multiple samples. Note that we will use as estimator the optimal estimator for the case of deterministic sampling, and we will determine the optimal sampling policy for this estimator.

Optimal Estimator for the AR(1) Model

As we have already found in the i.i.d. case, the optimal estimator under deterministic sampling is:

    x̂_n = E[x_n].

For the AR(1) model, we can write the above as:

    E[x_n] = E[a x_{n−1} + w_n] = a E[x_{n−1}] + E[w_n] = a E[x_{n−1}],

since the mean value of the noise is zero. As can be seen, this is a recursive equation, which results in:

    x̂_n = aⁿ x_0 = 0          for n = 1, 2, ..., τ − 1,    and
    x̂_n = a^{n−τ} x_τ         for n = τ + 1, τ + 2, ..., N.
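A small Monte Carlo check confirms the predictor: after observing x_τ, the conditional mean of x_{τ+k} is a^k x_τ, since the noise means cancel. The particular values of a, x_τ and k below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate many AR(1) continuations from a fixed observed value x_tau and
# compare the empirical mean of x_{tau+k} with a^k x_tau.
a, sigma_w, x_tau, k = 0.5, 1.0, 2.0, 3
paths = np.full(100_000, x_tau)
for _ in range(k):
    paths = a * paths + rng.normal(0.0, sigma_w, paths.size)

empirical = paths.mean()
predicted = a ** k * x_tau        # \hat{x}_{tau+k} = a^k x_tau
```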

Optimal Sampling Policy for the AR(1)

To determine the optimal sampling policy for a single sample, as before, we want to find:

    min_τ  E[ (1/N) Σ_{n=1}^{N} (x_n − x̂_n)² ].

In the AR(1) case, this can be written as:

    e = min_τ  E[ Σ_{n=1}^{τ−1} x_n² + Σ_{n=τ}^{N} (x_n − x_τ a^{n−τ})² ]
      = min_τ  E[ Σ_{n=1}^{N} x_n² − 2 Σ_{n=τ}^{N} x_n x_τ a^{n−τ} + Σ_{n=τ}^{N} x_τ² a^{2(n−τ)} ].

Let us examine the term:

    E[ Σ_{n=τ}^{N} x_n x_τ a^{n−τ} ] = E[ Σ_{n=1}^{N} Σ_{k=1}^{n} x_n x_k a^{n−k} 1{τ = k} ]
                                     = Σ_{k=1}^{N} Σ_{n=k}^{N} E[ x_n x_k a^{n−k} 1{τ = k} ].

The random variable 1{τ = k} depends on x_1, x_2, ..., x_k. Thus, we can write:

    E[ x_n x_k a^{n−k} 1{τ = k} ] = E[ E[ x_n x_k a^{n−k} 1{τ = k} | x_1, ..., x_k ] ]
                                  = E[ E[x_n | x_1, ..., x_k] x_k a^{n−k} 1{τ = k} ]
                                  = E[ E[x_n | x_k] x_k a^{n−k} 1{τ = k} ]
                                  = E[ (a^{n−k} x_k) x_k a^{n−k} 1{τ = k} ]
                                  = E[ x_k² a^{2(n−k)} 1{τ = k} ],

where in the third equality we used the Markov property of the AR(1) model.

Hence, it is:

    E[ Σ_{n=τ}^{N} x_n x_τ a^{n−τ} ] = E[ Σ_{n=τ}^{N} x_τ² a^{2(n−τ)} ].

Substituting the above into e, we get:

    e = min_τ  E[ Σ_{n=1}^{N} x_n² − Σ_{n=τ}^{N} x_τ² a^{2(n−τ)} ]
      = min_τ  { E[ Σ_{n=1}^{N} x_n² ] − E[ Σ_{n=τ}^{N} x_τ² a^{2(n−τ)} ] },

and since E[Σ_{n=1}^{N} x_n²] is a constant, we end up with:

    e ≡ max_τ  E[ x_τ² (1 − a^{2(N−τ+1)}) / (1 − a²) ].

Thus, we have the following backward recurrence:

    V¹_N(x_N) = x_N² (1 − a²)/(1 − a²) = x_N²    for n = N,
    V¹_n(x_n) = max{ x_n² (1 − a^{2(N−n+1)})/(1 − a²), E[V¹_{n+1}(x_{n+1}) | x_n] }    for n = N − 1, N − 2, ..., 1.

We observe that C_n(x_n) = E[V¹_{n+1}(x_{n+1}) | x_n] is no longer a constant, but a deterministic function of x_n. Moreover, note that for a = 0 we recover the i.i.d. case. In Fig. 2.7, we plot the functions V_n and C_n for a time horizon N = 10. We have used the cdf approximation presented before on the sampling interval [−4, 4], and we have selected a = 0.5 and σ_x = σ_w = 1. An interesting observation is that the functions E[V¹_{n+1}(x_{n+1}) | x_n] are convex. Finally, in Fig. 2.8, we plot the sampling thresholds for different values of a. We observe that the sampling thresholds grow as the value of a increases.
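Because C_n is now a function of x_n, the recursion is naturally evaluated on a grid of states. The sketch below uses a simple pdf-quadrature transition kernel rather than the thesis's cdf scheme; the parameters a = 0.5 and σ_w = 1 mirror the simulation above, while the grid resolution and the truncation of the kernel to [−4, 4] are illustrative assumptions.

```python
import numpy as np

# Grid evaluation of the AR(1) single-sample recursion:
#   V_n(x) = max{ x^2 (1 - a^{2(N-n+1)})/(1 - a^2), C_n(x) },
#   C_n(x) = E[V_{n+1}(a x + w) | x],   w ~ N(0, sigma_w^2).
def ar1_value_functions(N=10, a=0.5, sigma_w=1.0, lim=4.0, L=400):
    x = np.linspace(-lim, lim, L + 1)
    dx = x[1] - x[0]
    # K[i, j] ≈ P(x_{n+1} near x_j | x_n = x_i): Gaussian kernel times dx
    diff = x[None, :] - a * x[:, None]
    K = np.exp(-diff ** 2 / (2 * sigma_w ** 2)) / np.sqrt(2 * np.pi * sigma_w ** 2) * dx
    g = lambda n: (1 - a ** (2 * (N - n + 1))) / (1 - a ** 2)
    V = x ** 2 * g(N)                      # V_N(x) = x^2
    C = {}
    for n in range(N - 1, 0, -1):
        C[n] = K @ V                       # continuation function C_n(x)
        V = np.maximum(x ** 2 * g(n), C[n])
    return x, C

x, C = ar1_value_functions()
```

Consistent with Fig. 2.7, each C_n is symmetric, positive and increasing in |x| near the origin.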

Figure 2.7: V_n and C_n for N = 10 and a = 0.5.

Figure 2.8: Sampling thresholds of the AR(1) for different values of a.

Now we are able to find the optimal sampling policy for multiple samples. The minimal distortion for two samples satisfies:

For n = N − 1:

    V²_{N−1}(x_{N−1}) = x_{N−1}² (1 − a^{2((N−1)−(N−1)+1)})/(1 − a²) + E[V¹_N(x_N) | x_{N−1}]
                      = x_{N−1}² + E[x_N² | x_{N−1}]
                      = x_{N−1}² + a² x_{N−1}² + σ_w².

For n = N − 2, ..., 1:

    V²_n(x_n) = max{ x_n² (1 − a^{2((N−1)−n+1)})/(1 − a²) + E[V¹_{n+1}(x_{n+1}) | x_n], E[V²_{n+1}(x_{n+1}) | x_n] }.

The only thing that changes is the term that multiplies x_n²: the effective horizon in the exponent is decreased by 1, due to the fact that if we have not transmitted any sample by time instant N − 1, then we have to sample at N − 1.

Generalizing the above, we get:

    V^{M+1}_n(x_n) = max{ x_n² (1 − a^{2((N−M)−n+1)})/(1 − a²) + E[V^M_{n+1}(x_{n+1}) | x_n], E[V^{M+1}_{n+1}(x_{n+1}) | x_n] }

for n = N − M, ..., 1.
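The generalized recursion can also be evaluated on a grid. In the sketch below, a budget of m remaining samples uses the shortened horizon N − m + 1 in the immediate-sampling gain, i.e. g(m, n) = (1 − a^{2(N−m+2−n)})/(1 − a²), which reduces to the single-sample and two-sample expressions above for m = 1, 2. The pdf-quadrature kernel and grid choices are illustrative assumptions.

```python
import numpy as np

# Backward sweep over time n, carrying V^m for every remaining budget m.
# Budget m is forced to sample when only m slots remain (n = N - m + 1).
def ar1_multi_sample(N=10, M=2, a=0.5, sigma_w=1.0, lim=4.0, L=400):
    x = np.linspace(-lim, lim, L + 1)
    dx = x[1] - x[0]
    diff = x[None, :] - a * x[:, None]      # kernel of x_{n+1} given x_n
    K = np.exp(-diff ** 2 / (2 * sigma_w ** 2)) / np.sqrt(2 * np.pi * sigma_w ** 2) * dx
    g = lambda m, n: (1 - a ** (2 * (N - m + 2 - n))) / (1 - a ** 2)
    V = {0: np.zeros(L + 1)}                # V[m] = V^m at the time just processed
    for n in range(N, 0, -1):
        newV = {0: np.zeros(L + 1)}
        for m in range(1, M + 1):
            if n > N - m + 1:               # m samples no longer fit in the horizon
                continue
            immediate = x ** 2 * g(m, n) + K @ V[m - 1]
            if n == N - m + 1:
                newV[m] = immediate         # forced to sample now
            else:
                newV[m] = np.maximum(immediate, K @ V[m])
        V = newV
    return x, V

# Sanity check on a tiny horizon: with N = M = 2 both slots must be used,
# so V^2_1(0) = E[x_2^2 | x_1 = 0] = sigma_w^2 (up to quadrature error).
x, V = ar1_multi_sample(N=2, M=2)
```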

Chapter 3

Conclusions and Future Research

We have discussed the Lebesgue sampling strategy and its beneficial performance and applicability in NCS. Moreover, we have furnished methods to obtain good sampling policies for the finite horizon filtering problem. We have derived the analytic solution for the sampling thresholds when the stochastic sequence to be tracked is an i.i.d. Gaussian process. We also provided numerical methods to determine the best sampling policies and their performance for other possible distributions. Finally, we treated the case of an autoregressive model of order 1. We conclude that the optimal strategy for minimizing the mean-square estimation error is an event-triggered, time-varying threshold sampling policy.

One possible extension of the problem would be the case where the sensor has access only to noisy observations of the signal instead of perfect observations. That is:

    x_n = a x_{n−1} + w_n,
    y_n = b x_n + v_n,

and we want to estimate x_n from the noisy observations y_n. Another set of unanswered questions involves the performance of the aforementioned

sampling policies when we have multiple sensors. Then, we have:

    X_n = A X_{n−1} + W_n,
    Y_n = B X_n + V_n.

In this case, we have two possible sampling policies: 1) sample all components of Y_n at the same time (the whole vector Y_n), or 2) sample each component of Y_n independently from the others; in other words, employ a separate sampling mechanism for each component. Finally, another possible extension is the case where the samples are not reliably transmitted but may be lost in transmission.

Bibliography

[1] Karl Johan Åström and Bo M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 2, pages 2011–2016, 2002.

[2] Panos Antsaklis and John Baillieul. Special issue on technology of networked control systems. Proceedings of the IEEE, 95(1):9–28, 2007.

[3] João P. Hespanha, Payam Naghshtabrizi, and Yonggang Xu. A survey of recent results in networked control systems. Proceedings of the IEEE, 95(1):138–162, 2007.

[4] Orhan C. Imer and Tamer Basar. Optimal estimation with limited measurements. International Journal of Systems, Control and Communications, 2:5–29, 2010.

[5] Ernesto Kofman and Julio H. Braslavsky. Level crossing sampling in feedback stabilization under data-rate constraints. In Proceedings of the IEEE Conference on Decision and Control, pages 4423–4428, 2006.

[6] Goran Peskir and Albert Shiryaev. Optimal Stopping and Free-Boundary Problems. Birkhäuser, 2006.

[7] Maben Rabi, George V. Moustakides, and John S. Baras. Adaptive sampling for linear state estimation. Accepted for publication in the SIAM Journal on Control and Optimization.

[8] Maben Rabi, George V. Moustakides, and John S. Baras. Multiple sampling for

estimation on a finite horizon. In Proceedings of the 45th IEEE Conference on Decision and Control (CDC), San Diego, USA, December 2006.

[9] H. L. Royden. Real Analysis. Macmillan Publishing Company.

[10] J. C. So. Delay modeling and controller design for networked control systems. Master's thesis, University of Toronto.

[11] Wei Zhang, Michael S. Branicky, and Stephen M. Phillips. Stability of networked control systems. IEEE Control Systems Magazine, 21(1):84–99, 2001.

[12] Yun-Bo Zhao. Packet-Based Control for Networked Control Systems. PhD thesis, University of Glamorgan, 2008.


More information

Chapter 9 Robust Stability in SISO Systems 9. Introduction There are many reasons to use feedback control. As we have seen earlier, with the help of a

Chapter 9 Robust Stability in SISO Systems 9. Introduction There are many reasons to use feedback control. As we have seen earlier, with the help of a Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 9 Robust

More information

ME 132, Dynamic Systems and Feedback. Class Notes. Spring Instructor: Prof. A Packard

ME 132, Dynamic Systems and Feedback. Class Notes. Spring Instructor: Prof. A Packard ME 132, Dynamic Systems and Feedback Class Notes by Andrew Packard, Kameshwar Poolla & Roberto Horowitz Spring 2005 Instructor: Prof. A Packard Department of Mechanical Engineering University of California

More information

hapter 8 Simulation/Realization 8 Introduction Given an nth-order state-space description of the form x_ (t) = f (x(t) u(t) t) (state evolution equati

hapter 8 Simulation/Realization 8 Introduction Given an nth-order state-space description of the form x_ (t) = f (x(t) u(t) t) (state evolution equati Lectures on Dynamic Systems and ontrol Mohammed Dahleh Munther Dahleh George Verghese Department of Electrical Engineering and omputer Science Massachuasetts Institute of Technology c hapter 8 Simulation/Realization

More information

Environment (E) IBP IBP IBP 2 N 2 N. server. System (S) Adapter (A) ACV

Environment (E) IBP IBP IBP 2 N 2 N. server. System (S) Adapter (A) ACV The Adaptive Cross Validation Method - applied to polling schemes Anders Svensson and Johan M Karlsson Department of Communication Systems Lund Institute of Technology P. O. Box 118, 22100 Lund, Sweden

More information

Packet-loss Dependent Controller Design for Networked Control Systems via Switched System Approach

Packet-loss Dependent Controller Design for Networked Control Systems via Switched System Approach Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 8 WeC6.3 Packet-loss Dependent Controller Design for Networked Control Systems via Switched System Approach Junyan

More information

80DB A 40DB 0DB DB

80DB A 40DB 0DB DB Stability Analysis On the Nichols Chart and Its Application in QFT Wenhua Chen and Donald J. Ballance Centre for Systems & Control Department of Mechanical Engineering University of Glasgow Glasgow G12

More information

AN EVENT-TRIGGERED TRANSMISSION POLICY FOR NETWORKED L 2 -GAIN CONTROL

AN EVENT-TRIGGERED TRANSMISSION POLICY FOR NETWORKED L 2 -GAIN CONTROL 4 Journal of Marine Science and echnology, Vol. 3, No., pp. 4-9 () DOI:.69/JMS-3-3-3 AN EVEN-RIGGERED RANSMISSION POLICY FOR NEWORKED L -GAIN CONROL Jenq-Lang Wu, Yuan-Chang Chang, Xin-Hong Chen, and su-ian

More information

On Separation Principle for a Class of Networked Control Systems

On Separation Principle for a Class of Networked Control Systems On Separation Principle for a Class of Networked Control Systems Dongxiao Wu Jun Wu and Sheng Chen Abstract In this contribution we investigate a class of observer-based discrete-time networked control

More information

Examples. 2-input, 1-output discrete-time systems: 1-input, 1-output discrete-time systems:

Examples. 2-input, 1-output discrete-time systems: 1-input, 1-output discrete-time systems: Discrete-Time s - I Time-Domain Representation CHAPTER 4 These lecture slides are based on "Digital Signal Processing: A Computer-Based Approach, 4th ed." textbook by S.K. Mitra and its instructor materials.

More information

Information Structures, the Witsenhausen Counterexample, and Communicating Using Actions

Information Structures, the Witsenhausen Counterexample, and Communicating Using Actions Information Structures, the Witsenhausen Counterexample, and Communicating Using Actions Pulkit Grover, Carnegie Mellon University Abstract The concept of information-structures in decentralized control

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

Chapter 30 Minimality and Stability of Interconnected Systems 30.1 Introduction: Relating I/O and State-Space Properties We have already seen in Chapt

Chapter 30 Minimality and Stability of Interconnected Systems 30.1 Introduction: Relating I/O and State-Space Properties We have already seen in Chapt Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

Optimal Communication Logics in Networked Control Systems

Optimal Communication Logics in Networked Control Systems Optimal Communication Logics in Networked Control Systems Yonggang Xu João P. Hespanha Dept. of Electrical and Computer Eng., Univ. of California, Santa Barbara, CA 9306 Abstract This paper addresses the

More information

Comparison of Periodic and Event Based Sampling for First-Order Stochastic Systems

Comparison of Periodic and Event Based Sampling for First-Order Stochastic Systems Comparison of Periodic and Event Based Sampling for First-Order Stochastic Systems Bernhardsson, Bo; Åström, Karl Johan Published: 1999-1-1 Link to publication Citation for published version (APA): Bernhardsson,

More information

Chapter 5 A Modified Scheduling Algorithm for The FIP Fieldbus System

Chapter 5 A Modified Scheduling Algorithm for The FIP Fieldbus System Chapter 5 A Modified Scheduling Algorithm for The FIP Fieldbus System As we stated before FIP is one of the fieldbus systems, these systems usually consist of many control loops that communicate and interact

More information

Change-point models and performance measures for sequential change detection

Change-point models and performance measures for sequential change detection Change-point models and performance measures for sequential change detection Department of Electrical and Computer Engineering, University of Patras, 26500 Rion, Greece moustaki@upatras.gr George V. Moustakides

More information

Rate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801

Rate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801 Rate-Distortion Based Temporal Filtering for Video Compression Onur G. Guleryuz?, Michael T. Orchard y? University of Illinois at Urbana-Champaign Beckman Institute, 45 N. Mathews Ave., Urbana, IL 68 y

More information

P e = 0.1. P e = 0.01

P e = 0.1. P e = 0.01 23 10 0 10-2 P e = 0.1 Deadline Failure Probability 10-4 10-6 10-8 P e = 0.01 10-10 P e = 0.001 10-12 10 11 12 13 14 15 16 Number of Slots in a Frame Fig. 10. The deadline failure probability as a function

More information

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun AT&T Labs-Research 100 Schultz Drive, Red Bank, NJ 07701-7033

More information

CDS 270-2: Lecture 6-1 Towards a Packet-based Control Theory

CDS 270-2: Lecture 6-1 Towards a Packet-based Control Theory Goals: CDS 270-2: Lecture 6-1 Towards a Packet-based Control Theory Ling Shi May 1 2006 - Describe main issues with a packet-based control system - Introduce common models for a packet-based control system

More information

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v 250) Contents 2 Vector Spaces 1 21 Vectors in R n 1 22 The Formal Denition of a Vector Space 4 23 Subspaces 6 24 Linear Combinations and

More information

University of California Department of Mechanical Engineering ECE230A/ME243A Linear Systems Fall 1999 (B. Bamieh ) Lecture 3: Simulation/Realization 1

University of California Department of Mechanical Engineering ECE230A/ME243A Linear Systems Fall 1999 (B. Bamieh ) Lecture 3: Simulation/Realization 1 University of alifornia Department of Mechanical Engineering EE/ME Linear Systems Fall 999 ( amieh ) Lecture : Simulation/Realization Given an nthorder statespace description of the form _x(t) f (x(t)

More information

CONTINUOUS TIME D=0 ZOH D 0 D=0 FOH D 0

CONTINUOUS TIME D=0 ZOH D 0 D=0 FOH D 0 IDENTIFICATION ASPECTS OF INTER- SAMPLE INPUT BEHAVIOR T. ANDERSSON, P. PUCAR and L. LJUNG University of Linkoping, Department of Electrical Engineering, S-581 83 Linkoping, Sweden Abstract. In this contribution

More information

Trajectory planning and feedforward design for electromechanical motion systems version 2

Trajectory planning and feedforward design for electromechanical motion systems version 2 2 Trajectory planning and feedforward design for electromechanical motion systems version 2 Report nr. DCT 2003-8 Paul Lambrechts Email: P.F.Lambrechts@tue.nl April, 2003 Abstract This report considers

More information

MOST control systems are designed under the assumption

MOST control systems are designed under the assumption 2076 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 53, NO. 9, OCTOBER 2008 Lyapunov-Based Model Predictive Control of Nonlinear Systems Subject to Data Losses David Muñoz de la Peña and Panagiotis D. Christofides

More information

Controlo Switched Systems: Mixing Logic with Differential Equations. João P. Hespanha. University of California at Santa Barbara.

Controlo Switched Systems: Mixing Logic with Differential Equations. João P. Hespanha. University of California at Santa Barbara. Controlo 00 5 th Portuguese Conference on Automatic Control University of Aveiro,, September 5-7, 5 00 Switched Systems: Mixing Logic with Differential Equations João P. Hespanha University of California

More information

Lecture 11: Continuous-valued signals and differential entropy

Lecture 11: Continuous-valued signals and differential entropy Lecture 11: Continuous-valued signals and differential entropy Biology 429 Carl Bergstrom September 20, 2008 Sources: Parts of today s lecture follow Chapter 8 from Cover and Thomas (2007). Some components

More information

Optimal Rejuvenation for. Tolerating Soft Failures. Andras Pfening, Sachin Garg, Antonio Puliato, Miklos Telek, Kishor S. Trivedi.

Optimal Rejuvenation for. Tolerating Soft Failures. Andras Pfening, Sachin Garg, Antonio Puliato, Miklos Telek, Kishor S. Trivedi. Optimal Rejuvenation for Tolerating Soft Failures Andras Pfening, Sachin Garg, Antonio Puliato, Miklos Telek, Kishor S. Trivedi Abstract In the paper we address the problem of determining the optimal time

More information

BECAS de PROYECTOS CONTROL DE SISTEMAS A TRAVÉS DE REDES DE COMUNICACIÓN.

BECAS de PROYECTOS CONTROL DE SISTEMAS A TRAVÉS DE REDES DE COMUNICACIÓN. BECAS de PROYECTOS CONTROL DE SISTEMAS A TRAVÉS DE REDES DE COMUNICACIÓN. Perfil: Ingeniero Industrial, Telecomunicación, Automática y Electrónica Industrial, con proyecto Fin de Carrera terminado. Entregar

More information

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu

More information

Capacity of a Two-way Function Multicast Channel

Capacity of a Two-way Function Multicast Channel Capacity of a Two-way Function Multicast Channel 1 Seiyun Shin, Student Member, IEEE and Changho Suh, Member, IEEE Abstract We explore the role of interaction for the problem of reliable computation over

More information

Optimal Decentralized Control of Coupled Subsystems With Control Sharing

Optimal Decentralized Control of Coupled Subsystems With Control Sharing IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 58, NO. 9, SEPTEMBER 2013 2377 Optimal Decentralized Control of Coupled Subsystems With Control Sharing Aditya Mahajan, Member, IEEE Abstract Subsystems that

More information

Stochastic Hybrid Systems: Modeling, analysis, and applications to networks and biology

Stochastic Hybrid Systems: Modeling, analysis, and applications to networks and biology research supported by NSF Stochastic Hybrid Systems: Modeling, analysis, and applications to networks and biology João P. Hespanha Center for Control Engineering and Computation University of California

More information

Feng Lin. Abstract. Inspired by thewell-known motto of Henry David Thoreau [1], that government

Feng Lin. Abstract. Inspired by thewell-known motto of Henry David Thoreau [1], that government That Supervisor Is Best Which Supervises Least Feng Lin Department of Electrical and Computer Engineering Wayne State University, Detroit, MI 48202 Abstract Inspired by thewell-known motto of Henry David

More information

Overview of the Seminar Topic

Overview of the Seminar Topic Overview of the Seminar Topic Simo Särkkä Laboratory of Computational Engineering Helsinki University of Technology September 17, 2007 Contents 1 What is Control Theory? 2 History

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 9, SEPTEMBER

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 9, SEPTEMBER IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 9, SEPTEMBER 2012 4509 Cooperative Sequential Spectrum Sensing Based on Level-Triggered Sampling Yasin Yilmaz,StudentMember,IEEE, George V. Moustakides,SeniorMember,IEEE,and

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition

DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition R. G. Gallager January 31, 2011 i ii Preface These notes are a draft of a major rewrite of a text [9] of the same name. The notes and the text are outgrowths

More information

STA 414/2104: Machine Learning

STA 414/2104: Machine Learning STA 414/2104: Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistics! rsalakhu@cs.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 9 Sequential Data So far

More information

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 Capacity Theorems for Discrete, Finite-State Broadcast Channels With Feedback and Unidirectional Receiver Cooperation Ron Dabora

More information

UNIVERSITY of CALIFORNIA Santa Barbara. Communication scheduling methods for estimation over networks

UNIVERSITY of CALIFORNIA Santa Barbara. Communication scheduling methods for estimation over networks UNIVERSITY of CALIFORNIA Santa Barbara Communication scheduling methods for estimation over networks A Dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy

More information

Stochastic dominance with imprecise information

Stochastic dominance with imprecise information Stochastic dominance with imprecise information Ignacio Montes, Enrique Miranda, Susana Montes University of Oviedo, Dep. of Statistics and Operations Research. Abstract Stochastic dominance, which is

More information

1 Modelling and Simulation

1 Modelling and Simulation 1 Modelling and Simulation 1.1 Introduction This course teaches various aspects of computer-aided modelling for the performance evaluation of computer systems and communication networks. The performance

More information

Elec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis

Elec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Elec461 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Dr. D. S. Taubman May 3, 011 In this last chapter of your notes, we are interested in the problem of nding the instantaneous

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

Chapter Stability Robustness Introduction Last chapter showed how the Nyquist stability criterion provides conditions for the stability robustness of

Chapter Stability Robustness Introduction Last chapter showed how the Nyquist stability criterion provides conditions for the stability robustness of Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter Stability

More information

CHAPTER 8 Viterbi Decoding of Convolutional Codes

CHAPTER 8 Viterbi Decoding of Convolutional Codes MIT 6.02 DRAFT Lecture Notes Fall 2011 (Last update: October 9, 2011) Comments, questions or bug reports? Please contact hari at mit.edu CHAPTER 8 Viterbi Decoding of Convolutional Codes This chapter describes

More information

RECENT advances in technology have led to increased activity

RECENT advances in technology have led to increased activity IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 49, NO 9, SEPTEMBER 2004 1549 Stochastic Linear Control Over a Communication Channel Sekhar Tatikonda, Member, IEEE, Anant Sahai, Member, IEEE, and Sanjoy Mitter,

More information

Anytime Capacity of the AWGN+Erasure Channel with Feedback. Qing Xu. B.S. (Beijing University) 1997 M.S. (University of California at Berkeley) 2000

Anytime Capacity of the AWGN+Erasure Channel with Feedback. Qing Xu. B.S. (Beijing University) 1997 M.S. (University of California at Berkeley) 2000 Anytime Capacity of the AWGN+Erasure Channel with Feedback by Qing Xu B.S. (Beijing University) 1997 M.S. (University of California at Berkeley) 2000 A dissertation submitted in partial satisfaction of

More information

Chapter 9 Observers, Model-based Controllers 9. Introduction In here we deal with the general case where only a subset of the states, or linear combin

Chapter 9 Observers, Model-based Controllers 9. Introduction In here we deal with the general case where only a subset of the states, or linear combin Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 9 Observers,

More information

I 2 (t) R (t) R 1 (t) = R 0 (t) B 1 (t) R 2 (t) B b (t) = N f. C? I 1 (t) R b (t) N b. Acknowledgements

I 2 (t) R (t) R 1 (t) = R 0 (t) B 1 (t) R 2 (t) B b (t) = N f. C? I 1 (t) R b (t) N b. Acknowledgements Proc. 34th Allerton Conf. on Comm., Cont., & Comp., Monticello, IL, Oct., 1996 1 Service Guarantees for Window Flow Control 1 R. L. Cruz C. M. Okino Department of Electrical & Computer Engineering University

More information

Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models

Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models Statistical regularity Properties of relative frequency

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Common Knowledge and Sequential Team Problems

Common Knowledge and Sequential Team Problems Common Knowledge and Sequential Team Problems Authors: Ashutosh Nayyar and Demosthenis Teneketzis Computer Engineering Technical Report Number CENG-2018-02 Ming Hsieh Department of Electrical Engineering

More information

Queueing Theory and Simulation. Introduction

Queueing Theory and Simulation. Introduction Queueing Theory and Simulation Based on the slides of Dr. Dharma P. Agrawal, University of Cincinnati and Dr. Hiroyuki Ohsaki Graduate School of Information Science & Technology, Osaka University, Japan

More information